AI is reshaping payment systems — but without ‘handmade’ finance, trust is a pipe dream

Grace McNicholas
February 5, 2026
5 minutes

AI is becoming foundational to modern payment infrastructure, enabling massive computational problems to be solved in a fraction of the time previously required. Speed and scale are now competitive advantages. But as these systems grow more powerful, it’s worth remembering that optimisation is not the same as intelligence.

 

The same optimisation engines that underpin fraud prevention can also be used to probe, model and stress-test financial systems. As a result, efficiency may be accelerating faster than security, and faster than institutions’ ability to explain or govern automated decisions. Trust and human oversight are poised to become premium features of ‘handmade’ finance.

 

FICO joins the optimisation arms race

FICO is part of this broader industry shift. This month, the global analytics software provider announced the launch of Xpress 9.8, an optimisation suite designed to prioritise speed, scale and real-time decision-making. Built to handle extremely large, dense computational problems, the platform uses a GPU-accelerated hybrid gradient algorithm that carries lower memory overhead and outperforms traditional CPU-based approaches.
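FICO has not published the mathematics behind the ‘hybrid gradient’ label, so the sketch below is only an illustration of the general family that phrase points to: first-order primal-dual methods whose every step is a matrix-vector product or a simple projection, exactly the kind of arithmetic that parallelises well on GPUs and avoids the memory-hungry factorisations of classical solvers. The toy problem and NumPy code are illustrative, not FICO’s.

```python
# A minimal sketch of a primal-dual hybrid gradient (PDHG) iteration for a
# linear programme of the form  min c@x  s.t.  A@x = b, x >= 0.
# It shows why this family of methods suits GPUs: every step is a
# matrix-vector product or an elementwise projection, with no factorisations
# to hold in memory. This is an illustration, not FICO's implementation.
import numpy as np

def pdhg_lp(c, A, b, iters=5000):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)      # primal and dual iterates
    step = 0.9 / np.linalg.norm(A, 2)    # tau = sigma, with tau*sigma*||A||^2 < 1
    for _ in range(iters):
        x_new = np.maximum(x - step * (c - A.T @ y), 0.0)   # gradient step, then project onto x >= 0
        y = y + step * (b - A @ (2.0 * x_new - x))           # dual ascent at the extrapolated point
        x = x_new
    return x

# Toy problem:  min -x1 - 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = (0, 1))
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(pdhg_lp(c, A, b))   # approximately [0. 1.]
```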

 

The promise is clear: faster automation across core financial workflows, including fraud detection and risk optimisation. But as these tools become more widely deployed across financial institutions, the same optimisation techniques are also becoming easier for criminal actors to access, directly or indirectly, as they seek to understand system behaviour.

 

“The ecosystem has quickly embraced GPU solvers, and by leveraging NVIDIA CUDA-X libraries, FICO Xpress 9.8 is helping customers achieve breakthroughs in speed and scale,” said Alex Fender, Director of Decision Optimization at NVIDIA.

 

What’s the catch?

 

Optimisation tools like Xpress 9.8 help organisations calculate the “best possible move” when faced with millions, or even billions, of variables. For cybersecurity and fraud teams, this represents a meaningful advance in detecting anomalous behaviour and modelling risk.

 

The catch is structural rather than moral. Optimisation tools don’t just solve problems; they expose constraints. Once those constraints are visible, they can be tested, mapped and, in some cases, gamed. What changes with platforms like Xpress 9.8 isn’t criminal intent—it’s scale. The ability to run vast what-if simulations cheaply and repeatedly makes certain forms of abuse economically viable in ways that were previously impractical.
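To make that concrete, here is a deliberately toy sketch. The scoring rule, thresholds and function names below are invented; the point is simply that when an automated decision can be queried cheaply and repeatedly, a handful of what-if probes is enough to recover the hidden constraint behind it.

```python
# Hypothetical sketch: cheap, repeated "what-if" queries can map a hidden
# decision boundary. The scoring rule below is invented for illustration;
# no real system is being described.

def decision(amount: float, hour: int) -> bool:
    """Stand-in for an opaque automated approve/decline decision."""
    risk = amount / 5000 + (0.3 if hour < 6 else 0.0)   # invented rule
    return risk < 1.0                                   # approve while risk stays low

def find_threshold(hour: int, lo: float = 0.0, hi: float = 20_000.0) -> float:
    """Binary-search the largest approved amount for a given hour."""
    for _ in range(30):              # ~30 probes pin the boundary down to pennies
        mid = (lo + hi) / 2
        if decision(mid, hour):
            lo = mid                 # still approved: push higher
        else:
            hi = mid                 # declined: back off
    return lo

print(round(find_threshold(hour=14)))   # ~5000: the daytime approval ceiling
print(round(find_threshold(hour=3)))    # ~3500: the tighter overnight ceiling
```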

 

In this context, faster optimisation doesn’t automatically tilt the balance toward defenders. It compresses the time between discovery and exploitation.

 

Trust is the real bottleneck

 

As optimisation technology advances, trust is emerging as the limiting factor. Kurt Peterson, senior vice president at Camunda, highlights this in the company’s 2026 State of Agentic Orchestration and Automation report:

“The promise of agentic AI is undeniable, but trust remains the key barrier to adoption.”

 

Donut chart: perceptions of AI without agentic orchestration.[1]

A premium on ‘handmade’ finance

The result is a paradox. As financial services become more automated, demand for visible human involvement will increase. When AI systems are ubiquitous, and accessible to both legitimate institutions and criminal actors, trust becomes a differentiator rather than a byproduct.

 

In payments, the door has been opened for a premium tier of “handmade” services: built on the same AI stacks, but with earlier human checkpoints, clearer accountability, and slower, more legible decision paths. In an optimisation arms race, restraint and transparency may prove to be competitive advantages rather than inefficiencies.
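What that could look like in practice is still an open question, but as a rough, hypothetical sketch (the thresholds, types and field names are invented, not any vendor’s API), a payments pipeline might auto-approve only low-value, high-confidence decisions and park everything else at a human checkpoint with a plain-language audit line.

```python
# Illustrative sketch of an "earlier human checkpoint": automated approval
# only for low-value, high-confidence payments; anything large or uncertain
# is held for a human reviewer with a legible reason attached.
# Thresholds, types and field names are invented for this example.
from dataclasses import dataclass

@dataclass
class Payment:
    payment_id: str
    amount: float
    model_score: float   # 0.0 (clearly fraudulent) .. 1.0 (clearly fine)

def route(p: Payment, value_limit: float = 10_000, confidence: float = 0.9) -> str:
    if p.amount <= value_limit and p.model_score >= confidence:
        return f"{p.payment_id}: auto-approved (score {p.model_score:.2f})"
    # Human checkpoint: slower, but the decision path stays legible.
    return (f"{p.payment_id}: held for human review "
            f"(amount {p.amount:.2f}, score {p.model_score:.2f})")

print(route(Payment("p-001", 220.0, 0.97)))     # auto-approved
print(route(Payment("p-002", 48_000.0, 0.95)))  # held: high value
print(route(Payment("p-003", 90.0, 0.62)))      # held: low confidence
```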

 

[1] Based on data from Camunda’s ‘State of Agentic Orchestration and Automation 2026’ report.
