With AI-driven retail traffic soaring – even as the stock market AI bubble is bursting – where will payments and banking AI go next? Helped by industry experts, we separate the hype from the happening.
The value of AI firms on the NYSE is dropping. Consumer use of AI in commerce is soaring. More fintechs are deploying AI – but so are more fraudsters. Another day, another contradictory claim. Will the real AI please stand up?
One thing’s for sure: AI has been the victim of industry hype, much like NFTs before it and, long ago, the dot-com bubble. Indeed, in recent remarks Scott Dawson, CEO of DECTA UK, makes an explicit comparison between AI and the “dot bomb” era, noting that “the AI bubble is seventeen times bigger than the dot-com bust of the early 2000s.”
Just as the dot-com bust acted as a filter, taking out companies that weren’t viable, Dawson argues the same will happen with AI. However, Dawson adds that the AI bubble is mainly about OpenAI and Nvidia – and calls on the AI industry to “get real”, deriding “claims that AI can create new forms of ‘vibe physics’ or that superhuman intelligence will arrive to be our ‘loving parent’ within the next five years” as “beneath the dignity of multibillion-dollar company CEOs.”
That said, Dawson does recognise AI’s usefulness in identifying fraud patterns in payments, among other use cases. Which is helpful, since fraudsters themselves appear to be using AI more and more. Husnain Bajwa, SVP of Risk Solutions at SEON, says 2026 will see a shift in the balance, with fraudsters using AI more frequently – and in a more sophisticated manner – than many legitimate organisations.
According to Bajwa, “We’ve entered an era of agentic and adversarial AI, where systems plan fraud, act and adapt without human input. What once took coordinated human effort can now be done by autonomous agents and this will really change the game. The result is a new breed of fraud that is persistent, contextual and difficult to distinguish from legitimate activity.”
Not great news. Arguably worse news comes from the apparently huge consumer appetite for AI-driven shopping. According to a recent survey from PSE Consulting, nearly half of Brits use AI tools regularly, and almost one in four UK citizens plan to use AI to shop for Christmas gifts, rising to almost 50% among younger, higher-income groups. While the research doesn’t say whether these consumers are using agents to purchase, rather than simply to search for goods, the idea that they might be shopping with agents while agentic commerce remains unregulated and fraud goes berserk is, to understate the case, concerning.
Apart from fraud detection, there are many other uses of AI that aren’t connected to the purchase process and could deliver real benefit. Take, for instance, the capacity of AI-driven software engines to update APIs and read documents and contracts – meaning that the need for API compatibility between organisations sharing data and information could become a thing of the past by 2030.
Sticking with 2026, the financial services sector will find that it needs to change the way it uses AI. The truth is that some of the sexier AI use cases require deep transformation that many banks would struggle to implement, given that they are hamstrung by legacy tech foundations and poor-quality data.
As Nelson Wootton, CEO and Co-Founder at SaaScada, puts it, “by identifying suitable use cases for AI, rather than simply treating it as a hammer without a nail, innovative banks will transform their ability to work effectively in 2026.”
Precisely. Some of the biggest opportunities for AI lie in areas such as compliance (embedding AI in risk detection during the onboarding process, to stop fraud before a single payment is made) and in pairing AI with human oversight to identify and resolve anomalies rapidly. Perhaps, then, some banks should reflect that an AI implementation is for the long term – and not just for Christmas.
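To make that onboarding example concrete, here is a minimal sketch, in Python, of the pattern described above: combining a model’s risk score with simple rule signals and routing borderline applicants to a human reviewer before any payment is enabled. All field names, weights and thresholds are hypothetical illustrations, not any vendor’s actual system or API.

```python
# Illustrative sketch only -- hypothetical fields, weights and thresholds,
# not a production fraud model or any vendor's implementation.
from dataclasses import dataclass


@dataclass
class OnboardingApplication:
    applicant_id: str
    document_mismatch: bool   # ID details don't match the application form
    disposable_email: bool    # throwaway email domain detected
    velocity_hits: int        # prior applications from the same device/IP
    model_score: float        # ML risk score, 0.0 (low risk) to 1.0 (high risk)


def assess_onboarding_risk(app: OnboardingApplication) -> str:
    """Blend rule signals with a model score and decide a route."""
    rule_flags = sum([app.document_mismatch,
                      app.disposable_email,
                      app.velocity_hits > 2])
    risk = min(1.0, app.model_score + 0.15 * rule_flags)

    if risk >= 0.8:
        return "reject"         # blocked before a single payment is made
    if risk >= 0.4:
        return "human_review"   # AI paired with human oversight for the grey area
    return "approve"


# A borderline applicant is routed to a human analyst rather than auto-approved.
applicant = OnboardingApplication("A-1027", False, True, 1, 0.35)
print(assess_onboarding_risk(applicant))  # -> "human_review"
```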