Banks are no strangers to artificial intelligence. For years, machine learning and deep learning models have quietly powered fraud detection, transaction monitoring, and risk analysis. But the industry is now approaching a more consequential shift: agentic AI—systems that don’t just analyze data, but can act on it. With that shift comes a fundamental question about how much authority banks are prepared to give to machines.
Trust sits at the center of the debate. Is AI ready to be trusted with decisions that carry financial and regulatory consequences? That question was featured prominently in a recent conversation between Deepak Gupta, Chief Product Engineering and Delivery Officer at Volante, and Christopher Miller, Lead Analyst of Emerging Payments at Javelin Strategy & Research. And if the answer today is “not yet,” what needs to change for banks to get there?
Across financial institutions, AI adoption is accelerating for a clear reason—efficiency. Internally, banks are under pressure to do more with fewer resources. AI is increasingly used to automate repetitive tasks, improve accuracy and consistency, reduce investigation backlogs, and bring greater predictability to operations that have been historically labor-intensive.
Externally, the focus shifts to customer impact. Banks are exploring how AI can lower operational costs for clients, reduce friction across payment flows, and strengthen compliance.
Some of the most compelling opportunities sit at the intersection of both. In payments operations and exception handling, AI can repair and enrich payment data, classify exceptions in real time, and route transactions to the right place. Machine learning models can identify fraud as it happens while reducing false positives.
Conversational AI adds another layer, enabling natural language queries such as “Why did this payment fail?” “Where did it get stuck?” “How was a similar issue resolved before?” Meanwhile, banks are applying AI to intelligent payment routing, liquidity optimization, and funding prediction—turning what were once reactive processes into proactive ones.
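As a hypothetical illustration (not Volante's actual system), the exception-handling flow described above can be sketched as a classifier that tags a failed payment and routes it to the appropriate queue. Simple rules stand in for a trained model, and all field names are invented:

```python
# Hypothetical sketch of AI-assisted exception handling: classify a failed
# payment, then route it to the right repair or review queue.

def classify_exception(payment: dict) -> str:
    """Tag a failed payment with an exception type (illustrative rules only)."""
    if not payment.get("beneficiary_account"):
        return "missing_account"
    if payment.get("amount", 0) <= 0:
        return "invalid_amount"
    if payment.get("sanctions_hit"):
        return "compliance_review"
    return "unknown"

ROUTING = {
    "missing_account": "repair_queue",       # candidate for automated enrichment
    "invalid_amount": "repair_queue",
    "compliance_review": "compliance_team",  # always escalated to a human
    "unknown": "ops_investigation",
}

def route(payment: dict) -> str:
    """Send the exception to the queue mapped to its classification."""
    return ROUTING[classify_exception(payment)]

print(route({"beneficiary_account": "", "amount": 100}))  # repair_queue
```

In a production system the rule set would be replaced by a model, but the classify-then-route shape stays the same.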
For the moment, the simplest answer is that AI reduces the time required to perform certain tasks. That progress tends to come in fits and starts, which makes the impact feel uneven, especially when AI touches only one part of a task or workflow.
The real benefit is not solving any single problem, important as that remains. Understanding how AI changes outcomes, and what ultimately shows up on the bottom line, requires an end-to-end view across an entire domain or set of workflows.
“Our approach is learn to walk before you run, and run before you sprint,” said Gupta. “We are thinking of AI as an assistant to payment operations teams. Maybe in a couple of years, the confidence level increases, the predictability increases, and the algorithms gain more acceptance, to a stage where you might be able to say to a subset of your payment system: OK, go ahead and approve it automatically.”
The first area of impact is efficiency. For example, has the cost and effort required to process a payment been reduced? Given a fixed volume of payments handled by a single person, AI can enable a higher volume to be processed with the same headcount. In concrete terms, efficiency is reflected in the number of transactions processed per person before and after AI.
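That before-and-after measure is just throughput per head. A minimal sketch, with made-up figures purely for illustration:

```python
def transactions_per_person(total_transactions: int, headcount: int) -> float:
    """Throughput per operator: the efficiency metric described above."""
    return total_transactions / headcount

# Illustrative (invented) figures: same team size, higher volume with AI assistance.
before = transactions_per_person(10_000, 20)  # 500.0 per person
after = transactions_per_person(18_000, 20)   # 900.0 per person
print(f"Throughput after AI is {after / before:.0%} of the baseline")  # 180%
```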
The second area is risk reduction, such as identifying and minimizing false positives or preventing compliance violations. The goal is to create business value, whether by lowering the cost per transaction or allowing customers to expand their revenue base.
Finally, there’s adoption. Even the best tool has no value if it’s not used.
Achieving widespread adoption depends on organizational trust in AI. Miller compares this to the career ladders used to develop individuals over time, where capability and responsibility increase gradually.
“If you show up as a new hire, you get limits around the amount of damage you can do,” Miller said. “It might be that you can only approve things below a certain volume, or you can’t work with certain clients. We build guardrails around people to limit the amount of damage that their learning process can cause. As we think about how to measure the effectiveness of AI, we might have to actually return to that.”
“These guardrails are not because AI is dangerous,” he said. “It is because learning is a process that generates risk. AI has to prove that it’s trustworthy. If it can’t do that, there will be no adoption. But for trust to emerge, you have to start using it first.”
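Miller's new-hire analogy maps naturally onto code: an agent's authority can be capped by explicit limits that widen as its track record grows. A minimal sketch, with all thresholds and names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    """Limits on what an AI agent may approve autonomously (illustrative)."""
    max_amount: float                       # approvals above this escalate to a human
    blocked_clients: set = field(default_factory=set)  # never act on these alone

def can_auto_approve(payment: dict, g: Guardrail) -> bool:
    """True only when the payment falls entirely inside the agent's limits."""
    if payment["client"] in g.blocked_clients:
        return False
    return payment["amount"] <= g.max_amount

# A "new hire" agent starts with tight limits...
junior = Guardrail(max_amount=1_000.0, blocked_clients={"acme_corp"})
# ...which widen as confidence and predictability increase.
trusted = Guardrail(max_amount=50_000.0)

print(can_auto_approve({"client": "beta_llc", "amount": 5_000}, junior))   # False
print(can_auto_approve({"client": "beta_llc", "amount": 5_000}, trusted))  # True
```

The design point is that the guardrail is data, not code: loosening it is a policy decision an operations leader makes, not a redeployment.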
That trust has to run in both directions.
“When I get in my Tesla, I find it safer for Tesla to drive than myself, because I get distracted,” Gupta said. “I get a phone call or I’m looking at something else. But once I put the car on self-drive, I know it will stop itself at the right time. In fact, my family says when we go together, ‘Dad, why don’t you let the car drive itself? It drives better than you do.’
“The key is to take the risk to let the car drive itself first,” he said. “You can still be in control, but let the car drive itself. The same thing that should happen in payments: trust the new technologies, trust the new paradigms.”
One development already underway is the emergence of systems capable of taking action autonomously. Guardrails are not just controls—they form the foundation of trust, allowing leaders and operations teams to delegate more tasks to AI that can learn and adapt.
“Instead of delegating the workflows as they exist, you create the possibility of a world where the systems might reinvent the workflows on their own,” Miller said.
As AI continues to evolve, banks will not just respond to payments. They’ll anticipate them, becoming more proactive, efficient, and strategic in managing the flow of money.
“Payments will transition from largely a transactional back-office function to an intelligent, continuously available capability,” Gupta said. “AI will enable banks to shift from reactive processing to proactive and predictive operations. When you go to FedEx, you don’t tell them which plane you want the package to go on. You just say when you want the package to get there and how much you’re willing to pay for it. And then voila, FedEx does the magic for you and says: OK, these are the options, which one do you want?
“Similarly, you shouldn’t have to figure out which payment is the cheapest option. Should I send it through RTP or FedNow? Just let the AI do that for you. AI will find the fastest and the cheapest path.”
By PaymentsJournal