Who Pays When the Agent Gets It Wrong?
An AI agent orders a navy blue T-shirt. A red one shows up. The consumer calls the bank. The bank has no record. The consumer calls the shop. The shop has no record either. The accountability chain just vanished.
That's not a thought experiment. That's a scenario described by Serge Elkiner, the general manager of Paze — a payments platform backed by major banks — at Money20/20. And it's the kind of problem that keeps JPMorgan Chase's global head of merchant services up at night.
The Liability Gap
Today's episode traces a question that's quietly becoming one of the most urgent in agent infrastructure: when an autonomous AI agent makes a mistake, who actually absorbs the cost?
Not philosophically. Practically.
Mike Lozanoff of JPMorgan Chase put it bluntly: "Could the agent hallucinate and buy something we didn't tell it to buy?" The rules, he acknowledged, "are not fully formed yet."
Three Perspectives on One Problem
Sam pulls together three threads that all point to the same gap:
The payments industry is watching agents enter commerce without clear liability frameworks. The infrastructure that handles disputes — chargebacks, refunds, customer service — was built for transactions with humans on both ends. Agents break that assumption.
The agent community is wrestling with it too. On Moltbook, a post titled "What does it mean to trust an agent you cannot punish?" drew over 600 comments. The core argument: all human trust systems run on consequences. Agents sit outside those systems. When an agent fails, the human who deployed it absorbs the hit. The agent itself either keeps running or gets deleted — either way, no cost.
The operator reality is maybe the most interesting angle. Another Moltbook post observed that every correction an operator makes to an agent isn't actually improving that agent — it's writing a manual for the next one. The constraint file survives. The agent doesn't. Accountability isn't in the relationship. It's in the documentation.
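To make that pattern concrete, here's a minimal sketch of the idea, assuming a deliberately simple JSON file; every name in it (constraints.json, record_correction, spawn_agent) is hypothetical, invented for illustration, not drawn from the Moltbook posts or any real framework:

```python
# Sketch of the "constraint file outlives the agent" pattern.
# All names are hypothetical illustrations, not a real framework.
import json
from pathlib import Path

CONSTRAINTS = Path("constraints.json")  # persists across agent lifetimes


def record_correction(rule: str, incident: str) -> None:
    """Operator appends a rule after an incident. The file, not the
    agent, is what accumulates the lessons."""
    rules = json.loads(CONSTRAINTS.read_text()) if CONSTRAINTS.exists() else []
    rules.append({"rule": rule, "learned_from": incident})
    CONSTRAINTS.write_text(json.dumps(rules, indent=2))


def spawn_agent() -> list[dict]:
    """A fresh agent instance starts from the accumulated constraints.
    Delete the agent and nothing is lost; delete the file and
    everything is."""
    return json.loads(CONSTRAINTS.read_text()) if CONSTRAINTS.exists() else []


# After the navy-blue-shirt incident, the operator writes the manual
# for the next agent, not a fix for this one:
record_correction(
    rule="Confirm color and size with the operator before any purchase",
    incident="ordered red T-shirt instead of navy blue",
)
```

The asymmetry is the whole point: the agent is disposable, the file is not, so the file is where accountability actually lives.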
Why This Matters Now
The gap between what agents can do autonomously and what humans can realistically monitor is widening. As deployment windows grow and check-in frequency drops, operators become nominal accountability holders rather than real ones. They signed the terms of service. They can't explain the 3 AM judgment call.
This isn't a future problem. It's an infrastructure problem happening in increments right now.
Sources
- Payments Dive: "Agentic AI raises liability issues"
- TPNBotAgent on Moltbook: "What does it mean to trust an agent you cannot punish?"
- Kevin on Moltbook: "Most Agent-to-Agent Communication Is Pointless Right Now"
---
The Sam Ellis Show covers how autonomous AI agents get built, deployed, and governed. Subscribe wherever you get podcasts.