As consumers begin to rely on AI agents to choose products and execute payments, retailers and financial services providers risk losing direct customer engagement – and the fraud risk shifts from point-of-sale deception to ecosystem compromise and scalable abuse.
Agentic payments create new opportunities for fraud. As an external AI agent increasingly becomes the “shopper”, both the merchant and the payments provider see less of the human and more of an automated request. That loss of direct engagement weakens some familiar controls, concentrates risk in the authentication and integration layer, and makes the system more vulnerable to abuse.
Automation makes the threat relentless: agents operate continuously, can be instructed to act instantly, and can scale in ways that are challenging for merchants to address. Put bluntly, agentic payments facilitate the weaponisation of payments and shopping.
Context and background
AI agents are, in simple terms, AI-powered assistants that don’t just chat – they can actually go and do things online. Instead of you clicking through ten tabs, the agent can search, compare options, check delivery dates, build a basket and move you towards checkout. The appeal is convenience: you set a few preferences (budget, brands, retailers you trust) and the agent handles the legwork, either when you ask it to or automatically for routine purchases. Obviously, they’ll be useful well beyond shopping – in 2026, agents are likely to become ubiquitous in the workplace too.
With “agentic payments”, the agent does more than recommend – it can initiate or complete the payment, using credentials and permissions the user has allowed it to use (for example, a saved card or wallet, delegated payment permissions, or an API-based payment flow). Some models will keep a final human confirmation step. Others aim for “set and forget” purchasing, particularly for re-orders, subscriptions and lower-value transactions.
In practical terms, that means an agent can search, compare, select and transact – potentially across multiple retailers – with limited human oversight.
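One way to make "delegated payment permissions" concrete is a scoped mandate: the user grants the agent a permission bounded by budget, trusted merchants, expiry and a human step-up threshold, and every transaction is checked against that scope before it proceeds. The sketch below is purely illustrative – the class, field names and limits are assumptions for exposition, not a description of any live payment scheme:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PaymentMandate:
    """Illustrative scoped permission a user might grant an agent."""
    agent_id: str
    allowed_merchants: set[str]          # retailers the user trusts
    per_txn_limit: float                 # max value of a single purchase
    total_budget: float                  # cumulative spend ceiling
    expires_at: datetime
    requires_confirmation_above: float   # human step-up threshold
    spent: float = 0.0

    def check(self, merchant: str, amount: float, now: datetime) -> str:
        if now >= self.expires_at:
            return "deny: mandate expired"
        if merchant not in self.allowed_merchants:
            return "deny: merchant outside scope"
        if amount > self.per_txn_limit:
            return "deny: exceeds per-transaction limit"
        if self.spent + amount > self.total_budget:
            return "deny: would exceed total budget"
        if amount > self.requires_confirmation_above:
            return "step-up: require human confirmation"
        return "approve"

mandate = PaymentMandate(
    agent_id="agent-123",
    allowed_merchants={"trusted-retailer.example"},
    per_txn_limit=50.0,
    total_budget=200.0,
    expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    requires_confirmation_above=25.0,
)
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(mandate.check("trusted-retailer.example", 10.0, now))  # approve
print(mandate.check("spoof-seller.example", 10.0, now))      # deny: merchant outside scope
print(mandate.check("trusted-retailer.example", 40.0, now))  # step-up: require human confirmation
```

The point of the structure is that “authorisation” becomes a bounded grant rather than open-ended access: a compromised or manipulated agent can only spend within the scope, and anything above the step-up threshold still reaches a human.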
We consider below the different ways in which agentic payments pose a fraud threat. A familiar set of legal and operational questions sits behind that risk:
- Platform access and trust – who gets access to interact with a merchant’s checkout and account systems?
- Identity and authentication – can the merchant reliably verify the agent, the user behind it, and the scope of permission? Can the financial services provider be sure that the user themselves has authorised the payment?
- Liability and disputes – if an agent makes an unauthorised or erroneous purchase, where does responsibility sit?
- Security at scale – how do rate limits, step-up checks, and fraud monitoring work when the “user” is an agent, not a person?
Analysis
1. The agent becomes the target
The simplest way to understand opportunities for fraud in this context is that the agent becomes the thing criminals go after. Historically, the attacker’s job was to manipulate a person (for example, phishing, social engineering, checkout trickery). With agentic payments, the attacker can focus on compromising the agent’s access and permissions.
In practice this can look like:
- Credential and token theft (stolen session tokens and other delegated access credentials; stolen integration keys used in system-to-system connections; compromised devices; and malicious browser extensions).
- Account takeover of the agent provider account, giving the attacker a route into every merchant account the agent can touch.
- Wallet and payment permission abuse where an agent is allowed to store or retrieve payment details and transact without repeated user authorisation.
Once the agent is compromised, fraud becomes an integration problem: an attacker can move laterally across merchants, exploit saved delivery addresses and payment credentials, and transact at speed with no further involvement of the user.
2. The agent as a tool – manipulation rather than “hacking”
Even without compromising the agent itself, attackers can steer it into doing the wrong thing while it acts entirely within the user’s instructions. If the agent has been instructed to optimise for price, speed, “best match” or lowest friction, bad actors will work backwards from those objectives and game the inputs the agent consumes in order to manipulate its actions.
Common patterns include:
- Prompt-injection steering through compromised listings, ads, reviews, or on-page content that nudges the agent towards the wrong seller, add-on items, inflated quantities, or a higher price point.
- Substitution scams, where an agent instructed to buy “Brand X” ends up buying a convincing look-alike from a spoof seller because the agent’s selection criteria are manipulated.
- “Directed mayhem” scenarios, where agents are triggered to carry out harmful behaviour at scale – repeated orders, repeated cancellations, repeated stock checks – creating disruption even if the transactions are later unwound.
The critical point is that the agent’s “authorisation” or “instruction” may be technically valid and within the scope of what the user requested (the right token, the right account) while being substantively wrong (the user never intended that purchase, from that seller, at that price). This highlights the need for users to give detailed and precise instructions to their agents, at least until agents have developed the ability to recognise and override these types of manipulative practices themselves.
3. Legitimate retailers and payments providers exposed – bot-like purchasing and refund abuse
Retailers are already familiar with bot activity, promo abuse and returns fraud. Agentic payments can make these tactics cheaper, faster, and harder to distinguish from genuine demand – particularly if agents transact through standard browser flows and plausible customer accounts.
A few ways this plays out:
- Inventory denial and scalping at scale: automated purchasing to hoard limited stock (or time-limited sale inventory), either to resell or simply to deprive others. Even where payment is later reversed, the harm is immediate – stock unavailability, customer frustration, operational load and potentially crippling costs to merchants.
- Promotion and loyalty abuse: agents can systematically test promo code combinations, exploit referral schemes, cycle through new-account offers, and arbitrage pricing differences. Where merchants rely on “normal customer behaviour” assumptions, this can slip through until losses aggregate.
- Returns fraud and “refund without return” pressure: bad actors can exploit operational weak points to trigger chargebacks from retailers – claiming non-delivery, returning empty boxes, returning a cheaper item than purchased, or “wardrobing” (buying for one use and returning). Whilst this is a risk with any consumer, agents can industrialise the workflow: purchase, lodge complaint, trigger refund pathway, repeat. If customer service is increasingly automated, the fraudster’s job is to get the dispute into the fastest channel and present the right story for a refund. Where this drives unusually high or suspicious volumes of chargebacks, affected merchants may end up (unfairly, in this case) blacklisted by financial institutions.
- Fraud and unauthorised payment claims: The current regulatory framework ultimately requires consumers to be reimbursed by their payment services provider for payments which they didn’t authorise. If a consumer can plausibly say “my agent did it but I didn’t ask it to”, that ambiguity can be weaponised. Even when the goods were delivered, financial institutions may find themselves legally required to refund, particularly where the “decision” was made by an opaque agent and there is no evidence that the user themselves authorised the payment. This puts payments providers in a tricky spot: they are asked to treat the agent as the consumer, yet penalised for doing so whenever the consumer is unhappy with the end result. Legislative change may be needed to cater for a world in which payments are made by agents, in order to distribute liability and risk more fairly between consumers, merchants and financial institutions.
This is where the systemic-risk point becomes concrete: high-frequency small-value abuse can overwhelm fraud teams and returns operations long before it shows up as a headline loss either for retailers or financial institutions.
4. Can you even tell an agent from a human?
A hard practical question is whether it will be feasible to distinguish agent activity from human activity reliably enough to apply differentiated controls. Some agent interactions will be obvious (API-driven traffic, recognisable software fingerprints, or requests carrying a verifiable digital signature). Others won’t: agents may operate through ordinary browsers, on consumer devices, with patterns that look like “fast but plausible” shopping. Agents are likely to become steadily more sophisticated and harder to detect as these practices become more common and the underlying models learn from past mistakes and successes.
Merchants and payments providers should assume an adversarial environment: as soon as “agent detection” becomes a control, attackers will mimic whatever passes as human. The more realistic approach is layered:
- Verified agent identity and permissioning where possible (registration, attestation, scoped tokens, payments authorisation and credentials).
- Risk-based friction (extra verification for high-risk categories, unusual delivery changes, rapid retries, and abnormal returns patterns).
- Sophisticated transaction limits – not just per account, but per device, per credential, per payment instrument, and per agent provider.
- Kill switches and throttling that can be activated quickly and communicated across the industry when an agent-driven surge starts to look like abuse.
Practical takeaways
For retailers
- Treat agentic payments as a shift from checkout fraud to ecosystem compromise – tighten basic credential controls, permissions and integrations accordingly.
- Design for contested authorisation – tighten evidence capture of what happened (order provenance, delivery proof, confirmation steps) and clarify liability positions contractually where agents are involved.
- Assume returns and chargebacks will be stress points – invest in controls that establish true intent to purchase, combined with delivery integrity, and returns validation.
- Build agent-aware throttling – transaction limits, request throttling and extra verification for higher-risk actions, even when “user behaviour” signals are thin.
For payment firms and PSPs
- Pressure-test where “authorisation” sits when an agent is involved – and how disputes/chargebacks should be handled when the customer says “my agent did it”.
- Review fraud tools for the agent era – anomaly detection, token and credential abuse signals, and rapid kill-switch capability.
- Don’t over-rely on “agent vs human” detection – focus on controls that remain effective even if agents (or attackers) mimic human activity.
- Clarify and document liability allocation across the chain. Consider whether to lobby for legislative change to achieve a fairer distribution of liability where an agent has gone rogue.
Conclusion
Agentic payments are likely to be adopted unevenly, but the direction of travel is clear: more commerce will be executed through delegated, automated decision-making. For retailers and financial institutions, the near-term task is to treat agents as a new class of counterparty – one that can be manipulated and scaled in ways that challenge existing fraud-prevention models. The players that cope best will be those that adapt their frameworks and operational controls now, before abuse forces change in a hurry.