
Visa Wants Your AI to Shop for You—Europe’s the Test Lab, With Two Hard Guardrails

Visa is rolling out a new program in Europe that boils down to this: your next “purchase” might start with software, not you.

The initiative is called Visa Agentic Ready, and it’s designed to help banks and merchants handle payments kicked off by AI “agents” acting on a consumer’s behalf. Visa says this is part of a broader global push, but Europe—plus the U.K.—is where the company wants to run the first real-world drills.

Visa’s pitch is practical, not sci-fi: let issuers and retailers test what happens when “I want to buy this” is generated by an algorithm, then executed on today’s payment rails—without turning automated commerce into a lawless free-for-all.

Agentic Ready isn’t a shiny new wallet—it’s a controlled test range

Visa is framing Agentic Ready as an “activation program,” not a finished consumer product. Translation: this is infrastructure work. The target customers are card-issuing banks and merchants that need to figure out what “AI-initiated payment” even means in practice.

And no, Visa isn’t saying your AI gets its own credit card. The agent triggers a transaction for you, while responsibility and controls stay tied to the existing payments ecosystem—banks, merchant systems, fraud tools, dispute processes.

For issuers, the headache is obvious: authorization has traditionally been about a card number, a device, a password, a fingerprint, a one-time code. Now add a new variable—automated intent. Banks still have to meet compliance requirements and keep fraud down, even when the “buyer” is a bot following rules you set last week and forgot about.

For merchants, AI shopping changes the messy stuff: out-of-stocks, substitutions, returns, and the whole question of what the customer actually agreed to. An agent might compare prices, tweak a cart, pick shipping, then pay. But it can only execute cleanly if the merchant’s catalog data, terms, and constraints are structured in a way machines can’t misread.
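What "structured in a way machines can't misread" could mean in practice: the merchant's catalog carries explicit, per-item policy the agent must obey. A minimal sketch—the field names and rules here are invented for illustration, not any Visa or merchant spec:

```python
from dataclasses import dataclass

# Hypothetical catalog entry: substitution policy travels with the item.
@dataclass
class CatalogItem:
    sku: str
    name: str
    price_cents: int
    in_stock: bool
    substitutable: bool              # may an agent swap this item?
    max_substitute_price_cents: int  # price ceiling for any substitute

def allowed_substitute(original: CatalogItem, candidate: CatalogItem) -> bool:
    """An agent may only substitute when the merchant's own metadata
    says so and the replacement stays under the stated price ceiling."""
    return (
        original.substitutable
        and candidate.in_stock
        and candidate.price_cents <= original.max_substitute_price_cents
    )

towels = CatalogItem("TW-1", "Paper towels", 499, False, True, 650)
brand_b = CatalogItem("TW-2", "Paper towels (brand B)", 599, True, True, 650)
formula = CatalogItem("BF-1", "Baby formula", 1899, False, False, 0)

print(allowed_substitute(towels, brand_b))   # True: swap allowed, under ceiling
print(allowed_substitute(formula, brand_b))  # False: item marked non-substitutable
```

The point of making this explicit is that "what the customer actually agreed to" becomes a checkable rule rather than a judgment call the model has to guess at.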

There’s also a competitive subtext here. Big tech is racing ahead with assistants and agents. If Visa doesn’t shape how agent-driven checkout works, it risks becoming the invisible plumbing behind someone else’s interface—useful, but powerless.

Visa’s bigger play: “Intelligent Commerce” plugs AI into existing rails

Agentic Ready sits inside a broader Visa umbrella the company calls Visa Intelligent Commerce. The key detail: Visa isn’t leading with a new consumer wallet. It’s trying to adapt the payments system people already trust so AI-driven shopping doesn’t break it.

The technical challenge is turning an agent’s “intent” into a transaction that still meets security, authentication, and audit requirements. Buying groceries isn’t the same as booking a flight. Renewing a subscription isn’t the same as ordering a one-off replacement part. Amounts, frequency, merchant category, and risk all change the rules.
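One way to picture how "the rules change" with amount, frequency, and category: a policy table that maps transaction context to the check required before money moves. This is a toy, with thresholds and labels invented for illustration:

```python
# Invented policy sketch: how authentication requirements might vary
# by transaction context. None of these thresholds come from Visa.
def required_check(amount_cents: int, category: str, recurring: bool) -> str:
    if recurring and amount_cents < 2000:
        return "none"         # low-value renewal rides on prior consent
    if category == "travel" or amount_cents >= 50000:
        return "strong_auth"  # high stakes: pull the human back in
    return "risk_score"       # everything else: passive checks suffice

print(required_check(1500, "grocery", True))   # none
print(required_check(80000, "travel", False))  # strong_auth
print(required_check(4500, "grocery", False))  # risk_score
```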

Visa’s argument is that the industry needs a common grammar for this stuff—otherwise every bank and every retailer invents their own duct-taped approach, and costs (and fraud) explode.

Eduardo Prieto, Visa’s general manager in Spain, summed up the company line: as AI agents start influencing how people search, choose, and buy, payments have to evolve right alongside them—without losing the security and scale financial institutions demand.

And here’s the part consumers will care about fast: automation can speed up purchases, but it can also speed up disputes. If an agent buys the wrong item, renews something you didn’t mean to renew, or “helpfully” chooses a pricier substitute, the chargeback pipeline is going to get a workout.

Why Europe first: tokenization, passkeys, and a culture of strict authentication

Visa’s reasoning for starting in Europe is blunt: the region is already further along on three building blocks that matter when software is buying things for humans—tokenization, passkeys, and advanced authentication.

Tokenization swaps card details for tokens, making stolen data less valuable. That matters more when an agent is shopping frequently across multiple merchants—more transactions means more chances for something to leak.
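The mechanics are simple to sketch: the merchant only ever stores a stand-in value, and the mapping back to the real card number lives somewhere the merchant can't reach. Real network tokenization involves the card network and issuer; this toy vault just shows the shape of the swap:

```python
import secrets

# Toy token vault, for illustration only.
VAULT: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Swap a card number (PAN) for a random token the merchant stores."""
    token = "tok_" + secrets.token_hex(8)
    VAULT[token] = pan
    return token

token = tokenize("4111111111111111")
print(token.startswith("tok_"))   # True: merchant only ever sees this
print(VAULT[token])               # real PAN stays behind the vault
```

If the token leaks from one merchant, the attacker holds a value that's useless elsewhere—which matters more as an agent multiplies the number of merchants holding something.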

Passkeys—cryptographic credentials often tied to a device and biometrics—reduce reliance on passwords and cut down on phishing. In an agent-driven world, you can’t be asked to type a password or approve a code every five minutes or the whole “automation” idea collapses. The trick is strong authentication at the right moment, then permission for a defined set of actions.
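That "authenticate once, then delegate a defined scope" idea can be sketched as a mandate object: one strong-authentication moment grants it, and every later agent purchase is checked against it instead of re-prompting the user. A real version would rest on WebAuthn/FIDO2 signatures; this structure is illustrative only:

```python
import time
from dataclasses import dataclass

# Hypothetical mandate granted at a single strong-auth moment.
@dataclass
class Mandate:
    agent_id: str
    max_amount_cents: int
    allowed_categories: set
    expires_at: float

def authorize(mandate: Mandate, amount_cents: int, category: str) -> bool:
    """Approve without re-prompting the user—but only inside the
    scope the user explicitly delegated."""
    return (
        time.time() < mandate.expires_at
        and amount_cents <= mandate.max_amount_cents
        and category in mandate.allowed_categories
    )

m = Mandate("agent-7", 5000, {"grocery"}, time.time() + 86400)
print(authorize(m, 1200, "grocery"))  # True: within delegated scope
print(authorize(m, 1200, "travel"))   # False: category never delegated
```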

Advanced authentication is the broader toolkit: behavioral signals, context, risk scoring, and stepping up to stronger checks when something looks off. Europe’s banking system has been living with strong customer authentication norms for years, pushed by regulation and industry practice. Visa sees that as a safer proving ground than markets where “security” still means “hope the password isn’t ‘Password123.’”
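Risk scoring with step-up is easy to caricature in code: passive signals accumulate into a score, and the score picks between frictionless approval, asking the human, or declining. The signal names and weights below are invented, not any real fraud model:

```python
# Toy risk scorer: weights and thresholds invented for illustration.
def risk_score(signals: dict) -> float:
    weights = {
        "new_merchant": 0.3,
        "odd_hour": 0.1,
        "burst_frequency": 0.4,
        "amount_above_norm": 0.3,
    }
    return sum(w for name, w in weights.items() if signals.get(name))

def decide(signals: dict) -> str:
    score = risk_score(signals)
    if score < 0.3:
        return "approve"   # frictionless: nothing looks off
    if score < 0.6:
        return "step_up"   # ask the human for a stronger check
    return "decline"

print(decide({"odd_hour": True}))                                    # approve
print(decide({"new_merchant": True, "odd_hour": True}))              # step_up
print(decide({"burst_frequency": True, "amount_above_norm": True}))  # decline
```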

There’s a credibility bet here, too. If agent-initiated payments can survive Europe’s stricter filters without spiking fraud or tanking approval rates, Visa can sell the model elsewhere with a straight face.

The real fight: consent, liability, and fraud that targets the agent

Letting an AI agent pay isn’t mainly a tech problem. It’s a rules problem.

Consent has to become configurable. Maybe you’re fine letting an agent reorder household staples within a price range, but you don’t want it booking travel. Maybe substitutions are okay for paper towels, not for baby formula. Maybe it can buy from Merchant A but never Merchant B. Without tight guardrails, “convenience” turns into “why is my card getting hammered?”
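"Configurable consent" essentially means the user's preferences become a policy the payment path enforces. A rough sketch mirroring the examples above—categories, price ceilings, blocked merchants—with every name here hypothetical:

```python
# Hypothetical consumer guardrail config, echoing the text's examples.
GUARDRAILS = {
    "allowed_categories": {"household", "grocery"},
    "price_ceiling_cents": {"household": 3000, "grocery": 8000},
    "blocked_merchants": {"merchant-b"},
}

def agent_may_buy(category: str, price_cents: int, merchant: str,
                  rules: dict = GUARDRAILS) -> bool:
    """Every agent purchase is checked against the user's standing rules."""
    return (
        category in rules["allowed_categories"]
        and price_cents <= rules["price_ceiling_cents"].get(category, 0)
        and merchant not in rules["blocked_merchants"]
    )

print(agent_may_buy("household", 1500, "merchant-a"))  # True: in scope
print(agent_may_buy("travel", 1500, "merchant-a"))     # False: never delegated
print(agent_may_buy("grocery", 1500, "merchant-b"))    # False: blocked merchant
```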

Liability gets ugly fast. If the agent screws up, who eats it—the consumer who delegated, the agent provider whose model misinterpreted, the merchant whose product info was ambiguous, the bank that approved it, or Visa that built the framework? Visa is calling this a test environment for a reason: these edge cases need to be simulated before anyone flips the switch at scale.

Fraud won’t disappear; it’ll mutate. Criminals won’t just steal card numbers—they’ll try to hijack agents, manipulate settings, steer purchases to shady merchants, or exploit delegation loopholes. An agent authorized for small purchases could be pushed into repetitive transactions, or tricked into working around limits. Fraud systems will need new signals: weird frequency patterns, unnatural cart behavior, sudden preference changes, new merchants appearing out of nowhere, and other “this doesn’t look like a human” tells.
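One of those "this doesn't look like a human" tells—burst frequency—can be sketched as a sliding-window counter. Thresholds are invented; a real fraud system would combine many such signals:

```python
from collections import deque

# Toy burst detector: flags transaction bursts no human shopper produces.
class BurstDetector:
    def __init__(self, max_tx: int = 5, window_seconds: int = 600):
        self.max_tx = max_tx
        self.window = window_seconds
        self.times: deque = deque()

    def record(self, ts: float) -> bool:
        """Return True if this transaction pushes the recent count
        past the per-window limit."""
        self.times.append(ts)
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.max_tx

d = BurstDetector()
flags = [d.record(t) for t in [0, 60, 120, 180, 240, 300, 360]]
print(flags)  # [False, False, False, False, False, True, True]
```

An agent coerced into "repetitive transactions" trips exactly this kind of check—which is why Visa's test environment matters: these detectors need tuning against agent traffic before launch.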

Visa’s promise—secure and scalable—runs straight into the classic tradeoff: the smoother the automation, the more the security has to be invisible and sharp. Make it too intrusive and people won’t use it. Make it too lax and fraud and disputes climb.

By starting with banks and merchants, Visa is doing what payments companies always have to do: build a coalition. You don’t roll out a new payments behavior by decree. Now add a new player—the agent provider—and the chain gets even more complicated. Visa wants to be the rule-maker for interoperability, not just the pipe that moves money.
