Imagine telling a digital agent, “Use my points and book a family trip to Italy. Keep it within budget, pick hotels we’ve liked before, and handle the details.” Instead of returning a list of links, the agent assembles an itinerary and executes the purchase.

That shift, from assistance to execution, is what makes agentic AI different. It also changes the operating speed of commerce. Payment transactions already clear in milliseconds. The new acceleration is everything before the payment: discovery, comparison, decisioning, authorization, and follow-through across many systems. As humans step out of routine decisions, “good enough” data stops being good enough. In an agent-driven economy, the constraint isn’t speed; it’s trust at machine speed and scale.

Automated markets already work because identity, authority, and accountability are built in. As agents transact across businesses, that same clarity is required. Master data management (MDM)—the discipline of creating a single master record—becomes the exchange layer: tracking who an agent represents, what it can do, and where responsibility sits when value moves. Markets don’t fail from automation; they fail from ambiguous ownership. MDM turns autonomous action into legitimate, scalable trust.

To make agentic commerce safe and scalable, organizations will need more than better models. They will need a modern data architecture and an authoritative system of context that can instantly recognize, resolve, and distinguish entities. It is the difference between automation that scales and automation that needs constant human correction.

The agent is a new participant

Digital commerce has long been built on two primary sides: buyers and suppliers/merchants. Agentic commerce adds a third participant that must be treated as a first-class entity: the agent acting on the buyer’s behalf.

That sounds simple until you ask the questions every enterprise will face:

• Who is the individual, across channels and devices, with enough certainty for automation?
• Who is the agent, and what permissions and limits define what it can do?
• Who is the merchant or supplier, and are we sure we mean the right one?
• Who holds liability if the agent acts with permission but against user intent?

The practical risk is confusion. Humans, for example, can infer that “Delta” means the airline when they are booking a flight, not the faucet company. An agent needs deterministic signals. If the system guesses wrong, it either breaks trust or forces a human confirmation step that defeats the promise of speed.
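The “Delta” ambiguity above can be made concrete. The sketch below shows one way a deterministic resolver might behave: it matches a raw name against master records only within a known context, and refuses to act (rather than guessing) when resolution is not unique. All names here (`Merchant`, `MASTER`, `resolve_merchant`, the IDs) are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Merchant:
    merchant_id: str
    name: str
    category: str  # e.g. "airline", "plumbing-fixtures"

# Hypothetical master records: the same display name can map to
# entirely different entities, so records are keyed by resolved ID.
MASTER = {
    "m-001": Merchant("m-001", "Delta", "airline"),
    "m-002": Merchant("m-002", "Delta", "plumbing-fixtures"),
}

def resolve_merchant(name: str, context_category: str) -> Merchant:
    """Return exactly one merchant for (name, context), or refuse.

    The point is the contract: an agent never proceeds on a guess.
    Ambiguity is surfaced as an error to escalate, not silently resolved.
    """
    matches = [
        m for m in MASTER.values()
        if m.name == name and m.category == context_category
    ]
    if len(matches) != 1:
        raise LookupError(
            f"ambiguous or unknown merchant {name!r} in {context_category!r}; "
            "escalate to a human instead of guessing"
        )
    return matches[0]
```

A human booking a flight would infer the airline; the resolver makes that inference an explicit, auditable lookup, and turns the failure case into a visible escalation rather than a wrong purchase.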

Why ‘good enough’ data breaks at machine speed

Most organizations have learned to live with imperfect data. Duplicate customer records are tolerable. Incomplete product attributes are annoying. Merchant identities can be reconciled later.

Agentic workflows change that tolerance. When an agent takes action without a human checking the output, it needs data that is close to perfect, because it cannot reliably notice when data is ambiguous or wrong the way a person can.

The failure modes are predictable, and they show up in places that matter most:

• Product truth: If the catalog is inconsistent, an agent’s choices will look arbitrary (“the wrong shirt,” “the wrong size,” “the wrong material”), and trust collapses quickly.
• Payee truth: Agentic commerce expands beyond cards to account-to-account and open-banking-connected experiences, broadening the universe of payees and the need to recognize them accurately in real time.
• Identity truth: People operate in multiple contexts (work versus personal). Devices shift. A system that cannot distinguish among these contexts will either block legitimate activity or approve risky activity, both of which damage adoption.

This is why unified enterprise data and entity resolution move from nice-to-have to operationally required. The more autonomy you want, the more you must invest in the modern data foundations that keep it safe.

Context intelligence: The missing layer

When leaders talk about agentic AI, they often focus on model capability: planning, tool use, and reasoning. Those are necessary, but they are not sufficient.

Agentic commerce also requires a layer that provides authoritative context at runtime. Think of it as a real-time system of context that can answer instantly and consistently:

• Is this the right person?
• Is this the right agent, acting within the right permissions?
• Is this the right merchant or payee?
• What constraints apply right now (budget, policy, risk, loyalty rules, preferred suppliers)?
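The four questions above can be read as a single runtime check. Here is a minimal sketch of what such a context service might return: a deterministic allow/deny decision with explicit reasons, rather than a probabilistic score. The data shapes (`registry`, `budgets`) and field names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ContextDecision:
    allowed: bool
    reasons: list  # empty when the action is permitted

def authorize(person_id, agent_id, merchant_id, amount, registry, budgets):
    """Answer the runtime questions in order; refuse on the first failure.

    registry: hypothetical agent records {agent_id: {"principal": ...,
              "allowed_merchants": set(...)}}
    budgets:  hypothetical remaining budget per person.
    """
    reasons = []
    agent = registry.get(agent_id)
    if agent is None or agent["principal"] != person_id:
        reasons.append("agent is not bound to this person")
    elif merchant_id not in agent["allowed_merchants"]:
        reasons.append("merchant is outside the agent's permitted scope")
    elif amount > budgets.get(person_id, 0):
        reasons.append("amount exceeds the remaining budget")
    return ContextDecision(allowed=not reasons, reasons=reasons)
```

The design choice worth noting is that every denial carries a machine-readable reason, so downstream systems (and auditors) see why an agent was stopped, not just that it was.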

Two design principles matter.

First, entity truth must be deterministic enough for automation. Large language models are probabilistic by nature. That is helpful for generating options in writing and design. It is risky for deciding where money goes, especially in B2B and finance workflows, where “probably correct” is not acceptable.

Second, context must travel at the speed of interaction and remain portable across the entire connected network value chain. Mastercard’s experience optimizing payment flows is instructive: the more services you layer onto a transaction, the more you risk slowing it down. The pattern that scales is to pre-resolve, curate, and package signals upstream so that execution stays lightweight.

This is also where tokenization is heading. Initiatives like Mastercard’s Agent Pay and Verifiable Intent signal a future in which consumer credentials, agent identities, permissions, and provable user intent are encoded as cryptographically secure artifacts, enabling merchants, issuers, and platforms to deterministically verify authorization and execution at machine speed.
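To make the idea of provable intent concrete, here is a minimal sketch of signing and verifying an intent payload. It uses a shared-secret HMAC from the Python standard library purely as a stand-in; real schemes such as Agent Pay would use asymmetric keys and richer credential formats, and nothing here reflects Mastercard’s actual protocol.

```python
import hashlib
import hmac
import json

def sign_intent(secret: bytes, intent: dict) -> str:
    """Produce a tamper-evident signature over a canonicalized intent.

    Canonical JSON (sorted keys) ensures both parties hash identical bytes.
    """
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_intent(secret: bytes, intent: dict, signature: str) -> bool:
    """Constant-time check that the intent matches what the user authorized."""
    return hmac.compare_digest(sign_intent(secret, intent), signature)
```

The property this illustrates is the important one: if an agent (or anyone in between) alters the amount or payee after the user authorized it, verification fails deterministically, so a merchant or issuer never has to trust the agent’s word about what the user intended.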

What leaders should do in the next 12 to 24 months

Adoption will not be uniform. Early traction will often depend less on industry and more on the sophistication of an organization’s systems and data discipline.

That makes the next two years a window for practical preparation. Five moves stand out.

1. Treat agents as governed identities, not features. Define how agents are onboarded, authenticated, permissioned, monitored, and retired.

2. Prioritize entity resolution where the cost of being wrong is highest. Start with payees, suppliers, employee-versus-personal identity, and high-volume product categories.

3. Build a reusable context service that every workflow and agent can call. Do not force each system to reconstruct identity and relationships from scratch.

4. Precompute and compress signals. Resolve and curate context upstream so that runtime decisioning stays fast and predictable.

5. Expand autonomy only as trust is earned. Build a governance framework to address disputes, keep humans in the loop for higher-risk actions, measure accuracy, and expand automation as outcomes prove reliable.
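The first move, treating agents as governed identities with a full lifecycle, can be sketched as a small state machine: an agent can only act while active, and every lifecycle transition is validated. The states, transitions, and class names below are illustrative assumptions, not a prescribed standard.

```python
from enum import Enum, auto

class AgentState(Enum):
    ONBOARDED = auto()
    ACTIVE = auto()
    SUSPENDED = auto()
    RETIRED = auto()

# Hypothetical legal lifecycle transitions; RETIRED is terminal.
VALID_TRANSITIONS = {
    AgentState.ONBOARDED: {AgentState.ACTIVE, AgentState.RETIRED},
    AgentState.ACTIVE: {AgentState.SUSPENDED, AgentState.RETIRED},
    AgentState.SUSPENDED: {AgentState.ACTIVE, AgentState.RETIRED},
    AgentState.RETIRED: set(),
}

class GovernedAgent:
    """An agent as a first-class identity: permissions plus a lifecycle."""

    def __init__(self, agent_id: str, permissions):
        self.agent_id = agent_id
        self.permissions = set(permissions)
        self.state = AgentState.ONBOARDED

    def transition(self, new_state: AgentState) -> None:
        """Enforce the lifecycle: illegal moves fail loudly for auditing."""
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def can(self, action: str) -> bool:
        """An agent acts only while ACTIVE and only within its permissions."""
        return self.state is AgentState.ACTIVE and action in self.permissions
```

Because `RETIRED` has no outgoing transitions, a decommissioned agent can never be quietly reactivated, which is exactly the guarantee “onboarded, authenticated, permissioned, monitored, and retired” asks for.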

A tsunami effect across industries

Agentic AI will not be confined to shopping carts. It will touch procurement, travel, claims, customer service, and finance operations. It will compress decision cycles and remove manual steps, but only for organizations that can supply agents with clean identity, precise entity truth, and reliable context.

The winners will treat entity truth and context as core infrastructure for automation, not as a back-office cleanup project. In commerce at machine speed, trust is not a brand attribute; it is an architectural decision encoded in identity, context, and control.

This content was produced by Reltio. It was not written by MIT Technology Review’s editorial staff.
