ChatGPT has moved from conversational assistant to autonomous point of sale with the recent activation of Instant Checkout in the US. Retail is entering a new phase of algorithmic power, and that power raises not only operational questions but existential ones. Who decides which products surface, which voices are amplified, and how are fairness, trust and visibility maintained when AI owns the checkout? The traditional retail funnel was a simple sequence of awareness, consideration, conversion and loyalty. But when discovery and purchase both occur inside an AI interface, those stages risk collapse.
Digital marketing strategist and AI expert Kelly Slessor explains the major shift in power from retailers and marketplaces to AI systems that sit between the customer and the transaction.
“Customers are starting their buying journey using AI. In fact, traffic from AI-powered chat tools is expected to surge 520 per cent this peak season, according to Adobe. Agents will manage discovery, comparison and increasingly, the purchase itself,” she told Inside Retail.
Slessor warns that this new dynamic changes who controls visibility. Without structured, machine-readable data, even the strongest brands risk becoming fulfilment partners in someone else’s UX.
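For context, “structured, machine-readable data” in practice often means something like the schema.org Product markup that agents and search engines already parse. The sketch below is illustrative only; the product, brand and prices are hypothetical.

```python
import json

# A minimal schema.org-style Product record, the kind of structured data an
# AI shopping agent can parse reliably. All values here are illustrative.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Organic Cotton Crew T-Shirt",
    "sku": "TEE-ORG-001",
    "brand": {"@type": "Brand", "name": "Example Label"},
    "offers": {
        "@type": "Offer",
        "price": "49.95",
        "priceCurrency": "AUD",
        "availability": "https://schema.org/InStock",
    },
}

# Typically embedded in a product page as <script type="application/ld+json">…</script>
print(json.dumps(product_jsonld, indent=2))
```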
What emerges is a new kind of commerce infrastructure where answers replace ads, and attribution becomes cryptic.
The disappearing customer
The risk for retailers is a weakening of their direct relationship with customers. Autonomous AI systems capable of making independent decisions are transforming every layer of retail operations, from inventory management to customer service.
However, governance must move as fast as innovation. The brand-to-consumer trust chain has always relied on marketing, loyalty programs, and direct attribution. ChatGPT’s Instant Checkout, which enables consumers to make purchases directly within the chat, cuts out those touchpoints.
As a result, retailers may lose visibility of who bought what, where and why.
When the sale happens inside an AI ecosystem, retailers can no longer control the experience that once defined loyalty.
As retail strategist Dean Salakis recently told Inside Retail, retailers should be examining how they can enhance their visibility in AI right now. Reddit, for instance, is one of the top sources feeding these systems.
The ethics of the answer
The second challenge is epistemic: what happens when AI becomes the recommender, curator and adjudicator of the best product?
Although algorithmic bias has become the focal point of debate revolving around ChatGPT’s commerce tools, it is merely one dimension of the broader injustices embedded in AI systems.
Bias in commerce manifests covertly: in how products are ranked, in the tone of responses, and in the invisibility of smaller players.
According to RWS, a UK-based leader in language and content services, 62 per cent of consumers say transparency around AI usage directly increases their trust in a brand.
Yet few brands can currently explain why their product appears or doesn’t appear in an AI-generated answer.
Slessor noted that bias in commerce exists in every digital algorithm, but AI-driven commerce amplifies it.
“The focus should be on auditing data across channels to ensure it’s structured and accurate,” she said. “The more complete and diverse your data, the more likely AI is to display your products fairly.”
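To make that concrete, a rough sketch of the kind of cross-channel audit she describes might look like the following. The field names and feed format are assumptions for illustration, not any platform’s actual schema.

```python
# Sketch of a product-feed completeness audit: flag records missing the fields
# an AI agent would need to rank or describe the product. Assumed field names.
REQUIRED_FIELDS = ["name", "sku", "brand", "price", "availability", "description"]

def audit_feed(products: list[dict]) -> dict[str, list[str]]:
    """Return a map of product identifier -> list of missing fields."""
    gaps = {}
    for i, record in enumerate(products):
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            gaps[record.get("sku", f"row-{i}")] = missing
    return gaps

feed = [
    {"name": "Crew T-Shirt", "sku": "TEE-001", "brand": "Example Label",
     "price": "49.95", "availability": "in_stock", "description": "Organic cotton tee."},
    {"name": "Linen Shirt", "sku": "SHT-002", "price": "89.00"},  # incomplete record
]

print(audit_feed(feed))  # {'SHT-002': ['brand', 'availability', 'description']}
```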
At an industry level, she argues, retailers should be pushing for transparency from AI platforms to reveal how ranking systems work; otherwise, bias could determine which brands consumers get to see.
The opacity is structural. Most generative systems blend vast data sources, ranging from brand websites to Reddit threads, without clear attribution. This is what makes the question “who guards the algorithm?” more than philosophical.
The next phase of ethical AI may demand more than bias audits; it could require transparency protocols and a reframing of accountability.
More broadly, that means a poorly trained model could over-index large retail players and under-surface independent or sustainability-led brands. It could also inadvertently reference false claims, attributing features or values to a product that it never had, thereby opening new avenues of brand risk.
The question is, who takes responsibility when that happens? The retailer, the model provider or the brand whose reputation is compromised by a machine’s misstep? For Slessor, the answer begins with shared standards.
“Right now, AI in retail operates with little regulation,” she said. “Each platform defines fairness, relevance, and transparency on its own terms.”
She believes the sector urgently needs independent bias audits, fairness metrics and transparent governance to prevent AI-driven commerce from concentrating power in the hands of a few dominant platforms.
Retail’s existential moment
Loyalty is being rewritten for the AI age. As Slessor states, the data that once powered rewards programs will soon reside in conversations between people and machines, meaning trust will hinge less on CRM ownership and more on whether a brand can prove its reliability.
In her view, perks like free shipping or VIP access must become machine-readable, as AI agents won’t be swayed by emotional marketing; they’ll be looking for proof.
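As one hedged sketch of what a machine-readable perk could look like, the existing schema.org Offer and OfferShippingDetails vocabulary already lets a free-shipping promise be stated as data an agent can verify; the loyalty-tier field below is a hypothetical extension, since no shared standard for such perks exists yet.

```python
import json

# Free shipping expressed as structured data rather than marketing copy.
# Offer, OfferShippingDetails, MonetaryAmount and DefinedRegion are real
# schema.org types; "x-loyaltyTier" is a hypothetical, non-standard field.
offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "price": "49.95",
    "priceCurrency": "AUD",
    "shippingDetails": {
        "@type": "OfferShippingDetails",
        "shippingRate": {"@type": "MonetaryAmount", "value": 0, "currency": "AUD"},
        "shippingDestination": {"@type": "DefinedRegion", "addressCountry": "AU"},
    },
    "x-loyaltyTier": "VIP",  # hypothetical: VIP perks have no agreed vocabulary yet
}

print(json.dumps(offer, indent=2))
```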
Inside this transformation, the locus of authority is shifting from merchant to model, from curated shelves to probabilistic outputs.
As LLMs intermediate between brands and buyers, they assume the cultural role once held by retailers.
The opportunity is enormous: frictionless, personalised, 24/7 commerce that understands context and intent. The underlying risk is that retailers become invisible participants in their own transactions.
Until governance catches up, no one truly guards the algorithm. Deciding who should, in the new age of AI-mediated commerce, is retail’s most urgent question.