Jan 22, 2026
Before serving up new AI, do you know who you're serving?
Vivianne Gravel
Why digital identity comes before AI — not the other way around
Conversational agents, automated recommendations, intelligent assistants: AI is establishing itself as a new intermediary between organizations and their users.
But as AI shifts from assistance to action, a strategic question too often remains unanswered — sometimes until an incident occurs: do we really know who AI is serving, and with what rights?
The real shift isn't technological. It's relational.
For years, the relationship with users was structured around a website as the main entry point, fragmented journeys, independent channels, and multiple identities — often partial. This model worked as long as the user made the effort to seek out information, navigate, understand, and act.
That's no longer the case.
Journeys now flow through notifications, mobile apps, digital cards, AI agents, and contextual recommendations. In this new context, AI no longer just responds — it guides, recommends, and can even trigger actions.
When AI becomes the intermediary, the nature of the relationship changes.
When AI acts, identity becomes critical
An AI model, even a high-performing one, doesn't inherently know who the user is, what services they're entitled to, what data can be used, or what actions are authorized.
One of AI's major promises is hyperpersonalization: recommendations, services, and interactions tailored to each user, in real time. But this promise has a prerequisite that's often overlooked: knowing precisely who the user is. Without reliable digital identity, hyperpersonalization degrades into random, or even intrusive, personalization. And the finer the personalization, the more visible and damaging the error.
Without clear, unified, and governed digital identity, AI operates without a solid relational framework. The risks then become very real: poorly contextualized responses, unauthorized access, inexplicable decisions, loss of trust — and ultimately, legal and reputational exposure that no one anticipated.
According to Gartner, organizations that implement comprehensive AI governance platforms will experience 40% fewer AI-related ethical incidents by 2028. Yet fewer than a quarter of IT leaders say they are very confident in their ability to manage governance when deploying generative AI tools.
An AI without reliable identity isn't neutral. It becomes a systemic risk factor.
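To make that prerequisite concrete, here's a minimal sketch of the identity context an agent needs before it's allowed to act. Every name in it (IdentityContext, canAct, the entitlement strings) is illustrative, not a reference to any real platform:

```typescript
// Hypothetical sketch: the minimal identity context an AI agent needs
// before it is allowed to act. All names are illustrative.

type ConsentScope = "profile" | "transactions" | "location";

interface IdentityContext {
  userId: string;           // unified identifier, resolved upstream
  verified: boolean;        // has this identity been authenticated?
  roles: string[];          // e.g. "citizen", "member", "admin"
  entitlements: string[];   // actions this user is authorized to trigger
  consents: ConsentScope[]; // data the user agreed to expose
}

// Without a verified identity, the agent can answer, but never act.
function canAct(ctx: IdentityContext | null, action: string): boolean {
  if (!ctx || !ctx.verified) return false;  // unknown user: no actions
  return ctx.entitlements.includes(action); // only authorized actions
}

// Example: an anonymous session can browse, but not trigger a renewal.
console.log(canAct(null, "renew-permit")); // false
console.log(
  canAct(
    {
      userId: "u-42",
      verified: true,
      roles: ["citizen"],
      entitlements: ["renew-permit"],
      consents: ["profile"],
    },
    "renew-permit",
  ),
); // true
```

The point isn't the code. It's the default: when identity is missing, action is off the table.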
Digital identity is no longer just an identifier
In the age of AI and large language models, digital identity is no longer limited to a login or authentication. It becomes a permission framework, a recognition foundation, a trust filter, and a condition for responsible action.
It ensures consistent user recognition, proper enforcement of roles and rights, respect for consents, and limits on data exposure. It is what makes AI truly actionable.
That's the difference between a simple chatbot and a trustworthy intelligent agent.
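As an illustration of the "trust filter" idea, here's a hedged sketch: before any user data reaches the model, it passes through the consents recorded in the identity layer. The field names and the per-field consent model are assumptions made for the example:

```typescript
// Hypothetical sketch of a trust filter: user data is reduced to the
// consented fields before it ever reaches the model. Field names and
// the per-field consent model are assumptions for illustration.

interface UserRecord {
  name: string;
  email: string;
  purchaseHistory: string[];
  homeAddress: string;
}

type Consent = keyof UserRecord; // here, consent is tracked per field

// Expose to the model only the fields the user has consented to.
// Everything else is withheld, which bounds both error and exposure.
function filterForModel(
  record: UserRecord,
  consents: Set<Consent>,
): Partial<UserRecord> {
  return Object.fromEntries(
    Object.entries(record).filter(([field]) => consents.has(field as Consent)),
  ) as Partial<UserRecord>;
}

const user: UserRecord = {
  name: "Alex",
  email: "alex@example.com",
  purchaseHistory: ["skates", "helmet"],
  homeAddress: "123 Main St",
};

// The user consented to personalization on name and purchase history only.
const promptContext = filterForModel(user, new Set<Consent>(["name", "purchaseHistory"]));
console.log(promptContext); // { name: "Alex", purchaseHistory: ["skates", "helmet"] }
```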
From transactional identifier to relational key
Whether it's a citizen card, a membership card, or a loyalty card, the digital identifier has long been conceived as a transactional tool: accumulating points, offering discounts, identifying a user in the most minimal way.
This model is now showing its limits. What users are looking for today is recognition, relevance, and a continuous, coherent relationship.
This was one of the key takeaways from NRF 2026: traditional loyalty programs are running out of steam. Accumulating points no longer creates attachment. Winning brands are transforming their loyalty card into a key that unlocks an enriched experience — not just another promotional channel.
When unified, secured, and governed, digital identity allows this identifier to evolve into a universal relational key — a key that recognizes the user across all touchpoints, respects their rights and consents, and enables a lasting relationship beyond the transaction.
In the age of AI, the identifier is no longer a program. It becomes a relational key.
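To illustrate what a relational key means in practice, here's a small sketch: several channel-specific identifiers resolving to one stable identity that carries the same rights and consents everywhere. The identifiers and the map-based lookup are purely illustrative:

```typescript
// Hypothetical sketch of a relational key: one stable identity that
// every channel-specific identifier resolves to. Values are illustrative.

interface RelationalKey {
  id: string;         // the one stable identifier
  consents: string[]; // carried across every touchpoint
}

// Each channel historically issued its own partial identifier.
const channelIds = new Map<string, string>([
  ["loyalty-card:00123", "rk-7"],
  ["mobile-app:alex@example.com", "rk-7"],
  ["web:session-9f3c", "rk-7"],
]);

const keys = new Map<string, RelationalKey>([
  ["rk-7", { id: "rk-7", consents: ["recommendations"] }],
]);

// Whatever the touchpoint, resolve to the same user, same rights.
function recognize(channelId: string): RelationalKey | undefined {
  const key = channelIds.get(channelId);
  return key ? keys.get(key) : undefined;
}

console.log(recognize("loyalty-card:00123")?.id); // "rk-7"
console.log(recognize("web:session-9f3c")?.id);   // "rk-7", same user
```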
What we're seeing in the field
Whether we're talking about cities, retail networks, associations, or complex organizations, the signals are converging. The organizations moving fastest share the same understanding: before deploying AI, you need to structure the relationship. And the relationship starts with identity.
In several recent deployments, with cities of over 100,000 citizens and retail networks, we've observed the same pattern: organizations that had structured their digital identity upfront were able to deploy AI agents within weeks. Those that started by putting a conversational agent on their website instead now have to step back and lay the foundations, sometimes after a trust incident.
The website remains important, but it is no longer (and perhaps never was) a single source of truth. In reality, content contradicts itself, business systems evolve in parallel, and the AI agent can't tell the difference on its own. The true source of truth isn't a channel: it's a governance framework that reconciles data, detects inconsistencies, and brings in a human whenever confidence in the information demands it.
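In the simplest possible terms, here's a sketch of what such a framework might do: compare the answers coming from different sources, and escalate to a human when they disagree or when confidence falls below a threshold. The threshold, the data shapes, and the confidence scores are assumptions for illustration, not a real framework:

```typescript
// Hypothetical sketch of confidence-gated escalation: when sources
// disagree or confidence is low, the agent hands off to a human
// instead of answering. Threshold and shapes are assumptions.

interface SourcedAnswer {
  source: string;     // e.g. "website", "business-system"
  answer: string;
  confidence: number; // 0..1, assumed to come from retrieval scoring
}

type Outcome =
  | { kind: "answer"; text: string }
  | { kind: "escalate"; reason: string };

const THRESHOLD = 0.8;

function reconcile(candidates: SourcedAnswer[]): Outcome {
  const distinct = new Set(candidates.map((c) => c.answer));
  if (distinct.size > 1) {
    // The website and the business system contradict each other:
    // the agent cannot tell which is true on its own.
    return { kind: "escalate", reason: "sources disagree" };
  }
  const best = candidates[0];
  if (!best || best.confidence < THRESHOLD) {
    return { kind: "escalate", reason: "confidence too low" };
  }
  return { kind: "answer", text: best.answer };
}

console.log(
  reconcile([
    { source: "website", answer: "Open until 5 p.m.", confidence: 0.9 },
    { source: "business-system", answer: "Open until 8 p.m.", confidence: 0.95 },
  ]),
); // { kind: "escalate", reason: "sources disagree" }
```

Knowing when not to answer is itself a form of trust.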
The relationship is now maintained through recognition, continuity, and trust.
Digital identity as the new foundation of trust
As AI becomes embedded in daily interactions, organizations will increasingly be judged not on the presence of AI, but on the trust it inspires.
That trust rests on clear digital identity, responsible data governance, explicit rules, and an accountable relationship with users.
Digital identity thus becomes the foundation of any AI strategy that is sustainable, responsible, and value-creating.
B-CITI will soon publish a white paper exploring these issues in depth.
To be notified of its release and get early access to the 'AI & Identity' diagnostic, sign up here →