What happens when the people driving technological innovation are miles ahead of the people expected to live with it? Hotwire’s latest Frontier Tech Confidence Tracker suggests we’re heading straight into a crisis of legitimacy.
Based on polling of more than 8,000 members of the public and 730 business leaders across five European markets, the study reveals a deep split. Most business leaders (79%) think the public is positive about tech adoption, but fewer than half (46%) of the public actually are.
Additionally, only 36% of the public would trust a company using frontier tech without transparency, compared with 67% of business leaders.
While business leaders are charging ahead with enthusiasm for frontier technologies (index score: 77), the public is hesitant (score: 48). The gap between those building the technology and those expected to use it is widening. At the centre of that divide is consent: people are being left out of decisions that increasingly shape how they live and work.
The consent gap: tech without transparency
AI is already embedded in everyday consumer experiences, from pricing to content curation, CV screening to customer service. But it’s often invisible. According to Ofcom, only 30% of adults feel they can confidently judge whether an image, audio clip or video was generated by AI. As a result, an inquiry into media literacy was launched to examine how we’re keeping pace.
And yet, most brands haven’t adapted their communications or user experience to match this new reality. The result? People feel excluded from decisions that increasingly affect their lives. AI is being rolled out, not brought in.
AI isn’t a calculator
Some in tech argue that AI doesn’t need to be explained any more than a calculator does. But that analogy doesn’t hold. Calculators don’t determine your loan approval or moderate your content. AI does.
These aren’t neutral tools; they’re systems making subjective judgements. When algorithms influence high-stakes outcomes, people need to understand why. Without explainability, there is no accountability. And without accountability, trust breaks down.
Technocrats vs. trust
The report also reveals a shift in who the public wants to hear from. Business leaders still look to tech founders for guidance. But the public has moved on. Trust is returning to scientists, researchers, and independent experts. Pew Research shows that only a minority of Americans trust companies to use AI responsibly, and sentiment in the UK mirrors this.
The leaders driving AI adoption today — Zuckerberg, Musk, Altman, Bezos — are outliers by design. Often high-functioning and neurodivergent, they excel at abstract thinking and system-level problem solving. That’s a superpower in science and innovation. But it also means they may see the world very differently from the people living in it.
This matters. In any functioning democracy, progress demands participation. If only the most technically or commercially literate shape the future, we’re not building a better society — we’re building one that fewer people recognise.
Time to talk differently
If progress no longer equals trust, communications must evolve. The UK government’s Data and AI Tracker Survey confirms that people want more than disclosure. They want dialogue. They want a voice.
Here’s how brands can start:
Explain what matters. Offer simple, human explanations for how AI impacts outcomes.
Bring in validators. Collaborate with trusted academic or regulatory voices to audit and explain AI use.
Elevate the credible. Use scientists, engineers, and independent experts as messengers.
Create real forums. Build space for feedback and conversation, not just announcements.
Because when people feel decisions are made about them, not with them, you lose more than your reputation: you lose legitimacy.
A future that includes everyone
AI has the potential to revolutionise education, medicine, and work. But if it feels imposed rather than understood, it will face resistance. Business still has a choice. Transparency, not perfection, is the gateway to trust. And trust is what makes innovation sustainable.
So, ask yourself:
Do people know when they’re interacting with AI?
Do they understand what it’s doing?
Do they feel part of the future being built?
If not, the system isn’t progressing. It’s just concentrating power. And history shows us where that leads.