When Brits Say They Trust AI More Than Government, What Does That Really Mean?
- THEMIS 5.0

- Nov 13
- 3 min read
Recent UK research on traditional media in the age of AI, conducted by OnePoll for 72point, shows something striking: around 44% of UK adults say they trust artificial intelligence to deliver truthful and factual information, compared with 38% who say the same of the government and 24% of social media influencers. Traditional news outlets remain the most trusted, at 55%.

At first glance, this is surprising. AI systems are often met with scepticism; people worry about bias, transparency, and accountability. Yet here we see a growing perception that machine-generated content may be more reliable than the statements of public institutions.
So what’s going on behind the numbers? And what does this mean for those of us working to build trustworthy AI through initiatives like THEMIS 5.0?
Why people might trust AI more — and what the risks are
1. Perceived neutrality and efficiency: People often view AI as less political, less driven by personal agendas or partisanship, than human institutions. The study found that many Britons now see AI as a more reliable source of factual information than friends, family, influencers, or even government officials.
2. Declining confidence in institutions: At the same time, institutional trust remains low. In the same poll, 47% of respondents said they believe public institutions “no longer serve them effectively,” pointing to disillusionment with traditional governance structures.
3. The black-box flip side: However, trusting AI doesn’t mean understanding it. A separate Tony Blair Institute and Ipsos study found that 38% of UK adults cited a lack of trust in AI-generated content as a major barrier to adoption. People may rely on AI outputs, but still feel uneasy about how those outputs are produced.
So while the headline “Brits trust AI more than government” grabs attention, the real story is that trust in AI is conditional: it depends on transparency, reliability, and alignment with human values.
How THEMIS 5.0 fits in: from blind faith to value-based trust
At THEMIS, we believe that trust in AI can’t be earned through compliance alone. It must be built through alignment: ensuring that systems reflect the values, expectations, and needs of the people who use them.
Through our work across the port, healthcare, and media sectors, THEMIS develops risk-based methodologies to help organisations evaluate when and how AI can be trusted.
This approach includes:
Value alignment over blind faith: The public’s growing trust in AI is an opportunity, but it must be tempered with responsibility. THEMIS helps organisations go beyond “it works” toward understanding who it works for and why.
Risk-based, contextual assessment: Trustworthiness isn’t universal; it’s contextual. THEMIS applies a risk-based framework that considers the stakes, roles, and vulnerabilities in each setting. Trust in AI for diagnosing cancer is very different from trust in AI for moderating online speech.
Empowerment through understanding: Studies show that trust increases when people understand how AI operates (Tony Blair Institute, 2025). THEMIS focuses on participatory evaluation and literacy, giving people the tools to question, assess, and interact with AI systems confidently.
What this means for policymakers, developers, and practitioners
For policymakers: Regulation must go beyond safety to address transparency, responsiveness, and human control.
For AI developers: Trust isn’t automatic; it must be designed in, through explainability, fairness, and user control.
For organisations adopting AI: Don’t assume trust because a tool is certified. Engage with users, communicate openly, and provide ways to challenge or appeal AI-driven decisions.
A new kind of trust
That Britons may now trust AI more than their government should give us pause: not because machines are “winning,” but because trust itself is shifting. People are seeking reliability and fairness wherever they can find it.
Projects like THEMIS 5.0 show that true trust in AI doesn’t come from technological prowess alone. It comes from participation, transparency, and shared values.
In the end, the goal isn’t to make AI more human; it’s to make AI more trustworthy to humans.