Building Trust in AI: Why Governance, Transparency, and Public Confidence Matter

  • Writer: THEMIS 5.0
  • Dec 11
  • 3 min read

Figure: Global trust in AI regulation (world map showing 53% trust in the EU, 37% in the U.S., and 27% in China)

Artificial intelligence (AI) is no longer a niche technology: it is woven into medicine, education, public services, content moderation, and even democratic discourse. Yet alongside AI's promise comes a pressing question that THEMIS is uniquely positioned to explore...


How do we build and sustain trust in AI? Not just in the technology itself, but in the regulatory systems that govern its safe and ethical use.

A Global Snapshot of Trust in AI Regulation

Recent global research (Pew Research) reveals stark differences in public confidence in who can be trusted to regulate AI:


  • Across 25 countries, a median 53% of adults trust the EU to regulate AI effectively, while 37% trust the U.S. and only 27% trust China to do the same.

  • Trust in the EU varies within Europe. Germany and the Netherlands show higher trust levels than Greece and Italy, but overall, trust in the EU still outpaces trust in other major powers.

  • People with a positive view of the EU and those who are more excited than worried about AI tend to place more confidence in regulation.


This data highlights a global trust gap. Citizens are cautious about unregulated AI, but they still see the EU as a more credible regulator than the U.S. or China.


Why Trust in AI Matters

Trust isn’t just a nice-to-have; it’s central to whether societies will adopt and benefit from AI systems:


  • Studies (KPMG) show that people equipped with AI training and education report greater confidence in using and managing AI responsibly.

  • Trust influences public willingness to support AI in high-impact domains like healthcare, policing, and election systems. Research (arXiv) finds that as trust declines, risk perception rises, and so does demand for regulation.

  • Algorithmic transparency and accountability have been shown to mitigate skeptical attitudes and promote trust, signaling the importance of clear standards in AI governance.


Without trust, even beneficial innovations like AI-assisted diagnostics or automated public services can face resistance, and in some cases, backlash.


Challenges at the Intersection of Trust and Regulation


1. Healthcare: Promise and Complexity

AI could transform healthcare diagnostics and treatment, improving outcomes and reducing costs. But trust here hinges on security and regulatory clarity. Euractiv’s reporting highlights how the healthcare sector grapples with balancing innovation against complex legal and safety requirements. Patients and practitioners alike need assurances that data is secure and that AI tools are rigorously vetted.


2. Public Anxiety and Uneven Confidence

Beyond the statistics, AI anxiety is rising worldwide: people express concerns about privacy, job loss, misinformation, bias, and harmful decisions made by AI models. Polls show that citizens are more comfortable trusting their own governments or the EU than distant powers like the U.S. or China to regulate AI well. This means policymakers must address both the technical risks and the perceptions of harm in order to build durable trust. (Daily Sabah)


3. Fragmented Regulatory Approaches

AI governance is currently patchy:


  • The EU’s AI Act aims for a harmonised, risk-based approach, placing obligations on developers and deployers to ensure safety, transparency, and human oversight.

  • The U.S. still relies heavily on sector-specific regulation, which can lead to gaps and fragmented protections.

  • China’s centralized model enables rapid deployment but has limited external accountability and less public visibility into decision-making.


These differences matter because public trust is closely tied to regulatory clarity, accountability, and transparency. Trust erodes when people feel the rules are opaque or inconsistent.


Pathways to Strengthen Trust in AI

So what can policymakers, civil society, and international institutions do?


  • Prioritise Transparency and Accountability

Mechanisms like independent auditing, explainable AI standards, and clear data governance requirements help demystify how AI systems operate. Research shows that transparency can directly mitigate public skepticism.


  • Invest in Public Education and Literacy

Public understanding of AI is uneven. Less familiarity often correlates with higher anxiety and lower trust. Education campaigns and accessible explainers can help citizens engage with AI more confidently.


  • Harmonise Regulation Across Jurisdictions

Global challenges like cross-border data flows and multinational AI services require coherent rules. Collaborative frameworks, building on EU norms and encouraging comparability across regions, can reduce trust gaps.


  • Embed Ethical Principles into Practice

Ethics should be more than aspiration: principles like fairness, inclusivity, and human rights need operational backing in law, procurement, and public deployments. Research emphasises that ethical frameworks grounded in practical compliance build resilience and societal confidence.


Trust as a Policy Priority

AI’s future will be shaped not only by code and capital, but by confidence: how much people believe that institutions and technologies will protect their rights, promote fairness, and serve the public good.


For THEMIS and like-minded organisations advancing democratic oversight and societal trust, the task is clear...

Strengthen governance that is transparent, participatory, and accountable. Only then can societies fully harness the potential of AI.
