
American vs. European Trust in AI: A Tale of Two Approaches

  • Writer: THEMIS 5.0
  • Sep 20
  • 4 min read

AI was meant to reshape 2025. Some said by now it would be writing most of our code, making complex decisions, aiding in governance, healthcare, journalism, and more. While we’re not quite there, AI is absolutely part of everyday life, and with that comes a surge in use, rising concern, and a gap between trust and capability.


Graphic of the US and EU flags with the word AI imprinted on top of them.

Are There Differences in American vs. European Trust in AI?

At THEMIS 5.0, our mission is to help people assess and improve the trustworthiness of AI systems so that AI can deliver its benefits without undermining human values, fairness, or transparency. To illustrate why our work matters, let’s compare what recent U.S. data shows with what we’re learning (and building) in Europe, and how THEMIS is part of the solution. By the end, we’ll draw some conclusions about the differences between American and European trust in AI.


U.S.: Rapid adoption, incomplete confidence

Recent polling and tracking (Ipsos, Pew Research, etc.) tell a striking story in the U.S.:


  • AI awareness and usage have grown fast. Roughly half of Americans who are familiar with AI now report using it. (see Ipsos)

  • Generational gap. Younger Americans adopt new AI tools (chatbots, image generation, productivity assistants) at significantly higher rates than older age groups. (see Ipsos)

  • Trust is mixed. Two in three U.S. AI users admit they don’t fully trust the tools they rely on, but they use them anyway. (see Ipsos)

  • Concerns are real and multifaceted. Majorities of Americans worry about AI’s impact on jobs, creativity, and even human relationships. (see Pew Research Center)

  • AI angst. Nobody is sure where AI is heading, and that uncertainty fuels both excitement and unease. (see Pew Research Center)


In short: Americans are adopting AI quickly, even if their trust is shaky.


EU: Cautious progress, emphasis on governance

Europe paints a complementary but different picture:


  • Adoption is more cautious. Only 13.5% of EU enterprises with 10+ employees used AI in 2024, though the figure rises to 41% among large enterprises. (see Eurostat)

  • Trust depends on governance. Surveys show Europeans want AI to be transparent, explainable, and subject to strong oversight before they feel comfortable. (see Deloitte)

  • Privacy is paramount. Nearly nine in ten Europeans are concerned about their digital privacy, and many say they would be more willing to adopt AI if their data were clearly protected. (see Euronews)

  • Regulation leads. The EU’s AI Act reflects a precautionary stance: innovation is welcome, but only within guardrails that protect rights and ethics. (see European Commission)


Key differences: Americans vs. Europeans

Putting the two together reveals several important contrasts:

  • Speed of adoption. U.S.: Often fast, especially among technologically savvy users; willingness to use even when trust is not fully established. Europe: More measured; uptake shaped by institutional and regulatory contexts; concerns often precede use.

  • Trust gap. U.S.: Many are using AI despite low or conditional trust; trust is very much “works-for-me (with caveats)”. Europe: Trust tends to be seen as something to be earned, through transparency, safeguards, and ethical and legal oversight.

  • Regulatory / ethical infrastructure. U.S.: Fragmented; policy and standardisation often lag behind practice; consumer protection and legal regimes are still catching up. Europe: More structured; EU values and legislation (the AI Act, etc.) are pushing for ethical, legal, and technical standards, with many co-creation and stakeholder engagement processes.

  • User involvement. U.S.: Users often adopt with less input into design or feedback mechanisms. Europe: More participatory; European projects, public bodies, and NGOs push for user involvement, citizens’ expectations, and co-creation.

These differences matter. They mean that AI systems developed or deployed without meaningful trustworthiness checks risk backlash, misuse, or failing to meet the expectations of diverse users - in both the U.S. and Europe.


Why We Need THEMIS 5.0 to Close the Trust Gap

THEMIS 5.0 is precisely designed to address the gaps revealed in these data. Here is how:


  1. Human-Centred Trustworthiness Optimisation Ecosystem: THEMIS builds tools that let users evaluate how trustworthy an AI system is along dimensions such as fairness, technical accuracy & robustness, transparency, and moral or ethical norms. The goal is not just to audit once, but to allow continuous improvement via feedback loops.

  2. Co-creation and user involvement: Across multiple European countries, THEMIS engages users, experts, civil society, and developers to define what trust means in different contexts, what values and risks matter, and how tools should behave. For example, workshops (over 270 participants across 8 countries) have shaped the THEMIS framework.

  3. Piloting in critical sectors: THEMIS isn’t hypothetical. It is applying its methods in real-world sectors such as healthcare, media, and logistics/port management. These are domains where decisions from AI can be high-stakes and where trust, transparency, and fairness are especially critical.

  4. Alignment with regulation and standards: THEMIS translates ethical, legal, and technical standards (including those from the EU AI Act, ethics guidelines, and emerging norms) into concrete evaluation tools. It helps organisations demonstrate and improve trustworthiness in ways that are compliant, understandable, and accountable.

  5. Modules for explainability, risk management, and continuous improvement: Key components in THEMIS include a “Trustworthiness Assessor”, user profiling tools, optimisation suggesters, anomaly detection, and decision-impact assessment. These allow users both to understand why AI behaves the way it does and to engage in improving it.


Why Now?

AI tools are now widely used, but far from perfectly trusted. As in the U.S., many users globally are adopting AI even while uncertain about its fairness, reliability, or respect for their values: a risky “use-before-trust” moment in which errors, bias, or broken promises could deeply erode confidence. In Europe, legislation like the AI Act is pushing accountability, fairness, and transparency to the forefront, meaning organisations urgently need practical tools to meet both regulatory and ethical expectations. At the same time, users increasingly demand a voice in shaping how AI works: they expect systems not just to function, but to reflect their values, give them oversight, and avoid harm. And in high-stakes domains such as healthcare, media, and logistics, where AI decisions directly affect lives, democratic discourse, and resource allocation, trustworthiness is not optional; it is essential.


Yes, there may be a difference between how Americans and Europeans are engaging with AI. Americans tend to move faster in adoption; Europeans tend to insist more on trust, regulation, and user values. But both perspectives show us that adoption alone is not enough: what matters is how AI is used and who it respects. The future of AI should be not just about what we can do, but what we should do, and THEMIS 5.0 helps build the path from now to that future.
