
When AI Makes a Mistake, Why Do We Judge It Differently?

  • Writer: THEMIS 5.0
  • 2 days ago
  • 3 min read

Imagine this: You’re in a hospital waiting room. Two people give you the wrong diagnosis: one is a human doctor, the other an AI system. Most of us would forgive the human; after all, even doctors make mistakes. But we lose faith in the AI instantly.


Graphic of a human-like robot pointing to a digital X-ray of a mouth.
What if an AI Doctor Makes a Mistake?

This double standard is everywhere. We want AI to be perfect, friendly, and transparent, yet when it comes close to those ideals, we start worrying it might be too powerful, too human-like. This tension, the trust gap between humans and AI, is fast becoming one of the defining challenges of Europe’s AI future.


Why AI Doesn’t Automatically Earn Human Trust

Humans forgive other humans for errors, misjudgments, and even lapses in concentration. We understand the messy mix of intuition, expertise, and emotion behind each decision. AI, on the other hand, is judged on a much harsher scale:


  • A minor error can cause instant distrust.

  • A lack of explanation feels threatening rather than neutral.

  • Even flawless performance sometimes triggers suspicion: “it’s too good to be true.”


Recent UK surveys reveal this paradox clearly. Nearly half of consumers say they would accept health advice from AI systems, but only if the systems explain their reasoning and leave room for human oversight. In other words, people aren’t rejecting AI wholesale. They just want AI to communicate like a trusted partner rather than a black box.


Europe’s Push for Trustworthy AI

The EU has been working hard to respond to these concerns. In July 2025, the European Commission published the General-Purpose AI Code of Practice, covering transparency, copyright, and safety for advanced AI models. Companies like OpenAI and Google signed on; Meta did not, citing ambiguity and risks of overreach.


Meanwhile, Germany missed the August 2 deadline to appoint national authorities to enforce the EU AI Act, showing that even as rules tighten, the path to real-world trust remains bumpy. Regulation can enforce accountability, but it can’t magically make people feel comfortable with AI in their everyday lives.


How Our Work Helps

This is where the THEMIS project steps in with its trustworthiness assessment solution, TOP. Rather than focusing only on laws or labels, TOP gives everyday users practical tools to understand and shape AI systems according to their own values and needs.


Think of it as a 'trust coach' that helps users assess their AI experiences:

  • Transparency Made Simple: Clear explanations about how an AI system reached its conclusions without technical jargon.

  • Human Oversight on Demand: Easy ways for users to escalate to a human decision-maker if they feel uncertain.

  • Personalised Trust Settings: Users can decide how cautious or fast they want the AI to be, much like adjusting privacy settings on a smartphone (see the sketch after this list).
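
To make that last point concrete, here is a minimal sketch in Python of how personalised trust settings could work. It is an illustration only, not THEMIS or TOP code; every name in it (TrustSettings, handle_prediction, min_confidence) is hypothetical. The idea: the user sets a confidence threshold, and any AI suggestion below it is escalated to a human.

    from dataclasses import dataclass

    # Hypothetical illustration only -- not the actual THEMIS/TOP API.
    @dataclass
    class TrustSettings:
        """Per-user trust preferences, like privacy settings on a phone."""
        min_confidence: float = 0.85  # below this, defer to a human
        explain: bool = True          # attach a plain-language rationale

    def handle_prediction(label: str, confidence: float, settings: TrustSettings) -> str:
        """Apply one user's trust settings to a single AI suggestion."""
        if confidence < settings.min_confidence:
            # Human Oversight on Demand: escalate instead of answering.
            return f"Referred to a human reviewer (AI confidence {confidence:.0%})."
        answer = f"AI suggestion: {label}"
        if settings.explain:
            # Transparency Made Simple: show how sure the system is.
            answer += f" (confidence {confidence:.0%})"
        return answer

    # A cautious user raises the bar; a "fast" user could lower it.
    cautious = TrustSettings(min_confidence=0.95)
    print(handle_prediction("all clear", confidence=0.90, settings=cautious))
    # -> Referred to a human reviewer (AI confidence 90%).

In this sketch the threshold plays the role of the "cautious or fast" dial: raising it trades speed for more human review, which is exactly the kind of preference TOP aims to put in users' hands.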


By aligning AI behaviour with individual preferences, THEMIS turns trust from an abstract ideal into a daily, interactive experience.


Bridging the Gap: From Regulation to Relationships

Europe’s AI Code of Practice and AI Act create the guardrails. But as recent headlines show, the real frontier of trust lies in relationships, not just rules.


If people can understand, question, and even customise the AI they use, whether it’s a medical assistant, a financial advisor, or a news recommender, the trust gap begins to close.

That’s the promise of THEMIS: opening the black box of AI for everyone.
