
THEMIS 5.0: Paving the Way for Trustworthy AI Systems

Artificial Intelligence (AI) research has long sought to mirror human behaviour within socio-technical systems, aiming to replace human decision-making through the development and integration of AI systems. A crucial concern emerges, however, when the emphasis on machine autonomy produces AI systems that are opaque and prone to bias, eroding user trust and potentially leading to harmful decisions.

In response to this challenge, developers are incorporating ethical reflection into AI system design, often in collaboration with ethicists. Notable initiatives such as IBM's AI Fairness 360 toolkit and Google's What-If Tool aim to evaluate the technical fairness of AI systems. Yet a growing consensus among technology companies and thought leaders holds that building trustworthiness also means empowering human users to make design choices, giving them control over AI technology.

Image: a human hand and an AI robot hand touching fingers, illustrating the need to give humans control over AI

Measuring the trustworthiness of AI technology requires considering human performance and satisfaction metrics, along with factors related to decision support needs within the business context of the socio-technical system. The legal, moral, and ethical principles of human users, as well as organisational responsibility and liability, must also be taken into account. Prioritising these principles becomes a competitive advantage for organisations embedding AI systems, fostering a culture of safety and effective decision-making.

Enter THEMIS 5.0, a groundbreaking European research and innovation project co-creating an AI-driven, human-centred trustworthiness optimisation ecosystem and framework. Comprising cloud-based AI services, THEMIS 5.0 engages users through AI-driven interactive dialogues and helps them assess how trustworthy they consider a particular AI decision to be. In other words, an AI-driven conversational agent (chatbot) ensures transparency by providing human-interpretable explanations of the decision-making process under assessment, while gathering information about user values and business goals so that the results remain relevant to user needs. This continuous interaction facilitates trustworthiness improvement cycles, enabling human-centred assessments by users and corrective actions by developers and providers.

Image: flowchart showing the process for experiment-driven, human-centric ‘by design’ decision support

THEMIS 5.0 empowers users by:

  • Eliciting decision support needs, moral values, and key success factors through psychological and behavioural analysis.

  • Evaluating and optimising trustworthiness, in a human-centred way, with respect to fairness, technical accuracy, and robustness.

  • Enhancing explainability through anomaly detection indicators.

  • Taking human-centred actions to optimise trustworthiness, based on risk mitigation approaches.

To implement THEMIS 5.0, the requirements of the EU AI Act for trustworthy AI are translated into technical specifications, incorporating standards covering the lifecycle of trustworthy AI systems. Following a European human-centric approach, THEMIS 5.0 runs strong co-creation processes to align the legal, ethical, and robustness components of AI. Co-creation spans eight European countries to ensure broad acceptance, focusing on developers, users, and online service providers from critical application and industrial sectors.

Finally, THEMIS 5.0 will undergo pilot testing in three well-defined use cases within the healthcare, transportation, and media sectors. Diverse user communities from Greece, Bulgaria, and Spain will actively participate, testing and evaluating the results to ensure the success and applicability of THEMIS 5.0 in real-world scenarios.

Be part of the solution and join us by signing up for news and notifications in the blue box at the bottom of our website: HOME | THEMIS 5.0
