
New Paper: From Principle to Practice, a New Methodology for Operationalising Trustworthy AI

  • Writer: THEMIS 5.0
  • 7 days ago
  • 2 min read

As artificial intelligence systems continue to shape decisions that affect our lives, from healthcare and media to maritime operations, the demand for trustworthy AI (TAI) has never been greater. Yet translating ethical principles such as fairness, transparency, and accountability into concrete, operational tools remains a critical challenge.


A new paper published in Electronics 2025 offers a breakthrough solution: the Trustworthiness Optimisation Process (TOP). Developed by researchers from the National Technical University of Athens and the University of Piraeus, and supported by the THEMIS 5.0 project, this methodology bridges the gap between high-level guidelines and real-world AI practice.


Graphic taken from the paper that demonstrates the TOP process as detailed in the text.

What is TOP?

TOP is a four-stage, risk-based methodology designed to help AI developers, risk managers, and decision-makers assess and enhance the trustworthiness of AI systems across their entire lifecycle. Unlike previous approaches that often focus on compliance or individual aspects like fairness or explainability, TOP offers a systematic, modular process that:

  • Aligns with existing AI lifecycle phases and ISO risk standards,

  • Incorporates both quantitative metrics and qualitative documentation tools ("cards"),

  • Enables stakeholder participation and human oversight at every stage,

  • Supports multi-criteria decision-making to navigate trade-offs between trust dimensions.


Why Does It Matter?

The European AI Act, the NIST AI Risk Management Framework, and ENISA’s cybersecurity practices all emphasize the need for operational tools that help implement trustworthiness principles in practice. TOP does just that, offering a structured, flexible, and extensible method for evaluating risks, identifying appropriate mitigation strategies, and making those strategies transparent and auditable.


The paper not only introduces the methodology but also demonstrates its application through real-world case studies in three critical sectors: healthcare, media, and ports. A detailed experimental case using the Adult Income dataset shows how fairness issues can be assessed and mitigated through a combination of IBM’s AIF360 toolkit and TOP’s structured process.
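To give a feel for the kind of fairness assessment the case study describes, here is a minimal sketch of disparate impact, one of the group-fairness metrics AIF360 computes on datasets like Adult Income. The toy records, group labels, and threshold commentary below are illustrative assumptions, not data or code from the paper.

```python
# Illustrative only: toy predictions by group, (sex, predicted_income_over_50k).
records = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 0),
]

def positive_rate(group):
    """Share of favourable outcomes within one group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact: ratio of favourable-outcome rates, unprivileged / privileged.
# A value below 0.8 is commonly flagged as potential disparate impact.
di = positive_rate("female") / positive_rate("male")
print(round(di, 2))  # 0.25 / 0.5 = 0.5
```

A mitigation step such as AIF360's reweighing would then adjust instance weights so that this ratio moves closer to 1 before retraining.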


What’s New?

  • Cards as documentation tools: Use case, data, model, and method cards allow for rich, contextualised system documentation.

  • Integration with risk frameworks: TOP aligns with ISO 31000, ISO/IEC 42001, and ISO/IEC 23894, making it easier to embed in existing compliance structures.

  • Multi-criteria optimisation: Using decision-support tools like VIKOR, TOP helps resolve conflicts between trustworthiness characteristics (e.g., fairness vs. accuracy).

  • Human-in-the-loop by design: The process ensures domain experts, developers, legal professionals, and end users remain at the centre of the trustworthiness conversation.
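The trade-off resolution mentioned above can be sketched with a small VIKOR-style compromise ranking. The alternatives, scores, and weights below are invented for illustration and are not taken from the paper; both criteria are treated as "higher is better".

```python
# Hedged sketch of VIKOR compromise ranking over hypothetical mitigation options.
alternatives = {
    "reweighing":    (0.82, 0.95),  # (accuracy, fairness) - invented scores
    "no_mitigation": (0.85, 0.60),
    "adversarial":   (0.80, 0.90),
}
weights = (0.5, 0.5)  # assumed equal importance of the two criteria
v = 0.5               # balance of group utility (S) vs individual regret (R)

def vikor_rank(alts, weights, v=0.5):
    names = list(alts)
    cols = list(zip(*alts.values()))
    best = [max(c) for c in cols]    # ideal value per criterion
    worst = [min(c) for c in cols]   # anti-ideal value per criterion
    S, R = {}, {}
    for name, scores in alts.items():
        # Weighted normalised distance from the ideal, per criterion.
        terms = [w * (b - s) / (b - wst)
                 for w, b, wst, s in zip(weights, best, worst, scores)]
        S[name], R[name] = sum(terms), max(terms)
    s_star, s_minus = min(S.values()), max(S.values())
    r_star, r_minus = min(R.values()), max(R.values())
    # Q blends overall utility and worst-case regret; lower Q ranks higher.
    Q = {n: v * (S[n] - s_star) / (s_minus - s_star)
            + (1 - v) * (R[n] - r_star) / (r_minus - r_star)
         for n in names}
    return sorted(names, key=Q.get)

print(vikor_rank(alternatives, weights, v))  # best compromise listed first
```

Under these assumed scores, the option that sacrifices a little accuracy for a large fairness gain ranks as the best compromise, which is exactly the kind of trade-off the process is designed to surface for human review.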


What's Next for Trustworthy AI?

The authors are now applying TOP in real-world AI deployments across maritime, media, and healthcare domains as part of the THEMIS 5.0 project. Future directions include improving automation, incorporating symbolic and agentic AI components for enhanced explainability, and adapting the process for large-scale and dynamic AI environments.


Read the Full Paper



Electronics 2025, Volume 14, Issue 7, Article 1454
By Mattheos Fikardos, Katerina Lepenioti, Dimitris Apostolou, and Gregoris Mentzas


