Building Trust in AI: Introducing the Trustworthiness Optimisation Process (TOP)
- THEMIS 5.0
Artificial intelligence (AI) is becoming deeply embedded in critical societal functions, from healthcare and media to port operations and public services. Ensuring that AI systems are not only effective but also trustworthy is no longer optional; it is imperative. Our latest publication, “Trustworthiness Optimisation Process: A Methodology for Assessing and Enhancing Trust in AI Systems,” published in Electronics (2025), offers a new approach to this urgent challenge.

The Trustworthiness Optimisation Process, or TOP, is a structured methodology designed to operationalise trustworthiness throughout the entire AI lifecycle. Rather than treating trust as an abstract or static concept, TOP breaks it down into tangible, actionable components that can be systematically addressed by developers, regulators, and organisations deploying AI.
TOP unfolds across four interconnected stages. It begins with identification, where socio-technical information and stakeholder requirements are thoroughly collected and documented, ensuring that the unique context of the AI system is well understood from the start. The second stage, assessment, provides a rigorous quantitative and risk-based evaluation of the system, highlighting areas of potential concern and laying the groundwork for targeted intervention. The third stage, exploration, invites multidisciplinary investigation of tailored mitigation techniques to address the specific challenges identified. Finally, the enhancement stage focuses on implementing improvements and monitoring continuously, so that the system evolves responsibly over time.
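To make the four stages a little more concrete, here is a minimal, purely illustrative sketch of how they could be represented as an ordered cycle in Python. The stage names follow the publication, but the class and function names (`Stage`, `TrustworthinessRecord`, `run_top_cycle`) and the example findings are our own placeholders, not part of the published methodology.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """The four interconnected TOP stages, in their intended order."""
    IDENTIFICATION = auto()  # collect socio-technical context and stakeholder requirements
    ASSESSMENT = auto()      # quantitative, risk-based evaluation of the system
    EXPLORATION = auto()     # multidisciplinary search for tailored mitigation techniques
    ENHANCEMENT = auto()     # implement improvements and monitor continuously


@dataclass
class TrustworthinessRecord:
    """Accumulates the outputs of each stage for one AI system."""
    system_name: str
    findings: dict[Stage, list[str]] = field(default_factory=dict)

    def log(self, stage: Stage, finding: str) -> None:
        self.findings.setdefault(stage, []).append(finding)


def run_top_cycle(record: TrustworthinessRecord) -> None:
    """Walk the stages in order; in practice each stage would involve
    dedicated tooling and human review rather than a simple printout."""
    for stage in Stage:
        for finding in record.findings.get(stage, []):
            print(f"[{record.system_name}] {stage.name}: {finding}")


if __name__ == "__main__":
    record = TrustworthinessRecord("clinical-triage-model")
    record.log(Stage.IDENTIFICATION, "clinicians and patients identified as key stakeholders")
    record.log(Stage.ASSESSMENT, "fairness risk flagged for under-represented age groups")
    run_top_cycle(record)
```

The point of the sketch is simply that the stages form a repeatable cycle whose outputs are recorded per system, rather than a one-off checklist.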
A number of key features set TOP apart. It is compatible with the full AI system lifecycle and is designed for extensibility, allowing it to adapt to emerging needs and technologies. It recognises the importance of addressing conflicting requirements and keeping humans at the centre of the process. Multidisciplinary engagement is not just encouraged but built into the structure of the methodology, ensuring that perspectives from ethics, law, engineering, social sciences, and other domains are all part of the solution.
Central to TOP’s implementation are two crucial enablers. First, the use of documentation cards—structured formats for capturing details on use cases, data, models, and methods—promotes transparency and accountability across all stages of system development and deployment. Second, the integration of risk management ensures that identified issues are addressed not just ethically but also in a way that aligns with legal and regulatory expectations.
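As a rough illustration of the documentation-card idea, a card can be thought of as a small structured record that travels with the system. The fields below are our own guess at what such a card might capture for a use case, dataset, model, or method; they are not the schema defined in the publication.

```python
from dataclasses import dataclass, field


@dataclass
class DocumentationCard:
    """Illustrative structure for a documentation card covering a
    use case, its data, the model, or a method applied to it."""
    card_type: str                 # e.g. "use case", "data", "model", or "method"
    title: str
    description: str
    responsible_party: str
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the card for inclusion in project documentation."""
        risks = "\n".join(f"- {r}" for r in self.identified_risks) or "- none recorded"
        fixes = "\n".join(f"- {m}" for m in self.mitigations) or "- none recorded"
        return (
            f"## {self.card_type.title()} card: {self.title}\n\n"
            f"{self.description}\n\n"
            f"Responsible: {self.responsible_party}\n\n"
            f"### Identified risks\n{risks}\n\n"
            f"### Mitigations\n{fixes}\n"
        )
```

Keeping cards like this alongside the risk register is what links the transparency enabler to the risk-management enabler: every documented risk has a named owner and a traceable mitigation.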
This work is part of our THEMIS 5.0 project and represents a collective effort to bridge the gap between abstract ethical principles and practical, implementable processes. Our goal is to empower AI stakeholders with tools they can use today to build systems that not only perform, but can be trusted to serve society fairly and responsibly.
We invite you to explore the full publication and engage with our work.