From Principles to Practice: What the TRUST-AI Workshop Tells Us About the Future of AI Trust in Europe
- THEMIS 5.0

If there is one thing Europe agrees on when it comes to AI, it is that trust matters.
But beyond policy frameworks and high-level principles, a more difficult question remains: what does trustworthy AI actually look like in practice?

The TRUST-AI 2025 workshop in Bologna, organised as part of the European Conference on Artificial Intelligence (ECAI), supported by THEMIS 5.0 and its consortium partners, offers one of the clearest windows into how researchers and practitioners across Europe are trying to answer that question. The proceedings bring together a diverse body of work that moves the conversation from theory to implementation.
The scale of interest alone tells a story. The workshop received 60 submissions, with 37 peer-reviewed papers accepted into the proceedings, a level of engagement that reflects a growing urgency. As AI becomes embedded in critical systems, the need to operationalise trust is no longer optional.
Read the Full Proceedings - https://ceur-ws.org/Vol-4132/
What makes TRUST-AI particularly valuable is its positioning. It is not just an academic exercise; it is explicitly designed to bring together researchers and practitioners to explore real-world applications, challenges, and methodologies. In other words, this is where trust stops being an abstract principle and starts becoming something you can build.

Insights: What Does Trustworthy AI Mean in Practice?
One of the clearest insights from the proceedings is that trustworthy AI is not a single problem; it is a system of interconnected challenges. The papers span a wide range of themes, including:
- Fairness and bias mitigation, such as evaluating AI across different languages and contexts
- Data quality and robustness, recognising that trust begins with the data itself
- Hallucination detection and explainability, especially in large language models
- Concept drift and system evolution, addressing how trust changes over time
This breadth matters. It shows that trust is not a feature you can add on; it is something that must be embedded across the entire AI lifecycle. And crucially, it reflects a shift in mindset: trustworthy AI is no longer just about defining principles; it is about managing trade-offs in real systems.
A strong theme across the workshop is the move toward risk-based approaches to trust.
Several contributions explore how risk can be systematically assessed and managed, drawing on frameworks such as the NIST AI RMF and ENISA guidance. These approaches move beyond theory, offering structured ways to identify, evaluate, and mitigate risks in real-world AI deployments. The implication is clear: trust is not static; it must be continuously assessed, updated, and communicated.
Another key insight emerging from the TRUST-AI contributions is the gap between technical trustworthiness and human trust. Papers and position discussions raise important questions:
How do we design AI systems that users can meaningfully challenge or “distrust”?
How do we bridge the gap between functional performance and societal expectations?
How do we ensure that explanations are not just available, but actually useful?
These are not purely technical questions; they are deeply human ones. And they point to a critical shift in the field: trustworthy AI is as much about people and context as it is about models and metrics.
THEMIS 5.0: Turning Research into Capability
What connects much of this work is the role of workshop instigator THEMIS 5.0. Our project focuses on developing tools, methods, and evidence to help organisations evaluate when AI systems can be trusted, across sectors such as media, ports, and healthcare. Its consortium brings together leading research centres, industry actors, and innovation networks across Europe, enabling a uniquely interdisciplinary approach.
Running the TRUST-AI workshop reflects our mission in action, connecting cutting-edge research with practical, deployable approaches to AI trust.
Editors for the TRUST-AI proceedings came from SINTEF, University of Piraeus & ICCS, NTUA, University of Southampton, KU Leuven, Centre for IT & IP Law, ATC - Athens Technology Centre, Institute of Philosophy & Technology, and Engineering.
TRUST-AI: Why This Matters Now
The timing of these publications is important. Across Europe, the conversation is shifting rapidly from principles to regulation, from compliance to implementation, and from trust as a concept to trust as a capability. The TRUST-AI proceedings show that the research community is already working through these challenges, offering early solutions while also exposing the complexity of the task ahead.
Explore the papers now - https://ceur-ws.org/Vol-4132/