The festive season may have been in full swing, but that didn't stop the THEMIS consortium from gathering at SINTEF's offices in Oslo for a pivotal all-partner meeting on the 10th and 11th of December, 2024. THEMIS, a European research and innovation project dedicated to enabling people to assess trust in AI systems through risk management, is at a critical juncture as it transitions into the second year of its three-year cycle. With official co-creation activities drawing to a close, the meeting focused on charting the pathway forward for the project's next technical and piloting phases.
Day 1: Reviewing Specifications and Use Cases
The first day commenced with a workshop centred on the project's use cases and the cross-cutting theme of trustworthiness. A key focus was on the innovative “risk card” approach—a method adopted to understand specifications for the various elements of the proposed THEMIS system. This discussion highlighted the fact that while technical assessments address robustness and accuracy, contextual understanding is essential to evaluate risk effectively.
The workshop also delved into the complex relationship between trust and risk. For example, while risks are domain and sector specific, they also create cascading effects in interconnected areas. The THEMIS framework aims to answer a crucial question: starting from an AI system, how do you arrive at effective mitigation strategies? The discussion emphasised the importance of contextualising trustworthiness assessments, focusing on the dynamic between the trustor (e.g., a THEMIS user) and the trustee (e.g., the AI system).
Partners explored how risk assessments could help users determine trustworthiness based on attributes most relevant to them, such as privacy, accuracy, or reliability. It should be noted that THEMIS’s emphasis is not on compliance but rather on empowering individuals to make informed decisions about trust in AI systems. A challenge lies in delineating the scope of the project—deciding what aspects of AI trustworthiness and risk should be included.
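The idea that different trustors prioritise different attributes can be illustrated with a small sketch. This is purely hypothetical: THEMIS does not prescribe this scoring scheme, and the attribute names, scores, and weights below are invented for illustration.

```python
# Hypothetical sketch: combining per-attribute trustworthiness assessments
# into one score, weighted by what matters most to a given trustor.
# Attribute names and numbers are illustrative only.

def trust_score(assessments: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-attribute scores (0 = untrustworthy, 1 = fully
    trustworthy), reflecting the trustor's own priorities."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        raise ValueError("at least one attribute must carry weight")
    return sum(assessments[attr] * w for attr, w in weights.items()) / total_weight

# A privacy-conscious user weights privacy three times as heavily as the rest,
# so the same AI system scores lower for them than for an accuracy-focused user.
assessments = {"privacy": 0.4, "accuracy": 0.9, "reliability": 0.8}
weights = {"privacy": 3.0, "accuracy": 1.0, "reliability": 1.0}
print(round(trust_score(assessments, weights), 2))  # 0.58
```

The same assessments with equal weights would yield 0.7, showing how the trustor's context, not just the AI system's properties, shapes the outcome.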
An insightful presentation introduced the Trustworthiness Optimisation Process (TOP), a structured approach to identifying and mitigating risks that impact AI trustworthiness. The process involves:
Risk Quantification: Utilising a super decision engine and quantitative assessor to analyse risks.
Mitigation Strategies: Searching for solutions to reduce trustworthiness risks while evaluating how different methods affect system attributes and requirements.
Iterative Assessment: Continuously adjusting parameters and approaches to achieve optimal trustworthiness levels.
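The iterative shape of a process like TOP can be sketched in a few lines: quantify the remaining risk, apply the most effective available mitigation, reassess, and repeat until the risk falls to an acceptable level. The greedy selection rule, the threat names, and all numbers below are assumptions for illustration, not the actual TOP decision engine.

```python
# Hypothetical sketch of an iterative risk-mitigation loop in the spirit of
# TOP: quantify, mitigate, reassess, repeat. Names and values are invented.

def optimise_trustworthiness(risks: dict[str, float],
                             mitigations: dict[str, tuple[str, float]],
                             target: float = 0.5):
    """Greedily apply mitigations until total risk drops below `target`.

    `mitigations` maps a mitigation name to (threat it addresses,
    fraction of that threat's risk it removes)."""
    risks, mitigations = dict(risks), dict(mitigations)
    applied = []
    while sum(risks.values()) > target and mitigations:
        # Pick the unapplied mitigation with the largest expected risk reduction.
        name = max(mitigations,
                   key=lambda m: risks.get(mitigations[m][0], 0.0) * mitigations[m][1])
        threat, factor = mitigations.pop(name)
        risks[threat] = risks.get(threat, 0.0) * (1 - factor)
        applied.append(name)
    return applied, sum(risks.values())

risks = {"data_poisoning": 0.6, "model_drift": 0.4, "privacy_leak": 0.5}
mitigations = {
    "input_validation": ("data_poisoning", 0.8),    # removes 80% of that risk
    "retraining_schedule": ("model_drift", 0.5),
    "differential_privacy": ("privacy_leak", 0.9),
}
applied, residual = optimise_trustworthiness(risks, mitigations)
```

Each pass through the loop corresponds to one round of quantification, mitigation selection, and reassessment; a real implementation would also evaluate how each mitigation affects other system attributes and requirements, as the TOP description notes.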
Day 2: Further Exploring Trust in AI and Risk
The second day shifted focus to developing a roadmap for THEMIS services. These services will integrate technical, legal, and ethical requirements into specific decision-making processes. A key insight from the discussions was that trust is not context-independent; it is an attitude shaped by subjective factors, including the severity of perceived impacts.
The University of Southampton shared progress on integrating AI-specific threats into its established Spyderisk model. This model, developed to manage risks over more than 15 years, will be adapted to incorporate AI-related risks. It guides users through risk identification, analysis, evaluation, and mitigation. The model's ability to map threat propagation and attack paths, illustrating how vulnerabilities in one area can cascade into broader consequences, is especially relevant to THEMIS' goals.
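The core intuition behind threat propagation can be shown with a minimal sketch: a compromise spreads along dependency edges, so a breadth-first walk over a directed graph yields every asset the initial vulnerability can reach. This is not Spyderisk's actual model or API; the graph and asset names are invented for illustration.

```python
# Illustrative sketch (not Spyderisk itself) of threat propagation: a
# breadth-first search over "compromised asset -> affected asset" edges
# finds everything a single vulnerability can cascade into.
from collections import deque

def propagate(graph: dict[str, list[str]], start: str) -> list[str]:
    """Return affected assets in the order the compromise reaches them."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

# Edges point from a compromised asset to the assets it affects.
graph = {
    "training_data": ["model"],
    "model": ["api", "monitoring"],
    "api": ["downstream_app"],
}
print(propagate(graph, "training_data"))
# ['training_data', 'model', 'api', 'monitoring', 'downstream_app']
```

Here a poisoned training set ultimately reaches a downstream application two hops away, which is exactly the kind of cascading consequence a propagation model makes visible.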
The meeting also explored the development of an overarching evaluation framework to assess both the THEMIS system as a whole and its individual components. This methodology will include a study of legal and ethical requirements and an impact assessment. Contributions from different project deliverables and input from the External Advisory Board (EAB) are central to this effort. The EAB, which held its first meeting in July, has already been instrumental in reviewing impact assessment methodologies and improving the THEMIS approach.
An interactive workshop on key exploitable results (KERs) enabled partners to collaborate on identifying how individual contributions align with the project’s three joint KERs. This exercise fostered new collaborations and perspectives among partners, enriching the project’s overall impact.
The meeting concluded with a strategic plan for the coming months, including a carefully curated publication strategy to maximise the impact of THEMIS’ findings and deliverables. By targeting the right stakeholders, the consortium aims to ensure that its results resonate across industries and disciplines, ultimately fostering safer and more reliable AI adoption across diverse sectors.
Keep abreast of our work by subscribing for updates using the form at the bottom of the THEMIS homepage.