The THEMIS 5.0 Mission
THEMIS draws researchers and practitioners from diverse disciplines to ensure that AI-driven hybrid decision support is trustworthy. To achieve this, THEMIS aims to create an ecosystem through which AI-driven hybrid decision making aligns with the particular decision support needs and moral values of human users, while also adhering to the key success indicators of the embedding socio-technical environment.
Co-create an innovative AI-driven and human-centric trustworthiness optimisation ecosystem
The THEMIS Consortium
BUILT UPON THE PRINCIPLES OF:
THEMIS brings together 16 partners from 9 European countries and 1 Associated country, giving the project reach across the whole of Europe and beyond.
The THEMIS consortium has a near-equal gender balance, with female department heads, experts on the gender dimensions of AI and trustworthiness, experience from gender-equality projects such as FENCE, and a female-owned and -operated SME.
From trustworthy AI and ethical model development, risk management and anomaly detection, human behavioural and psycho-cognitive expertise, and socio-technical decision intelligence to legal expertise, co-creation, human-AI conversational technology, ecosystem building, collaboration tools, and system integration, the 16 partners collectively cover all the fields necessary to make THEMIS a success.
Research institutes, large industry, SMEs, a non-profit organisation, AI users from 3 critical application and industry sectors, and universities with open-science experience are all represented, adding to THEMIS’s richness and diversity.
Determined by project needs, the final number of partners allows for efficient coordination and offers good value for money given the scope of the envisaged action.
Our Ecosystem Empowers All in the AI Decisioning Value Chain
AI System Developers
Providing AI Services
INCREASING TRUSTWORTHINESS IN AI FOR:
The THEMIS ecosystem is composed of cloud-based AI-services that seamlessly engage with humans by means of AI-driven interactive dialogues.
Specifically, an AI-driven conversational agent will deliver sufficient, but not excessive, human-interpretable explanations of how the AI system takes a particular set of inputs and reaches a conclusion. At the same time, it will intelligently elicit knowledge about each human user's particular decision support needs and moral values, as well as the key business goals of the embedding socio-technical system.
This interaction will enable continuous trustworthiness improvement: in each improvement cycle, a human-centred assessment of the AI system's trustworthiness takes place, and corrective actions are determined ahead of the next cycle.
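The improvement cycle described above can be pictured as a simple feedback loop: assess trustworthiness, derive corrective actions if the score falls short of a target, apply them, and repeat. The following is a minimal illustrative sketch of that loop; all names, scores, and dynamics here (`AISystemState`, `improvement_loop`, the 0.9 target, etc.) are hypothetical placeholders, not part of the THEMIS design.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemState:
    trust_score: float                 # human-centred trustworthiness score in [0, 1]
    corrective_actions: list = field(default_factory=list)

def assess(state: AISystemState) -> float:
    """Placeholder for a human-centred trustworthiness assessment."""
    return state.trust_score

def determine_actions(score: float, target: float) -> list:
    """Propose corrective actions whenever the score falls short of the target."""
    return ["improve explanations"] if score < target else []

def apply_actions(state: AISystemState, actions: list) -> AISystemState:
    """Toy dynamics: each applied action nudges the score upward."""
    new_score = round(min(1.0, state.trust_score + 0.1 * len(actions)), 2)
    return AISystemState(trust_score=new_score, corrective_actions=actions)

def improvement_loop(state: AISystemState,
                     target: float = 0.9,
                     max_cycles: int = 10) -> AISystemState:
    """Run assess -> determine actions -> apply, cycle by cycle, until the
    target trustworthiness is reached or the cycle budget is spent."""
    for _ in range(max_cycles):
        score = assess(state)
        actions = determine_actions(score, target)
        if not actions:            # target reached: no further correction needed
            break
        state = apply_actions(state, actions)
    return state

final = improvement_loop(AISystemState(trust_score=0.5))
```

The loop terminates either when the assessment meets the target (no corrective actions are proposed) or when the cycle budget runs out, mirroring the idea that each cycle's assessment feeds the corrective actions for the next.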