
Navigating the Complexities of Trust in AI: Introducing THEMIS 5.0

Updated: Apr 18

In an era where AI is becoming increasingly integrated into our daily lives, questions about its trustworthiness loom large. How can we exhaustively and accurately define trust in the context of AI? And even if trust could be defined, how can we ensure that AI systems adhere to its criteria? There is no clear solution.

 

Defining trust requires an understanding of the diverse perspectives and capabilities of individuals, as well as the intricate interplay of societal, policy, and technical factors. Trustworthy AI refers to artificial intelligence systems that are designed, developed, and deployed in a manner that earns the trust of users. For users to believe in the technology, it is vital that they understand how the system works, as only then can they assess whether it is safe, reliable, and therefore trustworthy.

 

Imagine trying to assess the trustworthiness of an AI system across a diverse population with varying levels of understanding, learning capabilities, and perceptions of concepts like bias, fairness, and ethics. Evidently, there is no one-size-fits-all solution. Because trustworthiness is such a subjective notion, what one person perceives as fair or ethical, another may view very differently.

 



 

The diagram above shows the NIST characteristics of trustworthy AI systems. "Valid and Reliable" is a necessary condition of trustworthiness and is shown as the base on which the other characteristics rest. "Accountable and Transparent" is shown as a vertical box because it relates to all the other characteristics.
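To make the layered relationship concrete, here is a minimal sketch (our own illustration, not an official NIST artifact) that models the diagram as data: "valid and reliable" gates everything else, and "accountable and transparent" cuts across the remaining characteristics.

```python
# Characteristics layered on top of the base condition, per the
# NIST AI Risk Management Framework diagram described above.
LAYERED = {
    "safe",
    "secure and resilient",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
}

BASE = "valid and reliable"                 # necessary condition
CROSS_CUTTING = "accountable and transparent"  # relates to all others

def assess(system: dict) -> list:
    """Return the unmet characteristics; the base condition gates the rest."""
    if not system.get(BASE, False):
        # Without validity and reliability, the other characteristics
        # cannot establish trustworthiness at all.
        return [BASE]
    unmet = [c for c in sorted(LAYERED) if not system.get(c, False)]
    if not system.get(CROSS_CUTTING, False):
        unmet.append(CROSS_CUTTING)
    return unmet

# Hypothetical example: a system meeting every characteristic except
# explainability.
example = {BASE: True, CROSS_CUTTING: True,
           "safe": True, "secure and resilient": True,
           "privacy-enhanced": True,
           "fair, with harmful bias managed": True,
           "explainable and interpretable": False}
print(assess(example))  # -> ['explainable and interpretable']
```

This is only a toy encoding of the diagram's structure, but it captures the key point: the characteristics are not a flat checklist, since one of them is foundational and another applies across the board.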

 

This is where THEMIS 5.0 comes in. A ground-breaking initiative aimed at tackling the complexities of trust in AI, THEMIS recognises that addressing these challenges requires a multidisciplinary approach that goes beyond technical expertise alone. It draws on insights from social science, decision theory, managerial science, and philosophy to develop AI systems that are not only technically proficient but also socially and ethically responsible.



THEMIS 5.0: System Trust Model

 

One of the key principles underpinning THEMIS 5.0 is the importance of understanding and modelling human behaviour, values, and ethical norms. By doing so in an anonymous manner that protects privacy, researchers can gain valuable insights into how different individuals perceive and interact with AI systems. This, in turn, informs the design and implementation of AI systems that align with diverse societal values and preferences. For AI systems to be truly fair, and thus trustworthy, they must be designed with these factors in mind, which demands not only technical expertise but also a deep understanding of human behaviour and societal dynamics.

 

Moreover, THEMIS recognises the importance of profiling adversaries in the AI operational environment. By understanding potential threats and vulnerabilities, researchers can better assess the accuracy and robustness of AI systems. This holistic approach to risk assessment considers not only technical vulnerabilities but also the societal and policy implications of AI deployment. Crucially, THEMIS is not just an academic exercise: it is a practical endeavour aimed at developing real-world solutions to complex challenges. By bringing together experts from diverse disciplines and focusing on tangible use cases, THEMIS aims to deliver AI systems that are not only technically advanced but also socially and ethically responsible.
