Bella Callaway

Unlocking Trust in AI: The Relationship Between Transparency and Regulation


Why Transparency Is Important

We are living in a time of widespread distrust. The influx of easily accessible information on the internet has played a role in spreading this scepticism. The sheer abundance of data means that citizens rely on aggregators to distribute information without bias, yet commercial, social and political pressures inevitably lead to data being published with intent, especially on social media. Many internet users see AI as the cause of disinformation; however, AI systems can play a powerful role in helping to detect and verify potential information disorders. For a shift in the public psyche towards trusting AI, there must be both transparency and regulation.


For a strong foundation of trust to exist in any socio-economic relationship, transparency is key. When it comes to AI, transparency is vital for engendering trust amongst users, stakeholders and society. System developers must create a narrative that can be translated for non-experts, explaining the most basic functions of the AI system. This narrative will pass through all stakeholders, ensuring a unified understanding of how the AI is monitored and developed at every level, which in turn allows for sustainable development of the system. Communication between all actors is vital if citizens are to grow their understanding and embrace new technologies and ways of life.

Transparency is not just about seeing; it is about understanding. Information must be presented in a digestible way, targeted at its audience. Social networks have become one of the key media arenas, yet there is no label of honesty on social media. With such an abundance of information, it is often difficult for citizens to make informed decisions about what to consume and how to analyse it. By harnessing the power of AI to analyse sets of data that are too large for human capacity, citizens can make more informed choices about their media consumption.


The EU is leading the way in creating the first widely enforceable regulations that specifically focus on AI. In June 2023 the main legislative body of the European Union, the European Parliament, took a major step towards regulating the potentially harmful effects of Artificial Intelligence by adopting its negotiating position on the draft AI Act.

The Act takes a ‘risk-based’ regulatory approach, classifying AI systems by the level of risk they pose to users; for instance, AI systems at the ‘unacceptable risk’ level are considered a threat to citizens and will therefore be banned. Further, it would impose a duty on generative AI systems, such as ChatGPT, which has gained notoriety this year, to be more transparent about where the data powering the system is sourced. By taking this approach to AI regulation, the EU is distributing responsibility and, with it, accountability.
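To illustrate the tiered idea, the draft Act’s pyramid can be sketched as a simple lookup. The tier names follow the draft Act’s risk pyramid; the one-line obligations shown are simplified assumptions for illustration, not legal text:

```python
# Illustrative sketch of the draft AI Act's risk-based tiers.
# Tier names follow the draft Act's pyramid; the obligations are
# simplified paraphrases for illustration, not legal wording.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "strict requirements before and after market entry",
    "limited": "transparency obligations (e.g. disclose AI-generated content)",
    "minimal": "largely unregulated",
}

def obligation_for(tier: str) -> str:
    """Return the simplified obligation attached to a risk tier."""
    return RISK_TIERS[tier]

print(obligation_for("unacceptable"))  # banned outright
```

The point of the structure is proportionality: the heavier the potential harm, the heavier the duty placed on the provider.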

Fig: AP pyramid of risk based on the draft AI Act. Source: AI Policy Consulting

The AI Act builds upon legislation that has already been implemented. The introduction of the General Data Protection Regulation (GDPR) into EU law in 2018 had a lasting international impact on the way data is stored and handled. Further, the Digital Markets Act and Digital Services Act (DSA) package came into effect in August 2023, and it was the latter that made headlines last week. Thierry Breton, Commissioner for the Internal Market at the European Commission, warned Elon Musk that his social media platform ‘X’ was being used as a conduit for disinformation, breaching the ‘very precise obligations regarding content moderation’ set out by the Act. This level of public scrutiny provides an insight into how regulation can create transparency, especially when it comes to social media. It is evident that as AI becomes ever more present in modern life, a multifaceted approach to regulation is vital to ensure trust in these systems.

The Role of THEMIS

The capabilities of AI systems have developed rapidly, and many systems can now successfully contribute to complex decision-making. However, the more sophisticated AI systems become, the more emphasis is placed on the machines’ autonomy. This creates a tension between the utility of these systems and the trust that organisations and citizens require in order to use them. THEMIS sets out to empower citizens to take a more active role in ensuring AI systems meet regulations and standards by enabling them to assess the trustworthiness of AI systems for themselves.

By developing a socio-technical ecosystem which brings together researchers and practitioners from a range of disciplines, THEMIS aims to ensure that AI-driven hybrid decision support is ‘trustworthy-by-design’. This will be achieved through a combination of education and regulation.

The draft AI Act will effectively be translated into technical specifications in order to embed legal standards into trustworthy AI systems. Further, a synergy between Reinforcement Learning, Decision Intelligence and human-centred Risk Management aims to innovate the decision-making process in hybrid-AI systems. This ensures that not only legal and economic threats to such systems can be mitigated, but also social threats to the integrity of the system, such as bias. Combating social threats is vital, as these are directly connected to human behaviour and societal dynamics, factors which translate into the overall trust that society places in a system.

Join us on our innovation journey by subscribing to our website for updates and opportunities.


