The 21st century has witnessed the emergence of social media as a potent technological force. Media of all kinds have become widely accessible, with content created by anyone for everyone. Whilst in many ways this explosion of information has improved communication and awareness, the manipulation and fabrication of information have become commonplace, ultimately undermining societal trust and even democracy. In January 2024, the World Economic Forum (WEF) ranked the spread of misinformation among the greatest threats to humanity in its Global Risks Report 2024. Managing Director of the WEF, Saadia Zahidi, commented that:
“An unstable global order characterised by polarising narratives and insecurity, the worsening impacts of extreme weather and economic uncertainty are causing accelerating risks – including misinformation and disinformation – to propagate.”
With around four billion people voting in elections in 2024, making it the biggest year for elections in history, the physical consequences of digital fabrications will be felt across the world.
Disinformation refers to the deliberate spread of false or misleading content, published with the intent of manipulating the reader. Unlike misinformation, which may be shared unintentionally, disinformation is created with the purpose of causing harm and constructing false narratives to achieve specific goals. This disinformation/misinformation loop erodes trust in media outlets, as the onus falls upon readers to distinguish between fact and fiction for themselves. This can be difficult, as disinformation is increasingly invoked as a scapegoat for wrongdoing, further blurring the line between fact and fiction.
The rise of audio ‘deepfakes’ is a prime example of this loosening grip on reality. Last week, New Hampshire’s attorney-general said his office was investigating complaints that an “artificially generated” voice sounding like President Biden’s had been encouraging voters not to vote in the state’s presidential primary. In democratic societies, an informed and engaged population is crucial. When citizens are exposed to false narratives, they may make decisions based on inaccurate information, compromising the integrity of democratic institutions. Furthermore, disinformation often exploits existing social, political, or cultural divides, contributing to increased polarisation or even tension between societal factions, making it difficult to foster constructive dialogue and solutions to societal problems.
Despite AI often being a buzzword paired with disinformation, news organisations and journalists are increasingly harnessing artificial intelligence-based tools to prevent the publication of unchecked, inaccurate information. AI-powered tools can quickly analyse vast amounts of information to verify facts and detect inconsistencies, and automated fact-checking can assist journalists in verifying claims and statements. Whilst AI tools can be powerful allies in the fight against disinformation, it’s essential for journalists to use these technologies as a supplement to their existing skills and judgment. Human oversight remains crucial to ensure ethical reporting and to address nuances that AI systems may not fully grasp. Enter THEMIS 5.0.
At its core, THEMIS 5.0 aims to improve users’ trust in human decisions made with the aid of AI tools. The Human-centric AI Trustworthiness Evaluation and Optimisation Framework defines processes for fairness and technical accuracy, as well as assessing the impact of decisions and the human decision-support capabilities. It implements a dynamic ecosystem designed to continuously enhance the trustworthiness of AI systems through iterative evaluation and optimisation. Benchmark datasets are crafted to train the AI algorithms, ensuring highly trustworthy models. The THEMIS technology will comply fully with European legal and ethical standards, including the trailblazing European AI Act, which will come into force later this year. This technology will help pave the way for a more trustworthy future, as it places humans at the heart of decision-making, giving back control over what we understand to be fact, not fiction.