THEMIS 5.0

Exploring the Legal and Ethical Impact of the 2024 EU AI Act: A THEMIS 5.0 Perspective

Updated: Nov 5

We are very happy to begin a new blog series discussing the impetus behind the THEMIS 5.0 project: the 2024 EU AI Act. Over the following weeks, we will offer a comprehensive yet engaging overview of the Act and how its provisions relate to THEMIS 5.0.

 

The opening of the EU AI Act states that its purpose is to create "a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of AI systems in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy AI." Accordingly, our mission at THEMIS 5.0 is to "co-create an innovative AI-driven and human-centric trustworthiness optimisation system."

In March 2024, the European Parliament passed the world's first AI Act (image: European Parliament)

While we will explore the EU AI Act in more detail in later weeks, today we look at how it impacts our work, drawing on our deliverable D1.3, which provides a ‘Template and Guidance for Legal and Ethical Impact Assessment’. The deliverable explores how THEMIS 5.0 intends to develop an AI-based trustworthiness optimisation ecosystem that optimises fairness, accuracy and robustness in other AI systems, and it evaluates this goal against ethical and legal considerations, taking into account not just the new EU AI Act but also the General Data Protection Regulation (GDPR) and other Europe-wide cybersecurity laws.


The recently approved AI Act is the world’s first law designed specifically to regulate, and place legal obligations upon, the development and use of AI technology. The Act takes a risk-based approach, scaling the obligations on a system up or down according to the level of risk it poses. Any system deemed to pose no risk faces no restrictions or obligations, so as not to stifle innovation and productive use of this exciting technology. AI systems deemed low risk are subject merely to transparency requirements, so that they can be monitored while retaining broad freedom to operate without interference. The Act’s obligations fall mainly on high-risk AI systems: these may be used provided that relevant safeguards are in place, which entails undergoing risk management and conformity assessment procedures (set out in Articles 9 and 43 of the AI Act respectively). If, however, a system is deemed to pose an unacceptable level of risk, its use is expressly prohibited.
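To make the tiered logic concrete, here is a minimal illustrative sketch in Python. The tier names and obligation lists are our own simplification for this blog, not the Act’s legal text; in reality, classification depends on a system’s intended purpose under the Act and its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # no obligations
    LIMITED = "limited"            # transparency requirements only
    HIGH = "high"                  # full risk management and conformity regime
    UNACCEPTABLE = "unacceptable"  # use is prohibited outright

# Simplified mapping of tiers to the obligations discussed above.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["transparency requirements"],
    RiskTier.HIGH: [
        "risk management system (Article 9)",
        "conformity assessment (Article 43)",
    ],
    RiskTier.UNACCEPTABLE: None,   # no set of safeguards makes use lawful
}

def obligations_for(tier: RiskTier) -> list[str]:
    duties = OBLIGATIONS[tier]
    if duties is None:
        raise ValueError(f"{tier.value}-risk systems are prohibited under the Act")
    return duties

print(obligations_for(RiskTier.HIGH))
```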


The risk management system is designed specifically to target foreseeable risks to health, safety or fundamental rights. Other types of risk, such as misuse or risks that emerge during use of the AI, should still be evaluated, but providers are not under the same legal obligation to tackle them. Furthermore, the AI Act stipulates that the risk management system applies only to risks that may be reasonably mitigated through the development of the AI, so that it does not suffocate AI development by placing unreasonable burdens on providers.


We saw a plethora of examples of potential risk in AI systems in our Co-Creation series on D3.1, which looked at insights from co-creation workshops and the concerns surrounding the uptake of AI, especially with regard to accuracy, fairness and robustness. We can explore a couple of the risks raised in those workshops here and briefly see how they relate to the AI Act's risk management system.


An AI system in the port management sector could be used in such a way that it favours ships from one particular company over another. This problem would be an example of misuse: while it may violate regulations on fair practice, it would not in itself make the system high risk, because it does not stem from a fault in the AI itself. While the system must still be evaluated, the developer is not obliged to take steps to mitigate this particular risk, as demanding so would be unreasonable and would stifle AI development.


Conversely, consider an AI system integrated into a medical device that proves unreliable when used on a patient outside the target patient population, resulting in an incorrect diagnosis. This risk may be mitigated if the AI system is designed to adapt and personalise its results to the individual patient, and if the healthcare professional is properly trained to use the device. Here the risk relates directly to health, safety and fundamental rights, and it can be reasonably mitigated, or even eliminated, through deliberate changes to the system. Consequently, the AI developer is obliged to tailor their system in accordance with the conformity assessment procedures so that the risk is reduced to an acceptable level.
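The reasoning that separates these two cases can be summarised in a short sketch. This is purely illustrative: the field names and the three-part test are our own compression of the criteria described above, not an assessment tool.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    affects_health_safety_or_rights: bool  # within the scope of Article 9?
    stems_from_system_design: bool         # a fault of the AI, not mere misuse?
    reasonably_mitigable: bool             # can development changes reduce it?

def provider_must_mitigate(risk: Risk) -> bool:
    """Simplified reading of the Act: providers must act only on
    design-rooted, reasonably mitigable risks to health, safety or
    fundamental rights."""
    return (risk.affects_health_safety_or_rights
            and risk.stems_from_system_design
            and risk.reasonably_mitigable)

port_favouritism = Risk(
    "operator favours one shipping company's vessels",
    affects_health_safety_or_rights=False,  # a fair-practice issue
    stems_from_system_design=False,         # misuse by the operator
    reasonably_mitigable=False,
)
misdiagnosis = Risk(
    "unreliable diagnosis outside the target patient population",
    affects_health_safety_or_rights=True,
    stems_from_system_design=True,
    reasonably_mitigable=True,              # e.g. personalised results
)

print(provider_must_mitigate(port_favouritism))  # False: evaluated, not mandated
print(provider_must_mitigate(misdiagnosis))      # True: must be addressed in design
```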


This has been a brief preview of the European Union Artificial Intelligence Act and its provisions to minimise risk. We look forward to exploring the Act and its relation to THEMIS 5.0 further in the coming weeks.


