THEMIS 5.0

Exploring the EU AI Act: What You Need to Know

EU AI Act: Chapter 1 - General Provisions


The EU AI Act has generated a lot of discussion and motivated the research of the THEMIS 5.0 project, but what does it actually say? This is what we explore in today’s blog, looking in particular at the general provisions of the law. We hope that this will better inform people of the intentions and provisions that the EU has set out, and provide greater context to the framework of THEMIS 5.0. The AI Act has 13 chapters, the first of which lays out the General Provisions of the Act. These General Provisions are divided into four articles: the subject matter of the Act, the scope of the Act, relevant definitions and securing AI literacy. It is these articles that we will examine in more detail today.


European Parliament

Article 1 - Subject Matter

As has been discussed multiple times on this blog, the AI Act states that its purpose is to “promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.” This of course aligns with the mission of THEMIS 5.0 to create an “AI driven Human-centred Trustworthiness AI Optimisation System.” The AI Act does not exist to stifle the development of AI, but to guide it in such a manner that it is beneficial to wider society, and not just the interests of its developers. The law seeks to accomplish this goal through several means:


  1. Harmonising EU law for the development, use and transparency of AI systems

  2. Prohibiting certain AI practices

  3. Implementing specific requirements and obligations for AI systems deemed high risk

  4. Creating rules for market monitoring, governance and enforcement

  5. Introducing measures to support AI innovation, with a focus on SMEs


Article 2 - Scope


Article 2 lays out the scope of the new law, not just geographically but also with regard to the type and use of AI systems. First, quite evidently, the regulations apply within the European Union, but they can also have a significant impact on non-EU countries that trade and interact extensively with the EU, Britain being a prime example: companies there will have to abide by EU law for market access, and national governments may be inclined to follow EU law to maintain harmony.


The regulations of the Act do not apply at all to any AI used solely for the purposes of defence or national security, nor to AI developed and used solely for scientific research and development. Nor do they prevent EU nations cooperating with third countries or international organisations with regard to law enforcement, provided that fundamental rights are protected. While the regulations do apply to AI products that are placed on the market for public use, research, testing and development prior to this is not encumbered by the Act (unless the system is tested in ‘real-world conditions’, a term further explained in Article 3), and nor is the use of an AI system for purely personal and non-professional activities. Additionally, unless they are placed on the market or deemed ‘high risk’, the law does not regulate AI systems released under free and open-source licences. By the way, you can find out more about the law’s risk categories in our blog on this element from last week.


The purpose of the AI Act is to create a baseline for EU law; it should not be considered an inhibitor to member states creating new laws to enhance the rights of workers regarding the use of AI by employers, nor does it prevent collective agreements on the use of AI which are more favourable to workers.


Article 3 - Definitions


The third article defines the important concepts needed to understand AI and its regulation. Of course, we have no intention of boring you by going through them all, but here are a few that should be relevant and of some interest.


  1. First and most importantly, an ‘AI system’ means a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions.’

  2. ‘Risk’ is understood to mean ‘the combination of the probability of an occurrence of harm and the severity of that harm.’

  3. ‘Reasonably foreseeable misuse’ means using an AI system in a way that does not accord with its intended purpose, but that can arise from reasonably foreseeable human behaviour or interaction with other systems.

  4. A ‘Conformity Assessment’ is the process, required by the EU for high-risk AI systems, of demonstrating that they comply with the requirements set out in Chapter III, Section 2 of the AI Act.

  5. A ‘Serious Incident’ means any event related to an AI system that directly or indirectly leads to:

    1. the death of a person, or serious harm to a person’s health;

    2. a serious and irreversible disruption of the management or operation of critical infrastructure;

    3. the infringement of obligations under Union law intended to protect fundamental rights;

    4. serious harm to property or the environment.

  6. ‘Testing in Real-World Conditions’ (as alluded to in Article 2) ‘means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment.’ In essence, real-world conditions mean that the testing may have a real impact on individuals in their day-to-day lives.

  7. Finally, a ‘Deep Fake’ is a concept that has been commonly discussed in recent times, and is recognised by the EU as ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.’


Article 4 - AI Literacy


Finally, Article 4 of the EU AI Act makes provisions to increase the awareness and appreciation of AI within Europe. The Act understands AI literacy as the skills, knowledge and understanding that allow individuals ‘to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.’ The article stipulates that ‘Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.’ It is hoped that a greater understanding of AI will foster greater trustworthiness and empower individuals when interacting with the technology.





