Bella Callaway

Historic Moment as Agreement Reached on the First Global AI Regulation

Updated: Dec 14, 2023



EU policymakers following the final negotiations on the AI Act [Source: EU Council via X]

 

After nearly three days of intensive talks, European Union negotiators have reached a political agreement on a risk-based framework for regulating artificial intelligence. The legislation was first proposed in April 2021, but it has taken years of difficult three-way 'trilogue' negotiations between the Commission, Parliament and Council to reach an agreement. This latest development indicates that a pan-EU AI law is unquestionably on the way.

 

What Was Discussed?

 

The length and intensity of the talks was due to disagreement amongst the governing bodies of the European Union about 'foundation models'. At its core, a foundation model is a general-purpose AI model that can be applied in various contexts, for applications of both wide and narrow scope. In the June version of the legislation, foundation models were to be tightly regulated regardless of their assigned risk category or how they are used, a decision which worried some stakeholders who feared AI would be too tightly regulated before its use had properly taken off.

The dispute was especially prominent in this round of talks following the rapid popularisation of generative AI products in recent months, such as ChatGPT and Microsoft Bing. Tech companies and select member states, namely France, Germany and Italy, advocated for limited regulation of these powerful AI models, preferring that they be regulated based on how they are used rather than on the technology itself, to leave more freedom for innovation. This is partly an effort to protect the European tech industry in the 'AI race', especially as regulation of AI competitors in China and America remains limited and piecemeal.

 

AI-driven surveillance was another major area of disagreement during the three days of talks. This technology could be used to watch members of the public in real time and to recognise individual behaviours and emotions, as well as patterns of activity, all of which raises ethical questions. The European Parliament is pressing for tougher biometric regulations, citing concerns that the technology could enable widespread surveillance and infringe on citizens' privacy and other rights. However, European countries such as France, which is hosting the Olympics next year, want to employ AI to combat crime and terrorism in an effort to keep people and infrastructure safe, and are pressing the Parliament to soften its planned restrictions. Despite these potential benefits, several applications have been banned under the agreement, as they are recognised as a potential threat to citizens' rights and democracy:


  • Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)

  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases

  • Emotion recognition in the workplace and educational institutions

  • Social scoring based on social behaviour or personal characteristics

  • AI systems that manipulate human behaviour to circumvent users' free will

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)


Companies not complying with the rules will be fined. Fines range from €35 million or 7% of global annual turnover (whichever is higher) for violations of the banned AI applications, to €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information. However, with many tech companies self-regulating, at least for the foreseeable future, the enforcement of these fines relies heavily on the discretion and compliance of the global tech industry.
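To make the 'whichever is higher' mechanics concrete, here is a minimal illustrative sketch in Python. The tier figures come from the agreement as reported above; the function name, structure and example turnover are hypothetical.

```python
# Illustrative sketch of the penalty tiers described above. The tier
# figures come from the reported agreement; the names and the example
# turnover are hypothetical.

PENALTY_TIERS = {
    "banned_application":    (35_000_000, 0.07),   # €35M or 7% of turnover
    "other_obligation":      (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),   # €7.5M or 1.5%
}

def maximum_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the cap for a violation: the fixed amount or the share of
    global annual turnover, whichever is higher."""
    fixed_amount, turnover_share = PENALTY_TIERS[violation]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# A company with €2bn global turnover violating a ban faces up to
# max(€35M, 7% of €2bn) = €140M.
print(f"€{maximum_fine('banned_application', 2_000_000_000):,.0f}")
```

For large companies the percentage-based cap dominates: at €2 billion of global turnover, a banned-application violation is capped at €140 million rather than €35 million.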


This legislation would propel the EU to become the global frontrunner in AI regulation. However, unlike its sweeping GDPR legislation, the AI Act appears to be far more industry-friendly with its more tailored, risk-based approach. It is a promising start for AI regulation, as reflected in the words of Ursula von der Leyen, President of the European Commission: “Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”

 

A Risk-Based Approach:


The new rules will be applied directly and in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach, as set out in the European Commission's press release of 9 December 2023:


Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal-risk applications, such as AI-enabled recommender systems or spam filters, will benefit from a free pass and absence of obligations, as these systems present only minimal or no risk to citizens' rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.


High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples of such high-risk AI systems include certain critical infrastructures, for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; and certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk.


Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance encouraging dangerous behaviour of minors or systems that allow ‘social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).


Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
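As an illustration of what machine-readable marking could look like in practice, here is a minimal sketch using the Pillow library to embed a provenance label in a PNG's metadata. The Act does not prescribe any specific mechanism, and the field names below are hypothetical.

```python
# Minimal sketch of machine-readable marking for a PNG image, using
# the Pillow library. The AI Act does not prescribe this mechanism and
# the field names are hypothetical; this only illustrates the idea of
# a label that software can detect.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Re-save a PNG with metadata declaring it artificially generated."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")  # hypothetical field name
    metadata.add_text("generator", generator)  # e.g. the model used
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Check whether a PNG carries the label written above."""
    return Image.open(path).info.get("ai-generated") == "true"
```

Plain metadata like this is trivially stripped, which is one reason industry efforts such as C2PA pursue cryptographically signed provenance instead; the Act leaves the technical mechanism open.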


What Happens Next?


The political agreement is now subject to formal approval by the European Parliament and the Council, and the Act will enter into force 20 days after publication in the Official Journal. The AI Act will then become applicable two years after its entry into force, with some exceptions: prohibitions will apply after six months, while the rules on general-purpose AI will apply after twelve months. In practice, the bulk of the legislation will not take effect until 2025 at the earliest.
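The staggered timeline is easy to work through with simple date arithmetic. The sketch below assumes a hypothetical publication date purely for illustration; the actual dates depend on formal approval and publication in the Official Journal.

```python
# Sketch of the staggered timeline described above. The publication
# date is hypothetical; the offsets come from the agreement as reported.
# Requires the python-dateutil package (pip install python-dateutil).
from datetime import date
from dateutil.relativedelta import relativedelta

publication = date(2024, 6, 12)  # hypothetical publication date
entry_into_force = publication + relativedelta(days=20)

milestones = {
    "Prohibitions apply":             entry_into_force + relativedelta(months=6),
    "General-purpose AI rules apply": entry_into_force + relativedelta(months=12),
    "Act fully applicable":           entry_into_force + relativedelta(years=2),
}

print(f"Entry into force: {entry_into_force}")
for label, when in milestones.items():
    print(f"{label}: {when}")
```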

Whilst the AI Act agreement is welcomed by many citizens across Europe, there are concerns over what regulation will mean for European tech companies, which are racing against other AI powerhouses such as America and China.


DigitalEurope's Director General Cecilia Bonefeld-Dahl said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head. The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers. We particularly worry about the many SME software companies not used to product legislation – this will be uncharted territory for them. The AI race is not one Europe can miss out on. We need to think long and hard about how we can compensate for this extra burden and give companies here a fighting chance.”


Despite these reservations, THEMIS believes that, if implemented correctly, the AI Act has the potential to strengthen our work and act as a catalyst for AI adoption and innovation in Europe.

