Artificial intelligence (AI) has revolutionised many aspects of our lives, raising concerns about data protection and ethical implications. In response, the EU has introduced the Artificial Intelligence Act (AI Act) and the Artificial Intelligence Liability Directive (AILD) to regulate AI development and protect individuals' rights. These legal frameworks aim to align with the General Data Protection Regulation (GDPR) to address data privacy concerns. This ground-breaking legal intervention is a positive first step towards better regulation of the AI industry; however, is it enough to prevent, deter, or even reverse malpractice?
Improving transparency and trust is the crux of the THEMIS project. THEMIS's innovative AI trustworthiness evaluation methodology will consider the human perspective, alongside the wider socio-technical system perspective, applying a risk-management-based approach to evaluate the trustworthiness of decisions made using AI. The proposed THEMIS trustworthiness ecosystem will be designed around the EU legal framework for trusted AI. This trailblazing project points towards a future in which AI systems are built upon human-centred, legally enforceable frameworks.
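To make the idea of a risk-management-based evaluation concrete, here is a minimal sketch in Python. The `TrustIndicator` class, the indicator names, and the weights and scores are all invented for illustration; this is not THEMIS's actual methodology, only a sketch of how human-perspective and socio-technical indicators might be aggregated into one trustworthiness score.

```python
from dataclasses import dataclass

@dataclass
class TrustIndicator:
    """A single trustworthiness indicator with a weight and a score in [0, 1]."""
    name: str
    weight: float
    score: float

def trustworthiness(indicators: list[TrustIndicator]) -> float:
    """Aggregate weighted indicators into an overall score in [0, 1]."""
    total_weight = sum(i.weight for i in indicators)
    return sum(i.weight * i.score for i in indicators) / total_weight

# Hypothetical indicators spanning the human and socio-technical perspectives.
decision_indicators = [
    TrustIndicator("explanation_quality", weight=0.4, score=0.7),  # human perspective
    TrustIndicator("data_governance",     weight=0.3, score=0.9),  # socio-technical
    TrustIndicator("oversight_in_place",  weight=0.3, score=0.5),  # socio-technical
]

print(f"Trustworthiness: {trustworthiness(decision_indicators):.2f}")  # prints 0.70
```

In a real methodology the indicators, weights, and any thresholds for acceptable trustworthiness would themselves be derived from the risk-management process, not fixed by hand as here.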
Arguably, the greatest achievement of the AI Act is the balance struck between protection and innovation. It has been widely acknowledged that essential safeguards must exist to protect citizens' rights, but without the overprotection that creates unnecessary burdens and restricts the development and growth of technology. To achieve this equilibrium, the AI Act takes a risk-based approach, classifying AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. This categorisation ensures that regulatory burdens apply to systems only when necessary, so that overprotection does not stifle innovation.
[Diagram demonstrating the risk levels set out in the EU AI Act, via Andreas Welsch]
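The four tiers and the well-known examples below come from the Act itself; the mapping code is an illustrative, non-exhaustive sketch of how the classification might be represented, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels in the EU AI Act's risk-based classification."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict obligations (e.g. conformity assessment)"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "permitted with no additional obligations"

# Illustrative (non-exhaustive) mapping of example use cases to tiers.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```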
Whilst the introduction of a legal framework promises to improve scrutiny of AI development and implementation, many issues of transparency persist. One of the main challenges in regulating AI is encouraging businesses to be open about how their algorithms operate: specifically, what data they use and how they arrive at their conclusions.
Big businesses have been reluctant to share this knowledge, since much of it is confidential, and many innovators are unwilling to give up their laboriously developed techniques. Although the UK has released guidance on algorithmic transparency, much fine-tuning remains to be done: the recommendations have been in effect for less than a year and are still revised frequently. The EU also provides readily available guidelines for the ethical use of algorithms, although these will surely need to evolve as companies explore new applications for AI.

The implementation of the AI Act may even promote competitiveness and innovation by establishing a harmonised regulatory framework across EU member states. By providing legal certainty and clarity, the Act encourages investment and growth in the AI sector, whilst also ensuring a more level playing field for businesses operating within the EU. This could ultimately propel the EU towards becoming a more prominent player in the global AI market.
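To illustrate what such openness could look like in practice, here is a minimal sketch of a machine-readable transparency record. The structure and field names are hypothetical, loosely echoing the kinds of disclosure the UK's algorithmic transparency guidance asks for, and the "LoanEligibilityModel-v2" system is invented for the example.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyRecord:
    """Illustrative record of facts a regulator might expect a business to disclose."""
    system_name: str
    purpose: str
    data_sources: list[str]  # what data the algorithm uses
    decision_logic: str      # how it arrives at its conclusions, in plain language
    human_oversight: str     # who can review or override the output

record = TransparencyRecord(
    system_name="LoanEligibilityModel-v2",  # hypothetical system
    purpose="Pre-screen consumer loan applications",
    data_sources=["credit history", "declared income"],
    decision_logic="Gradient-boosted classifier; top features reported per decision",
    human_oversight="Loan officers review all rejections before they are final",
)

print(json.dumps(asdict(record), indent=2))
```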
Furthermore, the AI Act prioritises fundamental rights and values such as privacy and non-discrimination. Through its prohibitions on certain AI applications deemed to pose unacceptable risk, such as social scoring and real-time remote biometric identification, the Act upholds individuals' rights to privacy and data protection. Additionally, its provisions against discriminatory AI algorithms safeguard against biases that may perpetuate inequality and injustice.
Can the law catch up with developments in AI? Considering that the AI Act's most significant impact lies in its emphasis on transparency and accountability, I would argue that policy stands a chance of influencing AI development, so long as it is properly implemented. By requiring developers to adhere to strict guidelines on the transparency of AI systems, users can gain better insight into how these technologies function, thereby fostering trust and confidence. Moreover, the Act's provisions for accountability ensure that developers are held responsible for the outcomes of their AI systems, mitigating potential risks and ensuring recourse in the event of harm.
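Accountability of this kind presupposes that decisions can be traced after the fact. Below is a minimal sketch, in Python, of the sort of append-only audit logging that could support recourse; the `log_ai_decision` function, its field names, the log file path, and the credit-scoring example are all hypothetical illustrations, not anything the Act itself prescribes.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system_id: str, model_version: str,
                    inputs: dict, output: str, explanation: str) -> str:
    """Append an audit entry so an AI decision can be traced and contested later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,  # which model produced the outcome
        "inputs": inputs,                # data the decision was based on
        "output": output,
        "explanation": explanation,      # human-readable reason for the outcome
    }
    line = json.dumps(entry)
    with open("ai_decision_audit.log", "a") as f:  # hypothetical log destination
        f.write(line + "\n")
    return line

log_ai_decision("credit-scoring", "2.3.1",
                {"income": 42000, "history_years": 7},
                "approved", "Income and history exceed approval thresholds")
```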