Why Human Oversight in AI Governance Matters More Than Ever
- THEMIS 5.0
- Jul 17
As Europe takes a historic step toward regulating artificial intelligence with the adoption of the EU AI Act, a critical question is gaining attention: can we ensure fairness in AI systems without meaningful human oversight?

A timely and thought-provoking paper published in Cambridge Forum on AI: Law and Governance, "Better together? Human oversight as means to achieve fairness in the European AI Act governance," dives deeply into this question. Authored by Ana Maria Corrêa, Sara Garsia, and Abdullah Elbi of the KU Leuven Centre for IT & IP Law (CiTiP), the paper examines how the EU AI Act frames the relationship between human oversight and fairness. The authors argue that while the Act gestures toward fairness, it only partially delivers on that promise, leaving important normative dimensions underdeveloped and, in some cases, operationally unsupported.
Human oversight is often touted as a safeguard against the most serious risks posed by high-risk AI systems, including bias, discrimination, and loss of accountability. But what does it really mean to “oversee” an AI system, and does doing so make the system fairer? Drawing from a rich body of interdisciplinary literature, this article outlines three powerful ways in which human oversight can contribute to fairness: by mitigating bias, upholding accountability, and introducing empathy into decision-making processes that would otherwise be rigid, opaque, or unresponsive to individual circumstances.
However, the analysis doesn’t stop at theory. The authors take a critical look at the operational mechanics of the AI Act and identify several structural weaknesses. Among them: the Act places significant responsibilities on AI providers and deployers but offers limited support for real-world implementation. The absence of meaningful organizational oversight, limited attention to the psychological realities of human-AI interaction, and a lack of clarity about when and how human intervention should take place all constrain the law’s potential to achieve its fairness goals.
What emerges from the paper is a nuanced, well-evidenced critique that remains constructive. The authors do not dismiss the AI Act; rather, they propose that fairness can become a more powerful guiding principle if human oversight is interpreted and implemented with greater care. They call for closer cooperation between providers and deployers, more thoughtful allocation of responsibilities, and greater awareness of how design choices can either empower or undermine human decision-makers.
In a regulatory landscape that often prioritizes technical fixes over structural change, this article reminds us that fairness in AI is not just a question of data or algorithms. It is a human issue, one that depends on our collective ability to build systems that reflect our values and respect the rights of all individuals.
This article is essential reading for anyone involved in AI policy, governance, or system design. It offers not only a rigorous legal analysis but also a vision for how to make fairness more than a slogan in the age of automation.
The full text is open access and available here...
Now is the time to engage with the deeper questions behind the rules, and this article is an excellent place to start.