Bella Callaway

Which face is real? Study finds AI-generated faces are perceived as more real than human ones

© Andrii IURLOV/

As real as they may look, all the faces in the photo above were generated by AI. How does it feel to struggle to tell a face created by AI from a photo of a real human? As the hardware and software used to generate them continue to improve, AI will only get better at fooling us. According to research published in the journal ‘Psychological Science’, white faces generated by AI now look more real than human faces. The findings did not hold for AI images of people of colour, however, possibly because AI algorithms are trained on mostly white faces.

Face the Facts

A research team from universities in Australia, Canada and the United Kingdom performed two experiments to assess people’s ability to identify AI-created content. In one of the experiments, 124 volunteers were asked to judge whether a face was AI-generated or real. The results showed that 66 % of AI images were rated as human, compared to 51 % of real images. “If White AI faces are consistently perceived as more realistic, this technology could have serious implications for people of colour by ultimately reinforcing racial biases online,” senior author Dr Amy Dawel explained in an Australian National University (ANU) news release. “This problem is already apparent in current AI technologies that are being used to create professional-looking headshots. When used for people of colour, the AI is altering their skin and eye colour to those of White people.”

Time to Face Up

One major problem is that we usually don’t realise we are being deceived. “Concerningly, people who thought that the AI faces were real most often were paradoxically the most confident their judgements were correct,” added co-author Elizabeth Miller, PhD candidate at ANU. “This means people who are mistaking AI imposters for real people don’t know they are being tricked.”

This could lead to serious consequences if action isn’t taken. “AI technology can’t become sectioned off so only tech companies know what’s going on behind the scenes,” Dr Dawel continued. “There needs to be greater transparency around AI so researchers and civil society can identify issues before they become a major problem.” She emphasises the need to raise the public’s awareness to mitigate the risks. “Given that humans can no longer detect AI faces, society needs tools that can accurately identify AI imposters. Educating people about the perceived realism of AI faces could help make the public appropriately sceptical about the images they’re seeing online.”

“As the world changes extremely rapidly with the introduction of AI, it’s critical that we make sure that no one is left behind or disadvantaged in any situation – whether due to ethnicity, gender, age, or any other protected characteristic,” co-author Dr Clare Sutherland from the University of Aberdeen told ‘The Guardian’.

Facing the Future with THEMIS

The difficulty of distinguishing the real from the artificial can evoke fear among system users and the public, undermining trust in the outcomes of AI systems. This highlights the need for the THEMIS 5.0 project. By taking a human-centred approach, THEMIS aims to improve the trustworthiness of decisions made using AI systems. To mitigate faults in AI systems, such as bias, THEMIS will draw on user profiles that continuously reflect the legal, moral and ethical principles of the people who use the system at all levels. By fostering an open dialogue between system developers, users and promoters, the technology can be better understood by all, which will in turn improve trust. The narrative that AI is trying to ‘fool us’ could then be replaced by a belief that AI is an invaluable tool that can help us all make more informed decisions.

Note: Part of this article was originally published by Cordis Europe (7/12/23). To view the original article please visit the Cordis Newsfeed:
