Hyperrealism of AI-Generated Faces

Artificial intelligence (AI) has made remarkable strides in recent years, and AI-generated images have reached a level of realism that can fool people into believing they are looking at another human being. This hyperrealism is particularly evident in images of white faces produced by the widely used StyleGAN2 algorithm, which have been found to look more “human” than photographs of actual people’s faces.
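
For readers curious how such faces are produced, the snippet below is a minimal sketch of sampling a single image from a pretrained StyleGAN2 generator. It assumes NVIDIA’s stylegan2-ada-pytorch code is importable and that a pretrained FFHQ checkpoint has been downloaded (the file name “ffhq.pkl” is a placeholder); it illustrates the general workflow rather than the exact setup used in the study.

```python
# Minimal sketch: sample one face from a pretrained StyleGAN2 generator.
# Assumes NVIDIA's stylegan2-ada-pytorch repository is on the Python path
# (unpickling the checkpoint needs its dnnlib/torch_utils modules) and that
# a pretrained FFHQ checkpoint has been saved as "ffhq.pkl" (placeholder name).
import pickle

import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

with open("ffhq.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].eval().to(device)  # moving-average generator

z = torch.randn([1, G.z_dim], device=device)  # random latent code
img = G(z, None, truncation_psi=0.7)          # NCHW float tensor, roughly [-1, 1]

# Convert to an 8-bit RGB image and save it.
img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
Image.fromarray(img[0].permute(1, 2, 0).cpu().numpy(), "RGB").save("face.png")
```

The truncation parameter trades diversity for fidelity: values below 1 pull samples toward the average face, which tends to yield the proportionate, familiar-looking features discussed later in this piece.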

In a study published in Psychological Science, researchers asked 124 participants to judge whether a series of faces were real or generated by AI. Half of the pictures were real faces, while the other half were AI-generated. Interestingly, participants were systematically wrong in their judgments, more often labeling AI-generated faces as real. On average, about two out of three AI-generated faces were mistaken for human. This phenomenon, known as “hyperrealism,” highlights the ability of AI to create incredibly realistic faces that are difficult to distinguish from real ones.

Even more intriguing is the fact that the participants who were the worst at identifying AI impostors were actually the most confident in their guesses. This paradoxical finding suggests that those who are most susceptible to being tricked by AI are often unaware that they are being deceived. This misplaced confidence can have serious consequences, particularly in the realm of cybersecurity, where people may unknowingly hand over sensitive information to cybercriminals hiding behind hyperrealistic AI identities.
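
To make this confidence-accuracy paradox concrete, the sketch below shows one way such trial-level data could be analyzed: per-participant accuracy and average confidence are computed and correlated, where a negative correlation would indicate that the least accurate judges were the most confident. The column names, file name, and analysis choices are illustrative assumptions, not the study’s actual data or code.

```python
# Sketch: quantify the confidence-accuracy gap from trial-level judgment data.
# The CSV layout (one row per judgment, with hypothetical columns "participant",
# "is_ai", "judged_ai", "confidence") is an assumption for illustration only;
# it is not the study's dataset or its published analysis.
import pandas as pd
from scipy.stats import pearsonr

responses = pd.read_csv("face_judgments.csv")  # placeholder file name
responses["correct"] = responses["is_ai"] == responses["judged_ai"]

# How often were AI-generated faces judged to be real?
ai_trials = responses[responses["is_ai"].astype(bool)]
print("AI faces judged real:", 1 - ai_trials["judged_ai"].mean())

# Per-participant accuracy and average confidence.
per_participant = responses.groupby("participant").agg(
    accuracy=("correct", "mean"),
    mean_confidence=("confidence", "mean"),
)

# A negative correlation would mean the least accurate participants
# tended to be the most confident in their judgments.
r, p = pearsonr(per_participant["accuracy"], per_participant["mean_confidence"])
print(f"accuracy-confidence correlation: r = {r:.2f}, p = {p:.3f}")
```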

The issue of hyperrealism in AI-generated faces also highlights a racial bias. In the study, the researchers found that only white AI-generated faces appeared hyperreal; AI-generated faces of color did not. This bias stems from the fact that AI algorithms are often trained on datasets consisting mostly of white faces, leading to the underrepresentation and misrepresentation of racial diversity. The implications are significant: such bias can perpetuate discrimination and have real-world consequences, such as self-driving cars detecting Black pedestrians less accurately.

The prominence of hyperrealistic AI faces raises questions about our ability to accurately detect and protect ourselves from AI-generated content. The study identified several features that make white AI faces look hyperreal, including proportionate and familiar features, as well as a lack of distinctive characteristics that would make them appear “odd” compared to other faces. These features are often misinterpreted as signs of “humanness,” contributing to the hyperrealism effect.

As AI technology continues to advance rapidly, it remains to be seen how long these findings will hold true. It is also important to note that different algorithms may yield different results, since they differ in how they generate faces and may not all produce the same hyperrealism effect. Detection technology designed to identify AI faces has shown mixed results: some tools claim high accuracy, while others perform no better than the human participants did. This further emphasizes the need for ongoing research and development in AI detection technologies.
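
As a rough illustration of how such detectors are benchmarked, the sketch below compares a generic detector’s accuracy against a human baseline on the same labeled images. The detector interface, file list, and baseline value are assumptions made for illustration, not results from the study or any particular tool.

```python
# Sketch: benchmark an AI-face detector against a human baseline on the same
# labeled images. `detect` is a stand-in for any classifier returning 1 for
# "AI-generated" and 0 for "real"; the file list, labels, and baseline value
# are illustrative assumptions, not figures from the literature.
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate_detector(detect, image_paths, labels, human_baseline=0.5):
    """Run the detector on every image and compare its accuracy to the baseline."""
    predictions = [detect(path) for path in image_paths]
    acc = accuracy_score(labels, predictions)
    print(f"detector accuracy: {acc:.2%} (human baseline ~{human_baseline:.0%})")
    print(confusion_matrix(labels, predictions))  # rows: true class, cols: predicted
    return acc
```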

To protect ourselves from misidentifying AI-generated content as real, it is crucial to be aware of the limitations of human perception in distinguishing AI faces from real ones. By recognizing our own biases and limitations, we can be more cautious and skeptical of what we see online. Additionally, public policy measures can play a role in managing the risks associated with AI. One approach is to require the declaration of AI use, although this may not always be effective in combating deception. Another option is to focus on authenticating trusted sources, similar to “Made in” labels, to help users identify reliable media.

The rapid advancement of AI image generation poses new challenges in our increasingly digital world. While AI has the potential to revolutionize many aspects of our lives, it is crucial that we address the biases and risks associated with hyperrealistic AI faces. By promoting diversity and mitigating bias in AI algorithms, and by fostering public awareness and policy measures, we can harness the power of AI while minimizing its potential harm.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.