Two researchers have published a study on faces generated by artificial intelligence, in which many of the participants were unable to tell which faces were actually real.
Differentiating a human face from one generated by AI is increasingly difficult
The study involved 315 participants who had to determine whether the faces shown to them were real or synthetic. In the first experiment, 'fake' faces were correctly identified only 48.2% of the time. In a second test, the participants were first given guidance on the key cues for spotting synthetic faces; although they were then correct 59% of the time, the improvement was modest.
Images used in the study
Another test asked participants to rate, on a scale of 1 to 7, how trustworthy each face appeared. On average, participants rated the AI-generated faces as more 'trustworthy' than the real ones. Smiling faces did inspire more trust, but it should be noted that the two groups smiled at similar rates: 65.5% of the real faces and 58.8% of the synthetic ones.
A technique for producing increasingly convincing faces
The AI-generated faces were created with a generative adversarial network (GAN). In this technique, two neural networks are trained against each other: a generator starts from a random array of pixels and gradually learns to produce a face, while a second network, the discriminator, learns to detect which faces are generated and penalizes the generator every time it succeeds. Little by little, the generated faces end up being indistinguishable to the discriminator, and therefore to human beings.
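To make the adversarial setup concrete, here is a minimal toy training loop in PyTorch. It is only an illustrative sketch of the generator-versus-discriminator idea described above, not the large image model used to create the study's faces: the network sizes, the 2-D stand-in data, and all hyperparameters are arbitrary choices for demonstration.

```python
# Toy GAN: a generator learns to mimic a simple 2-D distribution while a
# discriminator learns to tell its output apart from the "real" samples.
import torch
import torch.nn as nn

latent_dim = 16   # size of the random noise vector the generator starts from
data_dim = 2      # toy "real data": 2-D points instead of face images

# Generator: maps random noise to a fake sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)

# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=128):
    # Stand-in for real training images: points from a fixed Gaussian.
    return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Train the discriminator: reward it for telling real from fake.
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    loss_D = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train the generator: reward it for fooling the discriminator,
    # i.e. for making D label its fakes as "real".
    fake = G(torch.randn(128, latent_dim))
    loss_G = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

As the two losses push against each other, the generator's samples drift toward the real distribution, which is exactly the dynamic that, at far larger scale, yields photorealistic faces.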
Participants had to classify a total of 400 real images and 400 AI-generated images of people of different races and genders. Interestingly, white male faces were the ones classified least accurately.
The authors note in the study that those who develop this kind of technology should consider whether its benefits outweigh its risks. There are currently major efforts to improve the detection of deepfakes and similar content, such as the C2PA (Coalition for Content Provenance and Authenticity), backed by technology companies such as Adobe, Arm, and Microsoft. Telling an artificial face from a real one is becoming genuinely difficult for many people, which poses a serious problem, especially for the proliferation of fake news. That is why detection systems will need to become ever more precise and sophisticated.
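The core idea behind provenance schemes like C2PA is cryptographic signing: the creator attests to the exact bytes of an image, and anyone can later check that attestation. The sketch below illustrates only that general principle with an Ed25519 signature over an image hash; it is a simplified assumption-laden example, not the actual C2PA manifest format, which embeds signed metadata directly in the media file.

```python
# Simplified content-provenance sketch (NOT the real C2PA format):
# sign a hash of the image at creation time, verify it later.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The creator (e.g., a camera or editing tool) holds a signing key.
signing_key = ed25519.Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

image_bytes = b"...raw image data..."          # placeholder content
digest = hashlib.sha256(image_bytes).digest()  # fingerprint of the image
signature = signing_key.sign(digest)           # the provenance claim

# A consumer recomputes the hash and checks the signature; any change
# to the image bytes makes verification fail.
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Provenance verified: image is unmodified")
except InvalidSignature:
    print("Verification failed: image altered or signature invalid")
```

Note that provenance of this kind proves where an image came from and that it was not altered; it does not by itself prove whether the depicted face is real or synthetic.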