We spot less than half of algorithmically generated fake images


No matter how closely you look, how good you are at remembering faces, or how hard you train, you probably can't beat the deepfake game. After running several experiments with fictional portraits generated by artificial intelligence (AI), a group of researchers has concluded that we spot the fake images less than half the time.

Their experiments show that we are right only 48.2% of the time, which means that, at least statistically, we would have a better chance of getting it right by flipping a coin.

For their study, the researchers ran several experiments with 800 images, all portraits of people against neutral backgrounds. Some were real; others were synthetic, made with StyleGAN2, an AI algorithm presented by Nvidia two years ago. In the first analysis, 315 people were asked to look at 128 photos and assess whether each was real or a deepfake. The result? On average, they were correct 48.2% of the time, very close, the researchers admit, to what sheer luck would achieve.

In a second test, the scientists slightly changed the rules of the game: participants were given some pointers on how to identify a fake face and were told, after each answer, whether they had got it right. With that extra help, their accuracy jumped to 59%. Notably, though, observers fared no better in the second half of the experiment than in the first.

More “trustworthy” than real faces

“Overall performance remained only slightly above chance. The lack of improvement over time suggests that the impact of feedback is limited, presumably because some synthetic faces simply do not contain perceptually detectable artifacts,” the researchers explain in their article, published in Proceedings of the National Academy of Sciences.

Beyond how often we detect a deepfake, the experiment yields some curious conclusions. The observations reveal, for example, that we find it harder to judge correctly when the face in front of us is white. That is no accident. As the researchers point out, the likely reason is that StyleGAN2's training database contains a greater number of photographs of Caucasian faces, which allows it to generate more realistic images of them.

Another interesting question is how trustworthy AI-generated faces seem to us. Do they inspire a sense of trust? And if so, is it greater than the trust that real, flesh-and-blood faces inspire? To find out, the scientists asked 223 people to rate the trustworthiness of 128 portraits, all drawn from the experiment's 800-image base, scoring each from one to seven.

The result does not reflect well on us. On average, real faces received a score of 4.48; deepfakes, 4.82. What's more, the four faces judged least trustworthy during the experiment were real, and the three judged most trustworthy were synthetic, products of StyleGAN2.


The reason? The researchers believe it has little to do with facial expression, and they detected no great variation related to race (unlike sex, where women's faces came out better rated). For them, the key lies in the nature of deepfakes and in psychology.

“Synthesized faces tend to look more like average faces, which, in turn, are considered more trustworthy,” they explain. GANs (generative adversarial networks), like the one used in the experiment, learn to generate faces that are as realistic as possible through their own internal design: a generator network first creates an image from random noise, and then, drawing on a base of real photos, refines the result until a second network, the “discriminator”, can no longer tell that it is fake.
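The adversarial loop described above can be sketched in a few lines. This is a minimal toy illustration in NumPy, on 1-D numbers instead of images, of the generator/discriminator tug-of-war; all names, hyperparameters, and the tiny linear models are illustrative assumptions, not anything from StyleGAN2:

```python
# Toy GAN sketch: a linear "generator" learns to imitate 1-D "real" data,
# while a logistic "discriminator" learns to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must imitate: Gaussian around 4.0.
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

def discriminate(x, w, b):
    # Discriminator outputs P(sample is real) via logistic regression.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * noise + g_b
d_w, d_b = 0.1, 0.0   # discriminator weights
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = g_w * z + g_b
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = discriminate(real, d_w, d_b)
    p_fake = discriminate(fake, d_w, d_b)
    d_w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    # (non-saturating loss -log D(fake); gradient via the chain rule).
    p_fake = discriminate(g_w * z + g_b, d_w, d_b)
    g_w -= lr * np.mean((p_fake - 1) * d_w * z)
    g_b -= lr * np.mean((p_fake - 1) * d_w)

samples = g_w * rng.normal(size=(1000, 1)) + g_b
# The generated samples should have drifted toward the real mean of 4.0.
print(float(samples.mean()))
```

The same dynamic, scaled up to convolutional networks and millions of face photos, is what lets a model like StyleGAN2 produce portraits the discriminator, and apparently humans, can no longer flag as fake.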

With these results in hand, the scientists warn of the serious risks of deepfakes. “Synthetically generated faces are not only photorealistic, they are almost indistinguishable from real faces and are judged more trustworthy,” they say. Although that level of fidelity is certainly “a win for the field of computer graphics”, in other areas they believe it may prove far more problematic.

“Easy access to these high-quality fake images has caused and will continue to cause a number of problems, including more convincing fake profiles and, as synthetic audio and video generation continues to improve, problems of non-consensual intimate imagery, fraud, and disinformation campaigns, with serious implications,” the research team reflects.


They are certainly not the first to warn of the risks of deepfakes. AI-generated images and recordings are already being used to commit fraud, deceive people, or violate their privacy by falsifying videos and photos that are later turned into explicitly sexual content. This affects individuals, but also, experts emphasize, democracy itself.

Especially as the content becomes more realistic and inspires greater trust from the public. Its threat lies both in the falsified material itself and in the shadow of doubt it casts over the real thing. “Perhaps most pernicious is the consequence that, in a digital world where any image or video can be faked, the authenticity of any inconvenient or unwanted recording can be called into question,” the study authors conclude.
