Researchers are falling back on techniques from astronomy to spot computer-generated "deepfake" pictures, which can look identical to real photos at first glance.

By analyzing images of faces with methods normally used to investigate distant galaxies, astronomers can measure how a person's eyes reflect light, which can reveal telltale signs of image manipulation.

"It is not a panacea because we have false-positive and false-negative results," says Kevin Pimbabl, director of the Center for Data Science, Artificial Intelligence and Modeling at the University of Hull, UK. He presented research at the National Astronomy Meeting of the Royal Astronomical Society on July 15th. "But this research offers a potential method of an important step forward to possibly add to the tests that can be used to find out whether a picture is real or fake."

Telltale photos

Advances in artificial intelligence (AI) are making it increasingly difficult to tell apart real pictures, videos and audio from those created by algorithms. Deepfakes replace the features of one person or setting with another, and can make it look as if individuals said or did things they did not. Authorities warn that this technology can be weaponized to spread misinformation, for example during elections.

Real photos should have "consistent physics", explains Pimbblet, "so the reflections you see in the left eyeball should be very similar, although not necessarily identical, to the reflections in the right eyeball". The differences are subtle, so the researchers turned to techniques developed to analyze light in astronomical images.
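To make the "consistent physics" idea concrete, here is a minimal Python sketch, an illustration rather than the researchers' code: it assumes aligned, same-sized grayscale crops of the two eyes, and the function names and the 98th-percentile highlight threshold are invented for this example. It compares the brightest reflection regions of both eyes:

```python
import numpy as np

def highlight_mask(eye_patch: np.ndarray, quantile: float = 0.98) -> np.ndarray:
    """Binarize the brightest pixels of a grayscale eye crop.

    A specular reflection of a light source shows up as a small, very
    bright region; thresholding at a high quantile isolates it.
    (The 0.98 quantile is an assumed, illustrative value.)
    """
    threshold = np.quantile(eye_patch, quantile)
    return eye_patch >= threshold

def reflection_overlap(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks.

    In a real photo both eyes see roughly the same light sources, so
    the masks should overlap strongly; a low score hints at
    inconsistent physics.
    """
    left, right = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(left, right).sum()
    return np.logical_and(left, right).sum() / union if union else 0.0
```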

The work, which has not yet been published, formed the basis of a master's thesis by Adejumoke Owolabi. Owolabi, a data scientist at the University of Hull, UK, sourced real images from the Flickr-Faces-HQ dataset and created fake faces with an image generator. Owolabi then analyzed the reflections of light sources in the eyes in the pictures using two astronomical measurements: the CAS system and the Gini index. The CAS system quantifies the concentration, asymmetry and smoothness of an object's light distribution. For decades, the technique has allowed astronomers, including Pimbblet, to characterize the light of distant galaxies. The Gini index measures the inequality of the light distribution in images of galaxies.
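As a rough sketch of what these two statistics measure, a simplified illustration rather than the published analysis: the sorted-value Gini formula is standard, but the stripped-down asymmetry term and all names here are assumptions. Both can be computed from a grayscale eye crop in a few lines:

```python
import numpy as np

def gini_index(patch: np.ndarray) -> float:
    """Gini coefficient of pixel fluxes: 0 if the light is spread
    perfectly evenly, approaching 1 if it is concentrated in one pixel.

    Uses the sorted-value form G = sum_i (2i - n - 1) x_i / (n sum x).
    """
    x = np.sort(np.abs(patch.ravel()).astype(float))
    n, total = x.size, x.sum()
    if n == 0 or total == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(((2 * i - n - 1) * x).sum() / (n * total))

def asymmetry(patch: np.ndarray) -> float:
    """One simplified form of the 'A' in the CAS system: compare the
    patch with itself rotated 180 degrees about its center; 0 means
    perfectly symmetric light."""
    p = patch.astype(float)
    denom = np.abs(p).sum()
    return float(np.abs(p - np.rot90(p, 2)).sum() / denom) if denom else 0.0
```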

[Image: a series of enlarged, annotated deepfake eyes showing inconsistent reflections in each eye.]

By comparing the reflections in a person's eyeballs, Owolabi could correctly predict whether an image was fake about 70% of the time. Ultimately, the researchers found that the Gini index was better than the CAS system at predicting whether an image had been manipulated.
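A hypothetical decision rule built on the gini_index sketch above might look like the following; the 0.1 margin is purely illustrative, and the roughly 70% figure refers to Owolabi's analysis, not to this toy threshold:

```python
def looks_fake(left_eye: np.ndarray, right_eye: np.ndarray,
               margin: float = 0.1) -> bool:
    """Flag an image when the Gini indices of the two eye crops
    disagree by more than an (assumed) margin."""
    return abs(gini_index(left_eye) - gini_index(right_eye)) > margin
```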

Brant Robertson, an astrophysicist at the University of California, Santa Cruz, welcomes the research. "However, if you can calculate a value that quantifies how realistic a deepfake image may appear, you can also train the AI model to produce even better deepfakes by optimizing that value," he warns.

Zhiwu Huang, an AI researcher at the University of Southampton, UK, says that his own research has not identified inconsistent light patterns in the eyes of deepfake images. But "while the specific technique of using inconsistent corneal reflections might not be broadly applicable, such techniques could be helpful for analyzing subtle anomalies in lighting, shadows and reflections in different parts of an image," he says. "Detecting inconsistencies in the physical properties of light could complement existing methods and improve the overall accuracy of deepfake detection."