Data processing and transfer methods improve every year. A scientist from the Perm Polytechnic University describes a modern technology that makes it easy to manipulate audio and video content, and gives advice on how to protect ourselves from reputational damage.
The term “deepfake”, a blend of “deep learning” and “fake”, refers to an AI-based image synthesis technology.
“This is a trendy term with not much real substance behind all the hype. The term ‘deep learning’ is something of a fake in itself, because there is no such thing as deep learning. What is usually called ‘machine learning’ is just the ability to pick weight coefficients for certain algorithms – algorithms that look complicated to a non-specialist but are still developed by humans. ‘Deep’ simply means that there is a great number of coefficients,” explains Daniil Kurushin (Ph.D. in Engineering), Associate Professor of Information Technology and Automated Systems at the Electrical Engineering Department of the Perm Polytechnic University.
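The point in the quote above can be illustrated with a toy example. The following is a minimal sketch of my own (not deepfake code, and not anything from the interview): an algorithm written by a human, where the “learning” consists entirely of adjusting two weight coefficients until they fit the data. The function and variable names are hypothetical.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Pick weight coefficients w (slope) and b (intercept) for y = w*x + b
    by plain gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # The "learning" step: nudge the coefficients downhill
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by the rule y = 2x + 1; the algorithm recovers w ≈ 2, b ≈ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_line(xs, ys)
```

A deep network differs from this sketch mainly in scale, with millions of coefficients instead of two, which is exactly the sense of “deep” described above.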
Surprisingly, this technology already makes it easy to manipulate audio and video content, and as it improves, exposing a fake is becoming more and more difficult. Anyone can be portrayed in a bad light, deceived, or defamed.
“The only way to make creating a deepfake challenging is to restrict publication of one’s own images in the public domain. However, since online malefactors are particularly interested in public figures and celebrities who often appear in the media, reducing the possibility of such violations requires smart government regulation of the internet. It would be wrong to assume that in the past it was impossible to create a fake or an extremely believable photo montage that could not be exposed. The capabilities of deepfakes were anticipated in Robert A. Heinlein’s Double Star. Although the novel is set in a fantastic world, its technology for creating fakes was inspired by the author’s contemporary reality: a well-prepared actor with heavy makeup. In the epilogue to The Master and Margarita, Mikhail Bulgakov explained away the escapades of Woland and his retinue as, in effect, a deepfake. These days you don’t need an actor, because computer graphics can do the job perfectly, and this technology is becoming ever more accessible.”
It is often difficult to tell a deepfake from real content without professional tools. Still, it is quite possible to expose a fake using a scientific approach.
“Try not to react to a publication, especially if it involves manipulation and appeals to emotions. If there is a lot of talk and speculation around something, it is most likely a fake, and you should ignore it. Wait until the creator gives themselves away or until contradictory facts emerge. It is important to remember that reality is a system whose components affect each other,” Daniil Kurushin adds.
The scientist stresses that deepfakes themselves are not as dangerous as people who react quickly and emotionally, or as journalists and bloggers who fail to double-check what they publish. This is what allows an emotionally charged text or video clip released by a malefactor to go viral and spread quickly across the internet.