California researchers were the first to demonstrate that a deepfake detector can be tricked, as reported by the press service of Cornell University. The researchers declined to publish their code openly so that it could not be used to distort reality for illegal purposes.
Deepfake technology makes it possible to replace the face of any subject with someone else's, and the resulting image looks quite believable. With a deepfake, one can create realistic footage of events that never actually happened.
According to the work of these American computer-engineering specialists, a new method offers a fairly simple way of "cheating" deepfake detectors. The algorithm was presented at the Winter Conference on Applications of Computer Vision (WACV 2021), held in January of this year. The researchers argue that their result demonstrates the unreliability of even the most advanced video-authentication tools available.
The deception method embeds specially crafted adversarial examples into each frame of the video. The effect is truly discouraging: even a hastily concocted deepfake is classified by the detector as a genuine recording, even though the unnatural behavior of the person in the video is obvious to the naked eye. The researchers demonstrated such deception in practice.
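The authors did not release their code, but the general idea of an adversarial perturbation can be illustrated with a minimal FGSM-style sketch against a hypothetical frame classifier. Everything below is an assumption for illustration: the `detector` model, its single-logit output convention, and the step size `eps` are not the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb_frame(detector, frame, eps=2 / 255):
    """Minimal FGSM-style sketch: nudge a single video frame so that a
    binary deepfake detector scores it as 'real'. Assumes `detector`
    maps a (1, C, H, W) batch in [0, 1] to one logit where > 0 means
    'fake'. Illustrative only; not the published attack."""
    frame = frame.clone().detach().requires_grad_(True)
    logit = detector(frame.unsqueeze(0)).squeeze()  # fake-vs-real logit
    # Loss is high when the detector says 'fake'; we descend on it
    # toward the 'real' target label (0.0).
    loss = F.binary_cross_entropy_with_logits(logit, torch.tensor(0.0))
    loss.backward()
    # Step against the gradient sign, keep pixels in the valid range.
    adv = frame - eps * frame.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Applied to every frame of a video, a step like this leaves the visible content essentially unchanged while flipping the detector's verdict.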
Importantly, the trick survives video transcoding and works against many of the most advanced detectors. The experts conclude that, because these forgery-detection algorithms share similar weaknesses, it is quite easy to bypass them all.
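Robustness to transcoding is typically achieved by optimizing the perturbation against random compression-like distortions, an approach known as expectation over transformations. A hedged sketch of that idea follows; the noise used to stand in for codec artifacts, the `detector` interface, and all hyperparameters are purely illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def robust_perturb_frame(detector, frame, eps=4 / 255, steps=20, lr=1 / 255):
    """Sketch of expectation over transformations: optimize a small
    perturbation so it still fools the detector after compression-like
    distortions (simulated here with random noise; a real attack would
    model the actual codec). Illustrative assumptions throughout."""
    delta = torch.zeros_like(frame, requires_grad=True)
    target = torch.tensor(0.0)  # the 'real' label
    for _ in range(steps):
        # Simulate transcoding artifacts with random pixel noise.
        noisy = (frame + delta + 0.01 * torch.randn_like(frame)).clamp(0, 1)
        logit = detector(noisy.unsqueeze(0)).squeeze()
        loss = F.binary_cross_entropy_with_logits(logit, target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # gradient-sign step
            delta.clamp_(-eps, eps)          # keep perturbation small
        delta.grad.zero_()
    return (frame + delta).clamp(0, 1).detach()
```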
Such techniques will only get better, which again calls into question the very idea of trusting video footage. More precisely, it raises the concern that manipulating public opinion may prove far easier for attackers than law enforcement agencies or the public would like. While the first wave of deepfakes was treated as mere entertainment, the danger of the technology soon became obvious.
People are used to trusting what they see with their own eyes, and a well-made deepfake can be indistinguishable from a real video. Sometimes it is not even necessary to replace a person's face: it is enough to synchronize lip movements with fabricated speech. The range of possible uses for such manipulation is practically limitless, and hundreds of teams around the world are developing detectors for such fakes.