Deepfakes are hyper-realistic videos that have been digitally altered to depict people saying and doing things that never occurred. The term combines “deep learning” and “fake.” Deepfakes use neural networks trained on large data samples to learn to mimic a person’s facial expressions, mannerisms, voice, and inflections.
The process involves feeding footage of two people into a deep-learning algorithm and training it to swap their faces. In other words, deepfakes use facial-mapping technology and AI to switch one person’s face in a video for another person’s face.
Deepfakes first gained attention in 2017, when a Reddit user posted manipulated pornographic videos of celebrities. Since then, deepfakes have become increasingly common. They are challenging to spot because they build on real video and audio and can be designed to spread quickly on social media. As a result, many viewers believe the videos they are watching are authentic.
Methods to make deepfakes
There are two main methods to make deepfakes. The first is usually adopted for ‘face-swapping’ (i.e., placing one person’s face onto someone else’s) and requires thousands of face shots of the two people to be run through an AI algorithm called an encoder. The encoder finds and learns the similarities between the two faces and distills them down to their shared features, compressing the images in the process. A second AI algorithm, called a decoder, is then trained to recover the faces from the compressed images: one decoder recovers the first person’s face, and another recovers the second person’s. The face swap itself is performed by feeding encoded images to the ‘wrong’ decoder, repeated across as many video frames as needed to make a convincing deepfake.
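The shared-encoder, twin-decoder idea can be sketched with a toy linear autoencoder. Everything below is a hypothetical stand-in: random vectors play the role of face images, and single matrices replace the deep convolutional networks used in real face-swap systems. The point is only the “wrong decoder” trick that produces the swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images: 100 flattened 8-"pixel" faces per person.
# Real systems train on thousands of aligned face crops.
faces_a = rng.normal(size=(100, 8))
faces_b = rng.normal(size=(100, 8))

# One SHARED encoder compresses every face into a 4-dim latent code;
# each person gets their OWN decoder that reconstructs faces from that code.
enc = rng.normal(scale=0.1, size=(8, 4))
dec_a = rng.normal(scale=0.1, size=(4, 8))
dec_b = rng.normal(scale=0.1, size=(4, 8))

def train_step(faces, enc, dec, lr=0.01):
    """One gradient-descent step on the mean squared reconstruction error."""
    z = faces @ enc                       # encode into the shared latent space
    err = (z @ dec) - faces               # reconstruction error
    n = len(faces)
    dec_grad = z.T @ err / n              # gradient w.r.t. this person's decoder
    enc_grad = faces.T @ (err @ dec.T) / n  # gradient w.r.t. the shared encoder
    return enc - lr * enc_grad, dec - lr * dec_grad

# Alternate training on both people, so the single encoder learns features
# common to both faces while each decoder specializes in one person.
for _ in range(5000):
    enc, dec_a = train_step(faces_a, enc, dec_a)
    enc, dec_b = train_step(faces_b, enc, dec_b)

# The swap: encode person A's face, then decode with person B's decoder.
swapped = (faces_a @ enc) @ dec_b
```

In a real pipeline this swap is applied frame by frame, and the encoder and decoders are deep convolutional networks rather than single matrices.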
The second key method to make deepfakes is the generative adversarial network (GAN). A GAN pits two AI algorithms against each other to create brand-new images. The first algorithm, the generator, is fed random data and produces a new image. The second algorithm, the discriminator, checks each generated image to see whether it corresponds with known data (i.e., known images or faces). This contest essentially forces the generator to create incredibly realistic photos (e.g., of celebrities) in its attempts to fool the discriminator.
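The adversarial loop can be illustrated with a deliberately tiny example: instead of images, the “real data” here is a one-dimensional Gaussian, the generator is a linear map, and the discriminator is a logistic classifier. All hyperparameters and dimensions are illustrative choices, not anything from a production GAN, which would use deep networks on both sides.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    # Clipped for numerical stability.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# "Real data": samples from N(4, 1) stand in for real images.
def real_batch(n=64):
    return rng.normal(loc=4.0, scale=1.0, size=n)

# Generator g(z) = a*z + b turns random noise z into fake samples.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

lr = 0.05
for _ in range(3000):
    # --- Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    real = real_batch()
    z = rng.normal(size=64)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator step: push d(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(size=64)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should have drifted toward the real
# distribution centered at 4.
samples = a * rng.normal(size=1000) + b
```

The discriminator’s feedback is the only training signal the generator receives; in an image GAN the same push-and-pull happens over the weights of two deep networks, until the generator’s output is hard to tell from real photographs.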
Such images have been used to create fake yet realistic pictures of people, often with harmful consequences. For example, a McAfee team used a GAN to fool a facial-recognition system like those currently used for airport passport verification. The team relied on state-of-the-art open-source facial-recognition algorithms, which tend to be quite similar to the proprietary systems in commercial use, raising broader concerns about the security of facial-recognition systems.
Benefits of deepfakes
Deepfake technology has many practical applications in movies, educational media, digital communications, games, entertainment, social media, healthcare, materials science, and business sectors such as e-commerce and fashion. The movie business, in particular, stands to benefit. For instance, deepfakes can be used to create digital voices for actors whose voices were lost to illness, or to update film footage rather than reshoot it. Moviemakers can reenact classic movie scenes, make new films starring actors who have passed away, apply special effects and advanced face editing in post-production, and turn amateur videos into polished ones.
Deepfake technology also enables automatic and lifelike voice dubbing for films in any language, enhancing the viewing experience for diverse audiences. David Beckham broke down language barriers in a 2019 global malaria awareness campaign by using technology that changed his appearance and voice to make him appear multilingual.
Similarly, deepfake technology can overcome the language barrier during video-conference calls by translating speech while also changing mouth movements and facial expressions, improving eye contact and making it appear as though everyone is speaking the same language. The same technology enables digital doubles of people, enhanced telepresence, and natural-sounding intelligent assistants in multiplayer games and virtual chat environments, leading to better online interactions and relationships.
Deepfake technology can also be beneficial in the social and medical sectors. By digitally “bringing back to life” a deceased friend and enabling a grieving loved one to say goodbye, deepfakes can help people cope with loss. Additionally, the technology allows transgender people to see themselves more accurately as their preferred gender and can digitally recreate an amputee’s lost limb.
Even people with Alzheimer’s can benefit from deepfake technology by interacting with a younger face they may remember. Researchers are also investigating the use of GANs to detect anomalies in X-rays and to accelerate the development of new materials and medical treatments. Businesses are intrigued by brand-applicable deepfake technology because it has the potential to drastically change advertising and e-commerce.
Brands can, for instance, feature supermodels who aren’t really supermodels, using models of various heights, weights, and skin tones to display clothing. Deepfakes also enable highly personalized content that turns users themselves into models. For example, the technology allows for virtual fittings, so users can see how an outfit will look on them before buying it, and for targeted fashion ads that change depending on the viewer, the time, and the weather.
The technology enables people to create digital copies of themselves and have these personal avatars travel with them through e-stores, try on a bridal gown or suit in digital form, and then virtually experience a wedding venue. Being able to quickly try on clothes online is an obvious potential use for it. Additionally, AI can offer distinctive synthetic voices that distinguish brands and products to make branding easier.
Threats of deepfakes
Deepfakes pose a severe threat to our society, political system, and economy because they put pressure on journalists trying to distinguish between real and fake news, jeopardize national security by spreading propaganda and meddling in elections, undermine public confidence in government information, and raise cybersecurity concerns for both individuals and businesses.
Due to deepfakes, the journalism sector is likely to experience severe problems with consumer trust. Deepfakes are more dangerous than “traditional” fake news because they are harder to detect and lead viewers to believe false information is real. The technology enables the production of news videos that appear legitimate but aren’t, endangering the credibility of journalists and the media.
Additionally, access to video footage shot by a witness to an incident can give a news outlet a competitive edge; but if the provided footage is fake, the risk of being duped rises. During the 2019 spike in tensions between India and Pakistan, Reuters found 30 fake videos about the conflict, most of them old videos from other events reposted with new captions. With the rise of deepfakes, this problem of misattributed video footage, such as a real protest march or a violent skirmish captioned to suggest it occurred somewhere else, will only get worse. While searching for eyewitness accounts of the mass shooting in Christchurch, New Zealand, Reuters found a video that purported to capture the moment police killed a suspect.
They soon realized, however, that the video was from a different incident in the United States, and that the Christchurch shooting suspect had not been killed. The intelligence community worries that deepfakes will be used to undermine election campaigns and endanger national security by spreading political propaganda. U.S. intelligence officials have repeatedly warned of foreign interference in American politics, especially in the months before elections. In today’s disinformation wars, putting someone else’s words into a viral video is potent, because such altered videos can easily sway voter opinion.
A foreign intelligence agency could create a deepfake video of a politician using racial slurs or accepting a bribe, a presidential candidate confessing to a crime or warning another nation of an impending war, a government official caught in a questionable situation or admitting to a covert conspiracy, or American soldiers committing war crimes such as murdering civilians abroad. Such faked videos would likely cause domestic unrest, riots, and disruptions of elections, and other nation-states might even base their foreign policy on the fabrication, potentially triggering international conflicts.