Editor’s Note: With the potential for deepfakes to create malicious propaganda and other forms of fraud becoming increasingly significant in today’s digitally driven world of communications, understanding the technology behind deepfakes may benefit data and legal discovery professionals as they seek to evaluate and authenticate electronically stored information. In this article extract from Cointelegraph author Sharaz Jagati, several industry experts, including Steve McNew of FTI Consulting, consider and comment on the deep truths of deepfakes.
Deep Truths of Deepfakes — Tech That Can Fool Anyone
An extract from an article by Sharaz Jagati via Cointelegraph
From a technical standpoint, visual deepfakes are devised using machine learning tools that decode images of the two individuals’ faces, stripping each facial expression down to a matrix of key attributes, such as the positions of the target’s nose, eyes, and mouth. Finer details, such as skin texture and facial hair, are given less importance and can be treated as secondary.
The deconstruction is generally performed so that the original image of each face can almost always be fully recreated from its stripped-down elements. Additionally, one of the primary measures of a quality deepfake is how well the final image is reconstructed, such that any movements in the imitator’s face are mirrored in the target’s face as well.
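The encode-then-reconstruct idea above can be illustrated with a deliberately tiny sketch. Everything here is a hypothetical simplification for clarity, not code from any real deepfake tool: faces are reduced to a short list of made-up landmark values, “encoding” strips out a person’s identity by subtracting their neutral pose, and “decoding” re-applies a (possibly different) person’s identity.

```python
# Hypothetical landmark vectors: [nose_x, nose_y, mouth_open, eye_gap]
# (invented values and features, purely for illustration)
face_a_neutral = [0.50, 0.40, 0.10, 0.30]   # person A at rest
face_b_neutral = [0.48, 0.45, 0.05, 0.28]   # person B at rest

def encode(frame, neutral):
    # Strip identity: keep only the expression offsets from that
    # person's neutral pose (the "matrix of key attributes")
    return [f - n for f, n in zip(frame, neutral)]

def decode(expression, neutral):
    # Reconstruct a face by re-applying an identity to the expression
    return [e + n for e, n in zip(expression, neutral)]

# A frame of person A smiling (mouth_open raised from 0.10 to 0.60)
frame_a = [0.50, 0.40, 0.60, 0.30]

expression = encode(frame_a, face_a_neutral)      # identity-free movement
deepfake_b = decode(expression, face_b_neutral)   # B now appears to smile
```

Because the expression is stored separately from the identity, any movement captured from the imitator (person A) is mirrored onto the target (person B) at reconstruction time, which is the property the article describes.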
To elaborate on the matter, Matthew Dixon, an assistant professor and researcher at the Illinois Institute of Technology’s Stuart School of Business, told Cointelegraph that both face and voice can be easily reconstructed through certain programs and techniques, adding that:
“Once a person has been digitally cloned it is possible to then generate fake video footage of them saying anything, including speaking words of malicious propaganda on social media. The average social-media follower would be unable to discern that the video was fake.”
Similarly, speaking on the finer aspects of deepfake technology, Vlad Miller, CEO of Ethereum Express — a cross-platform solution based on an innovative model with its own blockchain that uses a proof-of-authority consensus protocol — told Cointelegraph that deepfakes are simply a way of synthesizing human images using a machine learning technique called a generative adversarial network (GAN), an algorithm that deploys a combination of two neural networks.
The first generates the image samples, while the second distinguishes the real samples from the fake ones. A GAN’s operation can be compared to the work of two people: the first is engaged in counterfeiting, while the other tries to distinguish the copies from the originals. If the first algorithm offers an obvious fake, the second will immediately detect it, after which the first improves its work by offering a more realistic image.
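The counterfeiter-versus-detective loop can be sketched in a deliberately tiny, pure-Python toy. This is an illustrative assumption-laden reduction, not a real GAN: real GANs train deep networks on images, whereas here the “generator” is a one-parameter line trying to counterfeit numbers drawn near 4.0, and the “discriminator” is a logistic classifier trying to tell real numbers from fakes. The learning rates, step counts, and 1-D setup are all invented for the sketch.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0   # "originals": real samples cluster around this value
LR = 0.05         # illustrative learning rate for both players
STEPS = 2000

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator ("counterfeiter"): g(z) = w*z + b
w, b = 1.0, 0.0
# Discriminator ("detective"): D(x) = sigmoid(a*x + c)
a, c = 0.0, 0.0

def fake_mean(w, b, n=500):
    return sum(w * random.uniform(-1, 1) + b for _ in range(n)) / n

before = fake_mean(w, b)   # fakes start clustered around 0, far from 4

for _ in range(STEPS):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # i.e. gradient descent on -log D(real) - log(1 - D(fake))
    x_real = random.gauss(REAL_MEAN, 0.5)
    z = random.uniform(-1, 1)
    x_fake = w * z + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= LR * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= LR * (-(1 - d_real) + d_fake)

    # Generator step: make the fake fool the detective,
    # i.e. gradient descent on -log D(fake)
    z = random.uniform(-1, 1)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    w -= LR * (-(1 - d_fake) * a * z)
    b -= LR * (-(1 - d_fake) * a)

after = fake_mean(w, b)    # fakes have drifted toward the real data
```

Each time the detective catches an obvious fake (low `D(fake)`), the generator’s gradient pushes its output toward the real distribution, mirroring the improvement cycle the article describes.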
Regarding the negative social and political implications that deepfake videos can have on the masses, Steve McNew, an MIT-trained blockchain and cryptocurrency expert and senior managing director at FTI Consulting, told Cointelegraph:
“Online videos are exploding as a mainstream source of information. Imagine social media and news outlets frantically and perhaps unknowingly sharing altered clips — of police bodycam video, politicians in unsavory situations or world leaders delivering inflammatory speeches — to create an alternate truth. The possibilities for deepfakes to create malicious propaganda and other forms of fraud are significant.”