
Unmasking Deepfakes: Defending Against a Growing Threat

Source: https://www.informationweek.com/security-and-risk-strategy/unmasking-deepfakes-defending-against-a-growing-threat-

Organizations guarding against deepfake videos must rely on a compendium of tools and follow their own visual instinct to root out threats.
[Image: A business person manipulating deepfake technology. Credit: Kirill Ivanov via Alamy Stock]

Deepfakes -- synthetic media that use artificial intelligence and deep learning to fabricate images or videos of people saying or doing things that never happened, or of individuals who never existed at all -- are proving increasingly tricky to spot.

The often viral nature of these digital confections makes them dangerous, and the fakes themselves are becoming harder to detect.

There is a race between ever more sophisticated methods of creating deepfakes and the methods used to detect and expose them.

While many deepfakes are lighthearted, the technology has been democratized: a convincing fake can now be built from a mere 30 seconds of footage or a handful of images, rather than thousands.

Everyone’s identity, even that of a person with a small digital footprint, is now at risk of impersonation and fraud.

“The capacity for abuse and disinformation compounds with deep learning’s effect on deepfake development as bad actors spread convincing lies or make compelling, defamatory impersonations,” warns Ricardo Amper, founder and CEO at Incode.

Deepfake Production Goes Automated

Bud Broomhead, CEO at Viakoo, explains that creating deepfakes has become relatively automated and accessible, with several open source projects, such as DeepFaceLab, FaceSwap, and FSGAN, that continue to be improved upon.

“Commercial tools like Adobe After Effects and Reallusion’s iClone can also be used, despite their commercial licenses restricting their use for creating malicious deepfakes,” he adds.

For video or audio whose origins cannot be established, there are methods that can help detect deepfakes, such as looking for anomalies in backgrounds, lighting, or motion.

“For example, looking at the reflections from the eyes has been used to determine if the video is real or a deepfake,” Broomhead says. “Similarly, looking at high resolution around a face might show some blurring or pixelation from it being a deepfake.”
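
As a rough illustration of the kind of check Broomhead describes (and not any specific vendor tool), a script can compare the sharpness of the detected face region against the rest of the frame; a face that is noticeably blurrier or more pixelated than its surroundings is a reason to look closer. The sketch below uses OpenCV, and the frame path is a placeholder.

```python
# A rough heuristic sketch, not a production detector: compare the sharpness
# of the detected face region against the rest of the frame. A face that is
# much blurrier than its surroundings merits a closer manual look.
import cv2

def face_sharpness_ratio(frame_path: str):
    frame = cv2.imread(frame_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Haar cascade face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                      # no face found, nothing to compare

    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # Variance of the Laplacian is a common, crude sharpness measure.
    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Ratios well below 1.0 mean the face is noticeably blurrier than the
    # frame overall -- a flag for review, not proof of a deepfake.
    return face_sharpness / (frame_sharpness + 1e-9)

print(face_sharpness_ratio("frame.png"))   # hypothetical frame grabbed from the video
```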

For audio files, analyzing and filtering the audio might reveal differences in background noise compared with what the original recording would contain.
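
One way to make that audio check concrete, sketched below under the assumption that a known-genuine reference clip recorded at the same sample rate is available, is to estimate the persistent background-noise spectrum of each file and measure how far apart the two profiles are. The file names are placeholders.

```python
# A minimal sketch, assuming a known-genuine reference clip at the same sample
# rate: estimate the persistent background-noise spectrum of each file and
# measure how far apart the two profiles are.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def noise_floor_profile(wav_path: str, quantile: float = 0.1) -> np.ndarray:
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
    # A low quantile over time in each frequency bin is a rough estimate of
    # the persistent background noise rather than the speech content.
    return np.quantile(power, quantile, axis=1)

suspect = noise_floor_profile("suspect_clip.wav")       # hypothetical file names
reference = noise_floor_profile("reference_clip.wav")
n = min(len(suspect), len(reference))
distance = np.linalg.norm(np.log10(suspect[:n] + 1e-12) -
                          np.log10(reference[:n] + 1e-12))
print(f"log-spectral noise-floor distance: {distance:.2f}")   # larger = more different
```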

Apart from automated tools, some basic checks should be performed on any video or audio file suspected of being a deepfake.

“Looking at the metadata to see file details such as when it was created, who authored it, file size, and so forth might give some clues,” Broomhead notes. “Checking to see if other versions or sources of the suspected file are available is another way.”
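
Those basic checks can be scripted with nothing more than the standard library. The sketch below surfaces file-system metadata and computes a SHA-256 fingerprint that can be matched against other copies of a clip; the file name is a placeholder, and file-system timestamps describe the local copy rather than the original recording.

```python
# A standard-library sketch of the basic checks described above: surface the
# file-system metadata and compute a hash that can be compared against any
# other copies of the clip that turn up.
import hashlib
import os
from datetime import datetime, timezone

def basic_file_checks(path: str) -> dict:
    stat = os.stat(path)
    sha256 = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha256.update(chunk)
    return {
        "size_bytes": stat.st_size,
        # File-system timestamps describe the local copy, not the original
        # recording; embedded (container) metadata needs a format-aware tool.
        "modified_utc": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        "sha256": sha256.hexdigest(),     # match against other versions or sources
    }

print(basic_file_checks("suspect_video.mp4"))   # hypothetical file name
```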

Spotting Deepfakes

John Bambenek, principal threat hunter at Netenrich, agrees that there are sometimes abnormal audio or visual effects, including odd lighting or shadows, pixelation, and unnatural movement of facial features.

“There are tools to attempt to detect these or try to find the presence of ‘real’ and ‘synthetic’ audio or visual material in the same file,” he says. “The same truth about authentication of audio or visual content is true about authentication in the technical systems of identity.”

Amper says that while the technology is maturing rapidly toward lifelike, intelligent impersonations, the human eye can still spot blurring around the ears or hairline, unnatural blinking patterns, or differences in image resolution.

“Color amplification tools that visualize blood flow or ML algorithms trained on spectral analysis are equally effective at detecting and vetting extreme behavior,” he says.
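
As one hedged illustration of the spectral-analysis idea Amper mentions, the sketch below computes an azimuthally averaged Fourier power spectrum of a single frame; synthesized faces often show irregularities in the high-frequency bins, and profiles like this are a common feature input for a downstream ML classifier (not shown here). The frame path is a placeholder.

```python
# A hedged sketch of the spectral-analysis idea: compute a 1-D azimuthally
# averaged Fourier power spectrum of a frame. GAN-synthesized faces often show
# irregularities in the high-frequency bins of such a profile.
import numpy as np
from PIL import Image

def radial_power_spectrum(image_path: str, bins: int = 64) -> np.ndarray:
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = power.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    radius = (radius / radius.max() * (bins - 1)).astype(int)

    # Average power within each radial bin: low bins capture coarse structure,
    # high bins the fine detail where synthesis artifacts tend to appear.
    totals = np.bincount(radius.ravel(), weights=power.ravel(), minlength=bins)
    counts = np.bincount(radius.ravel(), minlength=bins)
    return totals / np.maximum(counts, 1)

print(radial_power_spectrum("frame.png")[-8:])   # hypothetical frame; inspect high-frequency bins
```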

He says that although contemporary deepfakes are extremely well done and increasingly hard to recognize, digital identity verification and liveness detection can authenticate a person’s unique identity markers.

Once a user has been confirmed as the genuine owner of the real-world identity they are claiming, deep convolutional neural networks can be trained and leveraged for biometric liveness checks, including textural analysis, geometry calculation, or traditional challenge-response mechanisms, to verify whether the person presented on screen is real.

Active liveness detection asks users to follow instructions such as “rotate your head,” while advanced passive liveness detection solutions use selfies to guard against video replays and reprojections.
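
The control flow of an active challenge-response check might look like the sketch below. The head-pose and blink estimators, along with the camera capture hook, are hypothetical placeholders rather than any vendor’s API; the point is that randomizing the prompts makes pre-recorded replays harder to pass off.

```python
# A control-flow sketch of an active challenge-response liveness check.
# estimate_head_yaw, count_blinks, and capture_frames are hypothetical
# placeholders for real pose/blink estimators and a camera capture hook.
import random
from typing import Callable

Challenge = tuple[str, Callable[[list], bool]]

def build_challenges(estimate_head_yaw, count_blinks) -> list[Challenge]:
    return [
        ("Slowly rotate your head to the left",
         lambda frames: min(estimate_head_yaw(f) for f in frames) < -20.0),
        ("Slowly rotate your head to the right",
         lambda frames: max(estimate_head_yaw(f) for f in frames) > 20.0),
        ("Blink twice",
         lambda frames: count_blinks(frames) >= 2),
    ]

def run_active_liveness(capture_frames, challenges: list[Challenge], rounds: int = 2) -> bool:
    # Randomizing which prompts are issued makes pre-recorded replay attacks
    # harder, since the attacker cannot predict the requested action.
    for prompt, verify in random.sample(challenges, k=rounds):
        print(f"Prompt shown to user: {prompt}")
        frames = capture_frames(seconds=3)    # hypothetical: grab ~3 seconds of webcam frames
        if not verify(frames):
            return False
    return True
```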

“The challenge becomes robust training data for AI to generalize well enough and cope against attacks,” Amper says.

Defending Against Deepfakes

Broomhead says that, at a basic level, the origin of video or audio recordings should be known, and he advises organizations to follow a “chain-of-custody” process to ensure the contents have not been manipulated.

“For example, if it is surveillance video, there should be a service assurance solution used that can analyze the bitstream coming from the camera device so it can be compared to the bitstream of the stored or shared video,” he says.
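
A minimal way to approximate that chain-of-custody idea in code, assuming the recording pipeline can hash the bitstream in segments as it arrives from the camera, is to chain each segment’s hash to the previous one so that any later alteration of the stored or shared copy breaks the chain. The sketch below is generic and not a Viakoo feature.

```python
# A minimal chain-of-custody sketch, not a Viakoo feature: hash the bitstream
# in segments as it comes off the camera, chaining each segment's hash to the
# previous one so any later alteration of the stored copy breaks the chain.
import hashlib

def build_hash_chain(segments) -> list[str]:
    chain, prev = [], b""
    for segment in segments:          # segments: iterable of raw bitstream chunks
        digest = hashlib.sha256(prev + segment).hexdigest()
        chain.append(digest)
        prev = digest.encode()
    return chain

def verify_against_chain(segments, recorded_chain: list[str]) -> bool:
    # Recompute the chain over the stored or shared copy and compare link by
    # link; the first mismatch pinpoints where the content diverges from capture.
    return build_hash_chain(segments) == recorded_chain
```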

Maintaining cyber hygiene and performing asset discovery and threat assessment on video and audio devices, as well as the networks and storage they use, can reduce the possibility of a threat actor gaining the access needed to plant and distribute deepfakes.

AI’s Role in Defense and Detection

Just as generative AI is being used in harmful or illicit activities, AI is already being used to detect deepfakes.

“The responsibility lies with AI leaders to ensure safety and consent by design and support legislation that educates in this direction,” Amper says. “In today’s climate of doubt, many harmless and beneficial applications serve as symbols of creativity, providing us with a rare opportunity to restore trust.”

Bambenek cautions that ultimately, detecting deepfakes will not be technically possible.

“That said, the primary victim isn’t high-profile,” he says. “These are most often used for creating synthetic revenge porn where the victims have no real ability to respond or protect themselves from the harassment generated.”


