Face swap in the mainstream: What deepfakes are and how to detect them

Community Reports

LOS ALAMOS, N.M. (KRQE) – A virtual discussion on artificial intelligence will be held on Monday, Jan. 11, led by Los Alamos National Laboratory scientist Juston Moore. He will discuss “deepfakes,” which are fake images, videos, audio and other media created by artificial intelligence algorithms.

Moore is a research scientist in the Advanced Cyber Systems group at Los Alamos National Laboratory, and the discussion is part of the lab’s ongoing Science on Tap series. “I think it’s important to realize that deepfakes aren’t the first version of misinformation. We’ve always been able to edit photo and video, and that’s existed for a long time,” Moore said.

Moore said the original versions of deepfakes appeared sometime between 2013 and 2015, when they were primarily used for malicious applications like pornography. Deepfakes have also been used for entertainment, which is how most people are likely to come across them. Within the span of five to six years, however, the realism of those videos has improved dramatically, to the point where Moore says it is becoming increasingly difficult to determine what is real and what is AI-generated.

Those fun, viral videos have been appearing recently, but in a few years they could carry more dangerous implications. “There’s two problems with deepfakes. One of them is that they enable people with very little sophistication and very little resources to do this quickly with pretty low-end computers,” Moore said. “The second problem is that the forensic techniques to identify manipulation of deepfakes are not as mature. The generation is getting better, faster than the detection side.”

Moore said there are currently a few things people can look for when trying to determine whether an image is artificially generated. He said the most common form of deepfake is the face swap, in which someone else’s face is superimposed on a person in a photo or video. The teeth tend to be blurry, the eyes often don’t line up and look strange, and there are frequently morphing artifacts around the face because the superimposed image does not perfectly align with the original. Facial hair is often another difficult thing to hide in these cases.
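As a rough illustration of how one of those cues might be checked automatically, the short Python sketch below compares the amount of fine detail inside a detected face with the rest of the frame; a face region that is much smoother than its surroundings is one weak hint of blending or pasting. This is a toy example, not the lab’s forensic approach, and the filename and threshold are placeholders.

```python
# Toy illustration (not Moore's method): flag faces whose interior is much
# blurrier than the rest of the frame, a crude stand-in for the "blurry
# teeth / smeared detail" cue described above. Uses OpenCV's bundled Haar
# face detector; the threshold and example filename are hypothetical.
import cv2


def laplacian_sharpness(gray_patch):
    """Variance of the Laplacian: higher means more fine detail (sharper)."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()


def flag_suspiciously_blurry_faces(image_path, ratio_threshold=0.5):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # OpenCV ships a pretrained frontal-face Haar cascade with the library.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    frame_sharpness = laplacian_sharpness(gray)
    flagged = []
    for (x, y, w, h) in faces:
        face_sharpness = laplacian_sharpness(gray[y:y + h, x:x + w])
        # A face carrying far less detail than its surroundings is one
        # (weak) hint that something was blended in over the original.
        if face_sharpness < ratio_threshold * frame_sharpness:
            flagged.append((x, y, w, h))
    return flagged


if __name__ == "__main__":
    print(flag_suspiciously_blurry_faces("suspect_frame.jpg"))
```

A check like this would raise plenty of false alarms on ordinary photos, which is part of Moore’s point: simple cues help a careful viewer, but reliable automated detection is still an open research problem.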

These are all things Moore and his team are researching, and while it’s important to be cautious about what’s online, chances are that much of what people come across isn’t going to look very realistic. During the webinar, Moore hopes to talk more broadly about generative AI, which is artificial intelligence that gives a computer the ability to have some sense of imagination. Deepfakes are one potential malicious use of that technology, but he will also discuss its positive aspects and some of the work being done at the lab.

Most of all, he wants to create a virtual discussion to start talking about these topics and their implications. “I’m looking forward to questions and spending time in an informal discussion with the community,” Moore said.

To register for the event, visit the Bradbury Science Museum website.

Copyright 2021 Nexstar Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
