AI is constantly in the news these days, with coverage highlighting the technology's potential for both good and harm. One topic generating a lot of buzz is the use of AI to create "deepfakes," a term coined in 2017. Deepfakes use neural networks to combine and superimpose existing images and videos onto source images or videos via a deep learning technique known as generative adversarial networks (GANs). Three of the most common deepfake techniques are "lip-sync," "face swap," and "puppet-master." Each of these techniques, however, can leave behind inconsistencies that a clever algorithm can uncover. Growing fear over a potential onslaught of deepfakes has spurred a slate of research efforts to detect deepfakes and other image- and video-tampering techniques, in an effort to use AI to combat deepfakes and fake news rather than create them.
[Related Article: Deep Learning Finds Fake News with 97% Accuracy]
In this article, I will provide a brief overview of several of these new efforts, and tie things up with an example of how AI is also being used to set back the proliferation of fake news.
AI to Combat Deepfakes
Here are a few of the most recent efforts to fight deepfakes:
- Head and Face Gestures – UC Berkeley graduate student Shruti Agarwal and her thesis adviser Hany Farid, a professor in the school's Department of Electrical Engineering and Computer Sciences, are working on a new approach for detecting deepfakes. Their AI algorithm detects face-swapped videos based on head and face quirks. People tend to have unique head movements, such as a statement of fact coupled with a nod of the head, and face gestures, such as smirking when making a point. A neural network trained on video footage of an individual's head and face quirks can flag videos containing gestures that don't belong to that person. The UC Berkeley researchers tested their model by training the neural network on actual videos of world leaders; the algorithm was able to detect deepfake videos with 92% accuracy. The drawback of the head-movement detector is that it must be trained separately for every individual. While this is fine for public figures such as politicians and celebrities, it's less than ideal for general-purpose deepfake detection. A recent paper in this area of detection, by a group of researchers from USC, is "Recurrent Convolutional Strategies for Face Manipulation Detection in Videos," by Ekraam Sabir et al.
- Unnatural Eye Movement – A group of researchers from the University at Albany, SUNY's Computer Science Department recently published a paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li et al. New developments in deep GANs have significantly improved the quality and efficiency of generating realistic-looking fake face videos. In the paper, the researchers describe a new method to expose fake face videos generated with neural networks. The method is based on detecting eye blinking in the videos, a physiological signal that is not well reproduced in synthesized fake videos. The team tested their methodology against benchmark eye-blinking data sets and found promising performance in detecting videos generated with deepfake techniques. Sadly, with the technology becoming more advanced every day, it's just a matter of time until someone manages to create deepfakes that blink naturally.
From "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li et al.
- Pixel Artifacts – Research led by Amit Roy-Chowdhury's Video Computing Group at the University of California, Riverside (UCR) has produced a deep neural network architecture that can identify manipulated images at the pixel level with high precision. Roy-Chowdhury is a Professor of Electrical and Computer Engineering at UCR. When forgers tamper with an image or video they try to make it look realistic, but the process nevertheless leaves behind artifacts that a well-trained deep learning algorithm can detect. The paper describing this research, "Hybrid LSTM and Encoder–Decoder Architecture for Detection of Image Forgeries," by Roy-Chowdhury and his team, was published in the July 2019 issue of IEEE Transactions on Image Processing. Pixels around the boundaries of objects that are artificially inserted into or removed from an image carry telltale characteristics, such as unnatural smoothing and feathering. The UCR researchers trained their model on a large data set of annotated tampered and untampered images, and the neural network learned the patterns that distinguish the boundaries of manipulated objects from non-manipulated ones. Although the research centered on images, it can potentially be extended to videos as well.
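To make one of these cues concrete, here is a minimal sketch of blink-based screening in the spirit of the eye-blinking approach above. It is not the researchers' actual model (they train a neural network); instead it uses the classic eye aspect ratio (EAR) heuristic over per-frame eye landmarks, and the threshold values (`closed_thresh=0.21`, a minimum of 4 blinks per minute) are illustrative assumptions. Obtaining the six eye landmarks per frame would require a separate facial-landmark detector.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    The ratio of vertical to horizontal eye extent drops toward zero
    when the eye closes, which is how blinks show up in the signal.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame, closed_thresh=0.21, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # a blink may end at the last frame
        blinks += 1
    return blinks

def looks_synthetic(ear_per_frame, fps=30, min_blinks_per_minute=4):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_per_frame) / fps / 60.0
    return count_blinks(ear_per_frame) / max(minutes, 1e-9) < min_blinks_per_minute
```

A clip of a real speaker yields periodic EAR dips and a normal blink count, while a synthesized face that never blinks produces a flat EAR series and gets flagged.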
While it’s reassuring to see these and other efforts to use AI to combat the scourge of deepfakes, they are up against continuing advances in technology. As deepfakes and their successors become more sophisticated, it’s unclear whether detection and defense methods will be able to keep up. In a very real sense, “bad AI” is competing with “good AI.” Only time will tell which wins.
Can AI Overcome Fake News?
AI is also being used to fight fake news. For example, CVP, a leading technology consulting company based in Fairfax, VA, has developed an automated fake news detector that can quickly predict whether an article is likely to be real or fake.
Fake news is a controversial and growing problem across news and social media, and CVP’s team of over 40 data scientists set out to show that AI could help. The company used a data set of 7,000 news articles, half from the mainstream media and the other half from known purveyors of fake news. It used Natural Language Processing (NLP) to clean up the data and perform tasks such as excluding common words, then trained several open-source machine learning algorithms, producing a model that was over 90% accurate in distinguishing articles from real vs. fake sources.
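The pipeline described above can be sketched in miniature. This is not CVP's actual code; it is a toy illustration of the same steps, assuming a stopword-removal cleaning pass and a simple multinomial Naive Bayes classifier standing in for the open-source algorithms CVP trained, with a four-article data set invented for demonstration.

```python
import math
import re
from collections import Counter

# Illustrative stopword list; a real pipeline would use a much larger one.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "that", "is", "was"}

def tokenize(text):
    """Lowercase, split on non-letter characters, and drop common stopwords."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

class NaiveBayes:
    """Multinomial Naive Bayes over word counts, with add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(tokenize(doc))
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.priors[c])
            for w in tokenize(text):
                lp += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Tiny invented training set: attributed reporting vs. share-bait phrasing.
docs = [
    "the senator said the budget passed",
    "officials said the report was accurate",
    "share this shocking article now",
    "you won't believe this shocking secret share it",
]
labels = ["real", "real", "fake", "fake"]
model = NaiveBayes().fit(docs, labels)
```

On this toy data, an attributed sentence like "the spokesperson said the vote passed" is classified as real, while share-bait phrasing lands on the fake side, previewing the word-level signals discussed below.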
“I was actually kind of floored that it was so accurate after just a couple runs,” said Cal Zemelman, CVP’s Director of Data Science & Engineering. “I figured this was a complex problem that wouldn’t be straightforward for the model to pick up on, but it turned out I was pessimistic.” Using a technique known as “explainable AI,” CVP’s data scientists applied a library called SHAP to the machine learning model to explain why it made the decisions it did.
SHAP stands for “SHapley Additive exPlanations” and is a unified approach to explaining the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods and representing the only possible consistent and locally accurate additive feature attribution method based on expectations. The SHAP GitHub repository contains the Python code libraries, Jupyter notebooks, a nice write-up of the technology, and citations for several academic papers.
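The game-theoretic idea underneath SHAP can be shown directly. The sketch below is not the SHAP library; it computes exact Shapley values by brute force, averaging each feature's marginal contribution over every order in which features switch from a baseline value to their actual value. The example model `f` and inputs are invented for illustration, and the exponential cost limits this to toy feature counts.

```python
import itertools

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x relative to a baseline.

    For every permutation of the features, flip each feature from its
    baseline value to its actual value in turn and record the change in
    f's output; averaging those changes gives each feature's attribution.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        z = list(baseline)
        for i in perm:
            before = f(z)
            z[i] = x[i]          # feature i joins the "coalition"
            phi[i] += f(z) - before
    return [p / len(perms) for p in phi]
```

For `f(z) = 3*z[0] + 2*z[1]*z[2]` at `x = [1, 1, 1]` with baseline `[0, 0, 0]`, feature 0 receives attribution 3 and the interaction term's contribution of 2 is split evenly between features 1 and 2. The additive property SHAP relies on holds exactly: the attributions sum to `f(x) - f(baseline)`.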
[Related Article: Is AI the Future of Digital Influencers?]
The biggest factor the model keyed in on was that fake news writers tend to state opinions as facts and don’t bother to quote or attribute statements to people. The explanations showed CVP that the absence of the word “said” was a huge indicator of fake news, because the authors rarely wrote about who said what. The company found that other terms, such as “president,” were generally correlated with real news, while the words “share” and “article” were associated with fake news, likely because fake news authors place a premium on getting their message widely shared on social media. CVP has released the source code for the application on GitHub.