Sunday, February 28, 2021

Which Face is Real? Using Frequency Analysis to Identify “Deep-Fake” Images


This technique exposes fake images created by computer algorithms rather than by humans.

They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine learning algorithms, and humans are largely unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence "Cyber Security in the Age of Large-Scale Adversaries" (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyze the objects in the frequency domain, an established signal processing technique.

Frequency analysis reveals typical artifacts in computer-generated images. Credit: © RUB, Marquard

The team presented their work at the International Conference on Machine Learning (ICML) on July 15, 2020, one of the leading conferences in the field of machine learning. Additionally, the researchers have made their code freely available online*, so that other groups can reproduce their results.

Interaction of two algorithms results in new images

Deep-fake images, a portmanteau combining "deep learning" and "fake", are generated with the help of computer models, so-called Generative Adversarial Networks, GANs for short. Two algorithms work together in these networks: the first algorithm creates random images based on certain input data. The second algorithm has to decide whether the image is a fake or not. If the image is found to be a fake, the second algorithm instructs the first algorithm to revise the image, until it no longer recognizes it as a fake.
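As a loose illustration of this adversarial feedback loop, the toy sketch below (not the authors' method; the 1-D setup and all names are invented for illustration) pits a one-parameter "generator" against a simple threshold "discriminator". The generator is nudged, step by step, until the discriminator can no longer separate its samples from the real ones.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # "real data" is drawn from N(4, 1)

def sample_real(n):
    return rng.normal(REAL_MEAN, 1.0, n)

def sample_fake(mu, n):
    # the first algorithm (generator): produces samples from N(mu, 1)
    return rng.normal(mu, 1.0, n)

def discriminator_threshold(real, fake):
    # the second algorithm (discriminator): separates the two
    # batches at the midpoint of their sample means
    return (real.mean() + fake.mean()) / 2.0

mu = 0.0  # generator parameter, initially far from the real distribution
for _ in range(200):
    real, fake = sample_real(256), sample_fake(mu, 256)
    thr = discriminator_threshold(real, fake)
    # feedback step: revise the generator toward the region
    # the discriminator still labels as "real"
    mu += 0.5 * (thr - mu)

print(round(mu, 2))  # ends up near 4.0: the two batches are no longer separable
```

A real GAN replaces both toy parts with neural networks trained on a minimax loss, but the back-and-forth structure is the same.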

Members of the Bochum-based research team include Thorsten Holz, Lea Schönherr, Joel Frank and Thorsten Eisenhofer (from left to right). Credit: RUB, Marquard

In recent years, this technique has made deep-fake images look increasingly authentic. On the website**, users can test whether they are able to distinguish fakes from original photos. "In the era of fake news, it can be a problem if users don't have the ability to distinguish computer-generated images from originals," says Professor Thorsten Holz from the Chair for Systems Security.

For their analysis, the Bochum-based researchers used the data sets that also form the basis of the above-mentioned page "Which face is real". In this interdisciplinary project, Joel Frank, Thorsten Eisenhofer and Professor Thorsten Holz from the Chair for Systems Security cooperated with Professor Asja Fischer from the Chair of Machine Learning, as well as Lea Schönherr and Professor Dorothea Kolossa from the Chair of Digital Signal Processing.

Frequency analysis reveals typical artifacts

To date, deep-fake images have been analyzed using complex statistical methods. The Bochum group chose a different approach by converting the images into the frequency domain using the discrete cosine transform. The generated image is thus expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions.
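A minimal sketch of that transform, using SciPy's `dctn` (the smooth gradient image is a hypothetical stand-in for a natural photo), shows how the energy of a natural-looking image concentrates in the low-frequency corner of the spectrum:

```python
import numpy as np
from scipy.fft import dctn

# smooth 64x64 gradient image: a stand-in for a "natural" low-frequency photo
x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x)

# 2-D DCT-II: expresses the image as a sum of cosine functions
coeffs = dctn(img, norm="ortho")
energy = coeffs ** 2

# upper-left 8x8 block of coefficients = the lowest spatial frequencies
low_fraction = energy[:8, :8].sum() / energy.sum()
print(round(low_fraction, 4))  # near 1.0: almost all energy is low-frequency
```

For a genuinely noisy or artifact-laden image, this fraction drops, which is what the next section exploits.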

Images of people transformed into the frequency domain: the upper left corner represents low-frequency image areas, the lower right corner represents high-frequency areas. On the left is the transformation of a photo of a real person: the frequency range is evenly distributed. The transformation of the computer-generated image (right) contains a characteristic grid structure in the high-frequency range, a typical artifact. Credit: © RUB, Lehrstuhl für Systemsicherheit

The analysis has shown that images generated by GANs exhibit artifacts in the high-frequency range. For example, a typical grid structure emerges in the frequency representation of fake images. "Our experiments showed that these artifacts do not only occur in GAN generated images. They are a structural problem of all deep learning algorithms," explains Joel Frank from the Chair for Systems Security. "We assume that the artifacts described in our study will always tell us whether the image is a deep-fake image created by machine learning," adds Frank. "Frequency analysis is therefore an effective way to automatically recognize computer-generated images."
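The idea behind such a detector can be sketched as follows. This is a simplified illustration, not the paper's classifier: the "fake" image here is simulated by adding an alternating checkerboard pattern, a crude proxy for the grid artifact the article describes, and the function name and band size are invented.

```python
import numpy as np
from scipy.fft import dctn

def high_freq_ratio(img, band=8):
    # share of DCT spectral energy in the lower-right (high-frequency) corner
    e = dctn(img, norm="ortho") ** 2
    return e[-band:, -band:].sum() / e.sum()

x = np.linspace(0.0, 1.0, 64)
# smooth, low-frequency image standing in for a real photo
natural = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))

# simulated GAN artifact: a faint alternating grid, the highest spatial frequency
checkerboard = np.outer(np.cos(np.pi * np.arange(64)),
                        np.cos(np.pi * np.arange(64)))
fake = natural + 0.05 * checkerboard

print(high_freq_ratio(natural), high_freq_ratio(fake))
```

The real image's ratio is negligible, while the grid pattern pushes the fake's ratio up by orders of magnitude, so a simple threshold on this statistic already separates the two.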


Reference: “Leveraging Frequency Analysis for Deep Fake Image Recognition” by Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa and Thorsten Holz, 2020, International Conference on Machine Learning (ICML).


* Code available on GitHub.

** Website:
