Deep Fakes – The real unreality threat


The current evolution of technology has increased the use of Artificial Intelligence tools for spreading information and messages to the mass public. Today's social media platforms provide a plethora of opportunities to propagate one's ideology and thoughts, and the filters on such content are minimal; the threat of false news and information is therefore ever present, and Artificial Intelligence tools have in turn been enlisted to combat it.

Video has also become a primary channel for state leaders to address the public. Since the COVID-19 pandemic, governments have moved towards increased use of screens and technology: during India's 2020 lockdown, Prime Minister Modi went live on television to inform citizens of the protocols and steps taken by the government to combat the novel coronavirus, and this proved effective and efficient in spreading information. However, it is difficult to know whether such a video is real or fake, and this is where deepfake technology comes into the picture. A deepfake is a product of an Artificial Intelligence tool that creates convincing image, audio and video hoaxes. In the wrong hands of anti-state organisations or individuals, this technology can cause catastrophic events and unrest within states. Its use has been seen across domains ranging from politics to IT security and has been gaining popularity in recent times. Because video is more difficult to alter than still images, the mechanism behind deepfakes relies on a complex algorithmic structure.

What is a Deep Fake?

Deepfakes work by pitting two algorithms against each other: a generator and a discriminator. The generator creates the phony multimedia content, and the discriminator determines whether the content is real or artificial. As the generator gets better at creating fake video clips, the discriminator gets better at spotting them; conversely, as the discriminator gets better at spotting fake videos, the generator gets better at creating them.
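The generator/discriminator game described above can be sketched in miniature. The toy below is a minimal generative adversarial setup on a 1-D "dataset" instead of video, purely to illustrate the adversarial loop: all the names, distributions, and learning rates are illustrative assumptions, not any real deepfake system.

```python
# Toy sketch of the generator-vs-discriminator game (a GAN) on 1-D data.
# Real samples come from N(4, 1.25); the generator learns an affine map of
# noise, and the discriminator is a logistic classifier. Gradients are
# derived by hand so the example stays self-contained.
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 4.0, 1.25

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: G(z) = a*z + b    Discriminator: D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    x = rng.normal(REAL_MU, REAL_SIGMA, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)              # generator noise
    g = a * z + b                                # fake samples

    # Discriminator update: maximise log D(x) + log(1 - D(G(z)))
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    grad_w = np.mean(-(1 - d_real) * x + d_fake * g)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: minimise -log D(G(z)) (non-saturating loss)
    d_fake = sigmoid(w * g + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the generator's output distribution should have drifted
# from its initial mean of 0 towards the real data's mean of 4.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

Real deepfake systems apply exactly this adversarial dynamic, but with deep convolutional networks over faces and frames rather than a two-parameter generator.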

Until recently, video content has been more difficult to alter in any substantial way. Because deepfakes are created through AI, however, they don't require the considerable skill that it would otherwise take to create a realistic video. Unfortunately, this means that just about anyone can create a deepfake to promote a chosen agenda. For example, a deepfake of a presidential candidate could be used to spread false information. Microsoft has worked on AI-powered deepfake detection software for this purpose: the tool automatically analyses videos and photos to provide a confidence score that the media has been manipulated and therefore has the potential to distort reality, deceive viewers and influence voters.

The Real Havoc

During the recent Russian invasion of Ukraine, President Zelensky has been seen sending various messages to world leaders and the people of Ukraine; however, some of these videos are believed to be deepfakes, created by anti-state actors to cause further havoc and unrest during the invasion.

Detection gets harder as the technology improves. In 2018, US researchers discovered that deepfake faces don't blink normally. No surprise there: the majority of training images show people with their eyes open, so the algorithms never really learn about blinking. At first, this seemed like a silver bullet for the detection problem, but no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.

Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces, and fine details such as hair are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewelry and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris. Governments, universities and tech firms are all funding research to detect deepfakes.
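The 2018 blink-rate finding can be turned into a simple heuristic sketch. In practice, the per-frame eye-aspect-ratio (EAR) values would come from a facial-landmark library (e.g. dlib); here they are synthetic, and the threshold and blink-rate figures are illustrative assumptions rather than validated forensic parameters.

```python
# Toy sketch of the blink-rate heuristic: real footage shows the eye
# aspect ratio (EAR) periodically dipping as the subject blinks, while
# early deepfakes rarely blinked at all.

def count_blinks(ear_series, threshold=0.2):
    """Count blinks as transitions from open (EAR >= threshold) to closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=4):
    """Flag footage whose blink rate is implausibly low for a human."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute

# Synthetic demo: 10 seconds of "real" footage blinking every 3 seconds,
# versus "fake" footage in which the eyes never close.
real = [0.1 if i % 90 < 5 else 0.3 for i in range(300)]
fake = [0.3] * 300
```

As the article notes, this particular weakness was patched almost immediately, which is why such heuristics are only one signal among many in modern detectors.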

What is the Solution?

Ironically, AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage. Tech firms are now working on detection systems that aim to flag up fakes whenever they appear. Another strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain online ledger system could hold a tamper-proof record of videos, pictures and audio so their origins and any manipulations can always be checked.
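The provenance idea can be illustrated with a minimal hash-chained ledger. This is an assumption-level sketch of the general tamper-evidence mechanism, not any specific blockchain product: each entry commits to the media's SHA-256 digest and to the previous entry, so altering either the media or an earlier record breaks verification.

```python
# Minimal sketch of a hash-chained provenance ledger for media files.
# Each entry commits to the media's SHA-256 digest and the previous
# entry's hash, so later tampering with the media or the ledger shows up.
import hashlib
import json

def media_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, media_bytes: bytes) -> str:
        """Append a record for this media; return its entry hash (receipt)."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"media": media_digest(media_bytes), "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self, media_bytes: bytes, entry_hash: str) -> bool:
        """Check the chain is intact and this exact media was recorded."""
        prev = "0" * 64
        for entry in self.entries:
            body = {"media": entry["media"], "prev": entry["prev"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False   # chain has been tampered with
            prev = entry["hash"]
        return any(e["hash"] == entry_hash and
                   e["media"] == media_digest(media_bytes)
                   for e in self.entries)

# Demo: publish a video's fingerprint, then check it later.
ledger = ProvenanceLedger()
receipt = ledger.record(b"original-video-bytes")
```

A real deployment would replicate this ledger across many parties (the "blockchain" part) so no single party can quietly rewrite it; the hash chain above only captures the tamper-evidence half of the idea.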

Author – Mr. Abhishek Singh, Research Associate – Policy and Advocacy, CyberPeace Foundation
