#FactCheck - An edited video of Bollywood actor Ranveer Singh criticizing PM Modi goes viral
Research Wing
Innovation and Research
PUBLISHED ON
Apr 27, 2024
Executive Summary:
An alleged video featuring Ranveer Singh criticizing Prime Minister Narendra Modi and his government is making the rounds on the internet. However, a close examination reveals that the video has been tampered with to change its audio. The original videos, posted by different media outlets, actually show Ranveer Singh praising Varanasi, professing his love for Lord Shiva, and acknowledging Modiji’s role in enhancing the cultural charm and infrastructural development of the city. The mismatched lip synchronization, and the fact that the original video contains no criticism of PM Modi, show that the clip has been manipulated in order to spread misinformation.
Claims:
A viral video shows Bollywood actor Ranveer Singh criticizing Prime Minister Narendra Modi.
Upon receiving the video, we divided it into keyframes and reverse-searched one of the images. This led us to another video of Ranveer Singh, identical in appearance, posted by an Instagram account named “The Indian Opinion News”. In that video, Ranveer Singh talks about his experience of visiting the Kashi Vishwanath Temple with Bollywood actress Kriti Sanon. When we watched the full video, we found no criticism of PM Modi.
Taking a cue from this, we ran keyword searches to find the full video of the interview. We found many videos uploaded by media outlets, but none of them shows him criticizing PM Modi as claimed in the viral clip.
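Reverse image search engines typically match keyframes using perceptual hashing, which maps visually similar images to nearby hash values. A minimal sketch of the difference-hash (dHash) idea in pure Python, operating on a toy grayscale pixel grid (the grids and values here are illustrative only; real dHash first downscales each frame, e.g. to 9×8 pixels):

```python
def dhash(pixels):
    """Difference hash: one bit per horizontal neighbour comparison.

    `pixels` is a row-major grid (list of lists) of grayscale values,
    assumed already downscaled to a small fixed size.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means near-duplicate frames."""
    return sum(x != y for x, y in zip(a, b))

# Two tiny illustrative "frames": the second is the first with slight noise.
frame_a = [[10, 20, 30], [40, 35, 50]]
frame_b = [[11, 21, 29], [41, 36, 52]]
frame_c = [[90, 10, 80], [5, 70, 15]]   # unrelated frame

print(hamming(dhash(frame_a), dhash(frame_b)))  # small distance: near-duplicate
print(hamming(dhash(frame_a), dhash(frame_c)))  # larger distance: different image
```

Because the hash encodes only relative brightness gradients, re-encoded or slightly edited copies of the same frame still land at a small Hamming distance, which is what lets a keyframe be traced back to the original footage.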
Ranveer Singh shared his feelings about Lord Shiva, his opinions on the city, the efforts undertaken by Prime Minister Modi to keep the history and heritage of Varanasi alive, and the city's ongoing development projects. The discrepancy in the viral clip is clear on close viewing: the lips are not synchronized with the audio we hear, whereas in the original video the lips are in perfect synchronization with the audio. The lack of evidence for the claim, together with these discrepancies, proves that the video was edited to misrepresent the original interview of Bollywood actor Ranveer Singh. Hence, the claim made is misleading and false.
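The lip-sync mismatch described above can, in principle, be quantified: extract a per-frame mouth-opening measure (e.g. from face landmarks) and the audio loudness envelope, then correlate the two series. A toy Python sketch with made-up signal values (the numbers are illustrative, not measurements from this video):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative per-frame signals (not real measurements):
mouth_open = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]          # from face landmarks
audio_env_original = [0.2, 0.9, 0.8, 0.1, 0.6, 0.2]  # tracks the lip motion
audio_env_swapped = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]   # replaced audio track

print(pearson(mouth_open, audio_env_original))  # high: lips match the audio
print(pearson(mouth_open, audio_env_swapped))   # low/negative: audio was swapped
```

A genuine clip yields a strong positive correlation between mouth movement and loudness, while a clip whose audio track was replaced shows little or no correlation, which matches the visible desynchronization in the viral video.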
Conclusion:
The video that claims to show Ranveer Singh criticizing PM Narendra Modi is not genuine. Our investigation shows that it was edited by replacing the audio. The original footage actually shows Singh speaking positively about Varanasi and Modi's work. The mismatched lip-syncing and the absence of any supporting evidence highlight the danger of misinformation created by simple editing. Ultimately, the claim made is false and misleading.
Claim: A viral video featuring Ranveer Singh criticizing Prime Minister Narendra Modi and his government.
Your iPhone isn’t just a device: it’s a central hub for almost everything in your life. From personal photos and videos to sensitive data, it holds it all. You rely on it for essential services, from personal to official communications, sharing of information, banking and financial transactions, and more. With so much critical information stored on your device, protecting it from cyber threats becomes essential. This is where the iOS Lockdown Mode feature comes in as a digital bouncer to keep cyber crooks at bay.
Apple introduced Lockdown Mode in 2022. It is an optional security feature available on iPhones, iPads, and Mac devices, designed as an extreme protection mechanism for the small segment of users who are at higher risk of being targeted by serious cyber threats and intrusions into their digital security. Journalists, activists, government officials, celebrities, cybersecurity professionals, law enforcement officers, and lawyers are some of the intended beneficiaries of the feature. The data on their devices can be highly confidential, and its leak or compromise can cause a lot of disruption. Given how prevalent cyber attacks are in this day and age, the need for such a feature cannot be overstated. It provides an additional firewall by limiting certain functions of the device, thereby reducing the chances of the user being targeted in a digital attack.
How to Enable Lockdown Mode on Your iPhone
On an iPhone running iOS 16 or later, go to Settings → Privacy & Security → Lockdown Mode. Tap Turn On Lockdown Mode, read the information about the features that will be unavailable on your device if you proceed, and if you are satisfied, scroll down and tap Turn On Lockdown Mode again. Your iPhone will restart with Lockdown Mode enabled.
Easy steps to enable lockdown mode are as follows:
Open the Settings app.
Tap Privacy & Security.
Scroll down, tap Lockdown Mode, then tap Turn On Lockdown Mode.
How Lockdown Mode Protects You
Lockdown Mode is a security feature that prevents certain apps and features from functioning normally when enabled. For example, your device will not automatically connect to unsecured Wi-Fi networks and will disconnect from an unsecured network when Lockdown Mode is activated. Many other features may be affected, because the system prioritises security above typical operational functions. Since Lockdown Mode restricts certain features and activities, you can exclude a particular app or website in Safari from its restrictions, but only exclude trusted apps or websites, and only if necessary.
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this end.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.
The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
In the hyper-connected era, something as mundane as charging your phone can become a gateway to cyberattacks. A recent experience of Assam Chief Minister Himanta Biswa Sarma has reignited fears of an emerging digital menace called juice jacking. Sarma, who was taking an Emirates flight from Delhi to Dubai, used an international charger and cable provided by another passenger on board. As he later reported on X (formerly Twitter), the passenger disembarked while he slept, so he could not return the borrowed items. Though most people admired the CM's humility and openness, cybersecurity experts and citizens were quick to point out a possible red flag: it could have been a juice-jacking attempt. Whether by design or not, the episode highlights the hidden risks of using unfamiliar charging equipment, particularly for those in sensitive roles.
What Is Juice Jacking?
Juice jacking takes advantage of the multi-purpose nature of USB connectors, which can carry both electrical energy and information. Attackers hack USB ports or cables to either:
Insert harmful payloads (malware, spyware, ransomware) during power transfer, or
Create unauthorised data pathways for silent information exfiltration.
Types of Juice Jacking Attacks
Data Theft (Exfiltration Attack): The USB cable or port is rigged to silently extract files, media, contacts, keystrokes, or login information from the attached phone.
Malware Injection (Payload Attack): The USB device is set to impersonate a Human Interface Device (HID), such as a keyboard. It sends pre-defined commands (shell scripts, command-line inputs) to the host, loading backdoors or spying tools.
Firmware Tampering: In more sophisticated cases, attackers implement persistent malware at the bootloader or firmware level, bypassing antivirus protection and living through factory resets.
Remote Command-and-Control Installation: Certain strains of malware initiate backdoors to enable remote access to the device over the internet upon reconnection to a live network.
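A common heuristic used by detectors of HID-impersonation (payload) attacks is keystroke timing: injected keystrokes arrive far faster and more uniformly than human typing. A simplified Python sketch of that idea (the 30 ms threshold and the sample bursts are illustrative assumptions, not taken from any specific product):

```python
def looks_like_injected_typing(timestamps_ms, max_median_gap_ms=30):
    """Flag a keystroke burst whose median inter-key gap is implausibly fast.

    `timestamps_ms` are arrival times of successive keystrokes. Scripted
    HID payloads often "type" with gaps of only a few milliseconds, while
    human typing gaps are usually well above 50 ms.
    """
    gaps = sorted(b - a for a, b in zip(timestamps_ms, timestamps_ms[1:]))
    if not gaps:
        return False
    median_gap = gaps[len(gaps) // 2]
    return median_gap < max_median_gap_ms

human_burst = [0, 120, 310, 420, 640, 800]       # typical human typing rhythm
payload_burst = [0, 5, 9, 14, 18, 23, 27, 31]    # scripted HID injection

print(looks_like_injected_typing(human_burst))   # not flagged
print(looks_like_injected_typing(payload_burst)) # flagged as injected
```

Endpoint tools built on this heuristic can pause or block input from a newly attached "keyboard" until the user confirms it, which defeats most scripted payload attacks.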
Why the Assam CM’s Incident Raised Flags
While CM Sarma's experience was a simple act of courtesy, the potential digital repercussions of such a scenario are immense:
High-value targets like government officials, diplomats, and corporate executives tend to have sensitive information.
A hacked cable can be used as a spy tool, sending information or providing remote access.
With the USB On-The-Go (OTG) feature in contemporary Android and iOS devices, an attacker can run autorun scripts and deploy payloads at device connect/disconnect.
If device encryption is poor or security settings are incorrectly configured, attackers may gain access to location, communication history, and app credentials.
Technical Juice Jacking Indicators
The following are indications that a device could have been attacked:
Unsolicited request for USB file access or data syncing on attaching.
The device is acting strangely, launching apps or entering commands without user control.
Installation of new apps without authorisation.
Data consumption increases even if no browsing is ongoing.
CyberPeace Tech-Policy Advisory: Preventing Juice Jacking
Hardware-Level Mitigation
Utilise USB Data Blockers: Commonly referred to as "USB condoms," such devices plug the data pins (D+ and D-), letting only power (Vcc and GND) pass through. This blocks all data communication over USB.
Charge-Only Cables: Make use of cables that physically do not have data lines. These are specifically meant to provide power only.
Carry a Power Bank: Use your own power source, if possible, for charging, particularly in airports, conferences, or flights.
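The data-blocker principle above can be pictured as a pass-through that connects only the power pins. A toy Python model (the pin names follow the standard USB 2.0 pinout; the code itself is purely illustrative):

```python
# Standard USB 2.0 pins: VBUS and GND carry power, D+ and D- carry data.
USB_PINS = {"VBUS": "power", "D+": "data", "D-": "data", "GND": "power"}

def data_blocker(pins):
    """Model of a USB data blocker: pass power pins through, drop data pins."""
    return {name: role for name, role in pins.items() if role == "power"}

passed = data_blocker(USB_PINS)
print(sorted(passed))  # only the power lines reach the charging device
```

With D+ and D- physically disconnected, no enumeration or data transfer can occur, so even a compromised charging port can only deliver current.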
Operating System (OS) Level Protections
iOS Devices:
Enable USB Restricted Mode:
This keeps USB accessories from connecting when your iPhone is locked.
Settings → Face ID & Passcode → USB Accessories → Off
Android Devices:
Disable USB Debugging:
Debugging makes device access available for development, but it can be taken advantage of. If USB Debugging is turned on, and someone connects your phone to a computer, they might be able to access your data, install apps, or even control your phone, especially if your phone is unlocked. Hence, it should be kept off.
Settings → Developer Options → USB Debugging → Off
Set USB Default to 'Charge Only'
Settings → Connected Devices → USB Preferences → Default USB Configuration → Charge Only
Behavioural Recommendations
Never take chargers or USB cables from strangers.
Don't use public USB charging points, particularly at airports or coffee shops.
Turn on full-disk encryption on your device. It is supported by most Android devices and all iOS devices.
Deploy endpoint security software that can identify rogue USB commands and report suspicious behaviour.
Physically inspect cables and ports; many attack cables are indistinguishable from legitimate ones (e.g., O.MG cables).
Conclusion
Juice jacking is no longer just a theoretical or obscure threat. In the age of highly mobile, USB-charged devices, physical-layer attacks are becoming increasingly common, and their targets are growing more strategic. The recent case involving the Assam Chief Minister was perhaps harmless, but it served to underscore a fundamental vulnerability in daily digital life. As mobile security becomes more relevant to individuals and organisations worldwide, awareness of hardware-based attacks like juice jacking is essential. Security should never be sacrificed for convenience, particularly when an entire digital identity might be at risk from a single USB cable.