# FactCheck: AI-Generated Video Falsely Shows Car Stuck on Delhi–Jaipur Highway Signboard
Executive Summary
A shocking video showing a car hanging from a highway signboard is going viral on social media. The clip allegedly shows a black Mahindra Thar stuck on an overhead direction signboard on the Delhi–Jaipur Highway (NH-48), and social media users are widely sharing it as footage of a real road accident. However, research by CyberPeace found the viral claim to be false: the circulating video is not real but AI-generated.
Claim
Social media users are sharing the clip as footage of an actual road accident. A viral post on X (formerly Twitter) claims that the incident took place on the Delhi–Jaipur Highway, showing a black Mahindra & Mahindra Thar lodged in a highway signboard.
- https://x.com/SenBaijnath/status/2024098520006029504
- https://archive.ph/cmr5e

Fact Check
On close examination of the viral video, several inconsistencies commonly associated with AI-generated content were observed. For instance, it is highly improbable that a heavy vehicle would become lodged precisely at the centre of a signboard at such a height. Despite the scale of the alleged incident, traffic on the highway below continues moving normally without any disruption. Additionally, the text visible on the right side of the signboard appears distorted and oddly rendered. To further verify the authenticity of the video, we analysed it using the AI detection tool Hive Moderation, which indicated a 99.9% probability that the video was AI-generated.

Another AI image detection tool, WasitAI, also found that the visuals in the viral clip were largely AI-generated.
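The workflow of weighing scores from multiple detection tools can be sketched as below. This is an illustrative assumption of how such tools might be combined, not the actual APIs of Hive Moderation or WasitAI; the score format, function names, and thresholds are hypothetical.

```python
# Hypothetical sketch: combining per-tool probabilities that a clip is
# AI-generated into a single verdict. Tool names, the score mapping, and
# the 0.9 threshold are illustrative assumptions, not any tool's real API.

def verdict_from_scores(scores, threshold=0.9):
    """Return a verdict from a mapping of tool name -> probability in [0, 1].

    A single tool reporting at or above `threshold` is treated as a strong
    signal of AI generation; unanimous low scores suggest authenticity;
    anything in between goes to human review.
    """
    if not scores:
        return "inconclusive"
    if any(p >= threshold for p in scores.values()):
        return "likely AI-generated"
    if all(p <= 1.0 - threshold for p in scores.values()):
        return "likely authentic"
    return "needs human review"

# For the viral Thar clip, Hive Moderation reported a 99.9% probability.
print(verdict_from_scores({"hive": 0.999, "wasitai": 0.95}))
# -> likely AI-generated
```

In practice fact-checkers treat such scores as one signal among several, alongside the visual inconsistencies noted above, rather than as conclusive proof on their own.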

Conclusion
Based on our research and available evidence, it is clear that the viral video showing a Mahindra Thar hanging from a highway signboard is not real but AI-generated.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this purpose.
What is AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
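One common building block of the moderation systems described above is perceptual-hash matching: once a piece of manipulated content is confirmed and taken down, platforms can block near-identical re-uploads by comparing hashes of new uploads against a blocklist. The sketch below is illustrative only, not any platform's actual pipeline; the 64-bit hash values and the distance threshold are assumptions for demonstration.

```python
# Illustrative sketch (assumed design, not a real platform's system):
# blocking re-uploads of known deepfake content via perceptual hashes.
# Perceptual hashes of visually similar frames differ in only a few bits,
# so near-matches, not just exact matches, should be caught.

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def matches_blocklist(upload_hash: int, blocklist, max_distance: int = 6) -> bool:
    """Flag an upload whose hash is within `max_distance` bits of any
    known deepfake hash, catching re-encoded or lightly edited copies."""
    return any(hamming_distance(upload_hash, known) <= max_distance
               for known in blocklist)

known_deepfakes = {0xA5A5A5A5A5A5A5A5}   # hypothetical hash of a confirmed fake
print(matches_blocklist(0xA5A5A5A5A5A5A5A5, known_deepfakes))  # exact re-upload: True
print(matches_blocklist(0xA5A5A5A5A5A5A5A2, known_deepfakes))  # 3 bits flipped: True
```

The design choice here is the distance threshold: too low and re-encoded copies slip through, too high and unrelated content is wrongly flagged, which is why such systems are typically paired with human review.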
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers, and the extortion demands made of them, is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
In the dynamic intersection of pop culture and technology, an unexpected drama unfolded in the virtual world when searches for the iconic Taylor Swift were temporarily blocked on X. The incident sent a shockwave through the online community, sparking debates and speculation about the misuse of deepfake technology.
Taylor Swift searches on the social media platform X have since been restored, the blockage having been lifted following outrage over explicit AI-generated images of her. The site, formerly known as Twitter, had restricted searches for Taylor Swift as a temporary measure to address a flood of AI-generated deepfake images that went viral across X and other platforms.
X has mentioned it is actively removing the images and taking appropriate actions against the accounts responsible for spreading them. While Swift has not spoken publicly about the fake images, a report stated that her team is "considering legal action" against the site which published the AI-generated images.
The Social Media Frenzy
As news of the temporary blockage spread like wildfire across social media platforms, users reacted in a frenzy. One fake picture was re-shared 24,000 times and liked by tens of thousands of users. This engagement supercharged the deepfake image of Taylor Swift, and by the time moderators intervened it was too late: hundreds of accounts had begun reposting it, turning it into an online trend and pushing the AI-generated imagery to an even larger audience. The original source of the photograph was never established. The revelations caused outrage, with American lawmakers from across party lines voicing shock and astonishment.
AI Deepfake Controversy
The deepfake controversy is not new. In a string of cases, Rashmika Mandanna, Sachin Tendulkar, and now Taylor Swift have been victims of the misuse of deepfake technology. The world is grappling with this misuse of AI, and with no proactive measures in place the threat will only worsen, deepening privacy concerns for individuals. The incident has opened a debate among users and industry experts on the ethical use of AI in the digital age and its privacy implications.
Why has the incident raised privacy concerns?
The emergence of Taylor Swift's deepfake has raised privacy concerns for several reasons.
- Misuse of Personal Imagery: Deepfakes use AI algorithms to superimpose one person’s face onto another person’s body, iterating until the desired result is obtained. For celebrities and other prominent figures, it is easy for crooks to obtain images and generate a deepfake; in Taylor Swift’s case, her images were misused. Such misuse of images can have serious consequences for an individual’s reputation and privacy.
- False Narrative and Manipulation: Deepfakes open the door to public backlash and the spread of false narratives, which can influence public opinion and damage personal and professional reputations, making it challenging for the person to regain control.
- Invasion of Privacy: Creating a deepfake involves gathering a significant amount of information about the target without their consent. Using such personal information to create AI-generated content without permission raises serious privacy concerns.
- Difficulty in differentiation: Advanced Deepfake technology makes it difficult for people to differentiate between genuine and manipulated content.
- Potential for Exploitation: Deepfakes can be exploited for financial gain or other malicious motives of cyber crooks. Such videos harm reputations, damage brand names and partnerships, and undermine the integrity of the platform on which the content is posted. They also raise questions about whether platforms are genuinely enforcing their zero-tolerance policies against posting non-consensual nude images.
Is there any law that could safeguard Internet users?
Legislation concerning deepfakes differs by nation and often ranges from requiring disclosure of deepfakes to forbidding harmful or destructive material. In the USA, at least ten states, including California, Texas, and Illinois, have passed criminal legislation prohibiting deepfakes, and lawmakers are advocating for comparable federal statutes. A Democrat from New York has presented legislation requiring producers to digitally watermark deepfake content. At the federal level, the United States does not yet criminalise such deepfakes, though existing state and federal laws address privacy, fraud, and harassment.
In 2019, China enacted legislation requiring the disclosure of deepfake usage in films and media. Sharing deepfake pornography was outlawed in the United Kingdom in 2023 as part of the Online Safety Act.
To avoid abuse, South Korea implemented legislation in 2020 criminalising the dissemination of deepfakes that endanger the public interest, carrying penalties of up to five years in jail or fines of up to 50 million won ($43,000).
In 2023, the Indian government issued an advisory to social media and internet companies to protect against deepfakes that violate India's information technology laws. India is also on its way toward dedicated legislation on the subject.
Looking at the present situation and considering the bigger picture, the world urgently needs strong legislation to combat the misuse of deepfake technology.
Lessons Learned
The recent blockage of Taylor Swift's searches on Elon Musk's X has sparked debates on responsible technology use, privacy protection, and the symbiotic relationship between celebrities and the digital era. The incident highlights the importance of constant attention, ethical concerns, and the potential dangers of AI in the digital landscape. Despite challenges, the digital world offers opportunities for growth and learning.
Conclusion
Such deepfake incidents highlight privacy concerns and necessitate a combination of technological solutions, legal frameworks, and public awareness to safeguard privacy and dignity in the digital world as technology becomes more complex.
References:
- https://www.hindustantimes.com/world-news/us-news/taylor-swift-searches-restored-on-elon-musks-x-after-brief-blockage-over-ai-deepfakes-101706630104607.html
- https://readwrite.com/x-blocks-taylor-swift-searches-as-explicit-deepfakes-of-singer-go-viral/
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public figures for deceitful ends. A deepfake video of cricket star Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sporting icon to issue a warning against the widespread misuse of technology.
Scenario of Deepfake
Sachin Tendulkar appeared in the deepfake video endorsing a gaming app called Skyward Aviator Quest. The video's startling realism led some viewers to assume that the cricket legend truly supports the app. Tendulkar, however, has taken to social media to emphasise that these videos are fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his worry about the exploitation of technology and advised people to report videos, advertisements, and applications that spread disinformation. The episode underlines the importance of raising awareness and remaining vigilant about the legitimacy of material circulated on social media platforms.
The Warning Signs
The deepfake video raises questions not just for its lifelike representation of Tendulkar, but also for the material it advocates. Endorsing gaming software that purports to help individuals make money is a significant red flag, especially when such endorsements come from well-known figures. This underscores the possibility of deepfakes being utilised for financial benefit, as well as the significance of examining information that appears to be too good to be true.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to be aware of potential signals of manipulation. Here are some pointers to help you spot deepfake videos:
- Facial Movements and Expressions: Look for artificial facial movements and expressions, as well as lip-sync issues.
- Body Motions and Posture: Take note of any awkward body motions or inconsistencies in the individual's posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and lip movements.
- Background and Content: Consider the video's context, especially if it shows a popular figure endorsing something in an unexpected way.
- Official Verification: Verify the legitimacy of the video through the official channels or accounts of the prominent person.
Conclusion
The popularity of deepfake videos endangers the legitimacy of social media material. Sachin Tendulkar's response to the deepfake in which he appears serves as a warning to consumers to remain careful and report questionable material. As technology advances, it is critical that individuals and authorities collaborate to counteract the exploitation of AI-generated material and safeguard the integrity of online information.
References:
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html