Featured #factCheck Blogs

Executive Summary
Amid the ongoing tensions involving the United States, Israel, and Iran, a video of a cargo ship engulfed in flames is being widely shared across social media platforms. The clip shows a vessel burning intensely at sea, with users claiming that Iran targeted the ship with a drone for attempting to cross the Strait of Hormuz without permission. Some users have also claimed that the destroyed vessel was a Pakistani-flagged oil tanker hit by Iranian missiles. However, research by CyberPeace found the claim to be false. Our verification also reveals that the viral video is being misrepresented.
Claim
Social media users, including an X (formerly Twitter) account named “IranDefenceForce,” shared the video claiming that Iran targeted an oil tanker in the Strait of Hormuz for allegedly violating restrictions.

Fact Check
A keyword-based news search led us to multiple credible reports mentioning a statement by Iran’s Foreign Minister Abbas Araghchi. According to reports, Iran had allowed ships from “friendly countries” including India, China, Russia, Iraq, and Pakistan to pass through the Strait of Hormuz.

A March 26, 2026 report by The Hindu stated that Araghchi also emphasized Iran’s assertion of sovereignty over the strategic waterway connecting the Persian Gulf and the Gulf of Oman. The same statement was also shared via the official X handle of the Iranian Consulate in Mumbai. During a frame-by-frame analysis of the viral video, we noticed the word “SAFEEN” written on a part of the ship. Using this clue, we conducted a targeted news search and found a report by Reuters dated March 4, 2026.

According to the report, a Malta-flagged container ship named Safeen Prestige was damaged in an attack while heading toward the Strait of Hormuz. Shipping sources cited in the report stated that the vessel was struck around 1109 GMT while sailing eastward, approximately two nautical miles north of Oman. The ship had reportedly departed from Sharjah Port in the United Arab Emirates but was damaged before reaching its destination. Its last known location was in the Persian Gulf. Additionally, earlier this month, another cargo vessel named Mayuri Naree was also attacked near Iran’s Qeshm Island. As per Reuters, an explosion caused a fire in the engine room, after which 20 crew members were rescued by the Omani navy, while three remained missing.
Conclusion
The viral video does not show Iran targeting a Pakistani oil tanker for violating restrictions in the Strait of Hormuz. In reality, the clip features the Malta-flagged container ship Safeen Prestige, which was damaged in an unidentified attack in the Persian Gulf. The claim being circulated on social media is misleading.

Executive Summary
A video is being widely shared on social media showing a police officer driving an e-rickshaw, while two other policemen are seen in the back seat. Users sharing the clip claim that, due to a shortage of petrol, this is a new initiative by the Uttar Pradesh Police. However, research by CyberPeace found the viral claim to be false. Our research also confirms that the video is not real but AI-generated.
Claim
An Instagram user shared the viral video claiming that due to fuel shortages, Uttar Pradesh Police has started patrolling using e-rickshaws.
- Post link: https://www.instagram.com/reel/DWepKWXAeiE/
- Archive: https://archive.ph/QBNXs

Fact Check
To verify the claim, we first conducted a keyword search on Google but found no credible media reports supporting this claim.

Next, we extracted keyframes from the viral video and performed a reverse image search using Google Lens. During this process, we found the same video uploaded on an Instagram channel on March 28, 2026. The uploader clearly mentioned that the video was created purely for entertainment purposes.
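For readers who want to replicate this step, the sketch below shows one way to sample frames from a clip before running them through a reverse image search. It is a minimal illustration, not CyberPeace's actual workflow; it assumes the OpenCV library (`cv2`) is installed, and the file name and two-second sampling interval are placeholders.

```python
# Minimal sketch: sample frames from a video at a fixed interval so they
# can be fed to a reverse image search. Assumes OpenCV is installed
# (pip install opencv-python); file name and interval are illustrative.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                             # end of stream or read error
            break
        if index % step == 0:
            out_name = f"frame_{index:06d}.jpg"
            cv2.imwrite(out_name, frame)       # write the sampled frame to disk
            saved.append(out_name)
        index += 1
    cap.release()
    return saved

# The saved JPEGs can then be uploaded to Google Lens or a similar service.
print(extract_keyframes("viral_clip.mp4"))
```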

We further analyzed the video using AI detection tools. When scanned with Hive Moderation, the results indicated that the video is approximately 94% AI-generated.

In the next step, we also tested the clip using DeepAI. According to its analysis, the video is about 97% AI-generated.

Conclusion
Our research clearly shows that the viral video is not authentic. It is an AI-generated clip created for entertainment purposes, and the claim that Uttar Pradesh Police has started e-rickshaw patrolling due to petrol shortage is false.

A news graphic bearing the Navbharat Times logo is being widely circulated on social media. The graphic claims that religious preacher Devkinandan Thakur made an extremely offensive and casteist remark targeting the ‘Shudra’ community, and users are sharing it as a statement he actually made. Cyber Peace Foundation’s research and verification found the claim to be misleading: the viral news graphic is completely fake, and Devkinandan Thakur made no such casteist statement.
Claim
A viral news graphic claims that Devkinandan Thakur made a derogatory and caste-based statement about Shudras. On 17 January 2026, an Instagram user shared the viral graphic with the caption, “This is probably the formula of Ram Rajya.” The text on the graphic reads: “People of Shudra castes reproduce through sexual intercourse, whereas Brahmins give birth to children after marriage through the power of their mantras, without intercourse.” The graphic also carries Devkinandan Thakur’s photograph and identifies him as a ‘Kathavachak’ (religious storyteller).

Fact Check:
To verify the claim, we first searched for relevant keywords on Google. However, no credible or verified media reports were found supporting the claim. In the next stage of verification, we found a post published by NBT Hindi News (Navbharat Times) on X (formerly Twitter) on 17 January 2026, in which the organisation explicitly debunked the viral graphic. Navbharat Times clarified that the graphic circulating online was fake and also shared the original and authentic post related to the news.

Further research led us to Devkinandan Thakur’s official Facebook account, where he posted a clarification on 17 January 2026. In his post, he stated that anti-social elements are creating fake ‘Sanatani’ profiles and spreading false news, misusing the names of reputed media houses and platforms to mislead and divide people. He described the viral content as part of a deliberate conspiracy and fake agenda aimed at weakening unity. He also warned that AI-generated fake videos and fabricated statements are increasingly being used to create confusion, mistrust and division.
Devkinandan Thakur urged people not to believe or share any post, news or video without verification, and advised checking information through official websites, verified social media accounts or trusted sources.

Conclusion
The viral news graphic attributing a casteist statement to Devkinandan Thakur is completely fake. Devkinandan Thakur did not make the alleged remark, and the graphic circulating with the Navbharat Times logo is fabricated.

An image showing a damaged statue of Mahatma Gandhi, broken into two pieces, is being widely shared on social media. The image shows Gandhi’s statue with its head separated from the body, prompting strong reactions online.
Social media users are claiming that the incident occurred in Bangladesh, alleging that Mahatma Gandhi’s statue was deliberately vandalised there. The image is being described as a recent incident and is being circulated across platforms with provocative and inflammatory captions.
Cyber Peace Foundation’s research and verification found that the claim being shared online is misleading. Our research revealed that the viral image is not from Bangladesh; it is actually from Chakulia in Uttar Dinajpur district of West Bengal, India.
Claim:
Social media users claim that Mahatma Gandhi’s statue was vandalised in Bangladesh, and that the viral image shows a recent incident from the country. One Facebook user shared the video on 19 January 2026, making derogatory remarks and falsely linking the incident to Bangladesh. The post has since been widely shared on social media platforms. (Archived links and screenshots are available.)

Fact Check:
To verify the claim, we conducted a reverse image search using Google Lens on key frames from the viral video. This led us to a report published by ABP Live Bangla on 16 January 2026, which featured the same visuals. (Link and screenshot available.)

According to ABP Live Bangla, the statue of Mahatma Gandhi was damaged during a protest in Chakulia. The statue’s head was found separated from the body. While a portion of the broken statue remained at the site on Thursday night, it was reported missing by Friday morning. The report further stated that extensive damage was observed at BDO Office No. 2 in Golpokhar. Gandhi’s statue, located at the entrance of the administrative building, was found broken, and ashes were discovered near the premises. Government staff were seen clearing scattered debris from the site.
The incident reportedly occurred during a SIR (Special Intensive Revision) hearing at the BDO office, which was disrupted due to vandalism. In connection with the violence and damage to government property, 21 people have been arrested so far. In the next stage of verification, we found the same footage in a 16 January 2026 report by local Bengali news channel K TV, which also showed clear visuals of the damaged Mahatma Gandhi statue. (Link and screenshot available.)

Conclusion:
The viral image of Mahatma Gandhi’s broken statue does not depict an incident from Bangladesh. The image is from Chakulia in West Bengal’s Uttar Dinajpur district, where the statue was damaged during a protest.

A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
Cyber Peace Foundation’s research found the viral claim to be false. Our research revealed that the image of Abhishek Bachchan and Aishwarya Rai is not real, but AI-generated, and is being misleadingly shared as a genuine photograph.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated on social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, we noticed several visual inconsistencies suggesting it had been artificially generated. To confirm this, we scanned the image using the AI detection tool Sightengine. According to the tool’s analysis, the image was found to be 84 percent AI-generated.
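As an illustration of how such a scan can be scripted, the sketch below queries Sightengine's image-check endpoint with its AI-generated-image model. The endpoint, the `genai` model name, and the response field are taken from Sightengine's public documentation and should be verified against the current docs; the credentials and file name are placeholders.

```python
# Minimal sketch of automating an AI-image check via Sightengine. Endpoint,
# model name, and response field should be verified against current docs;
# api_user/api_secret and the file name are placeholders.
import requests

def ai_generated_score(image_path: str, api_user: str, api_secret: str) -> float:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={"models": "genai", "api_user": api_user, "api_secret": api_secret},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()
    # The 'genai' model reports a 0-1 likelihood that the image is AI-generated.
    return result.get("type", {}).get("ai_generated", 0.0)

# A returned score of 0.84 corresponds to the "84 percent AI-generated"
# reading described above.
```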

Additionally, we scanned the same image using another AI detection tool, Hive Moderation. The results showed an even stronger indication, classifying the image as 99 percent AI-generated.

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.

A video circulating widely on social media shows a child throwing stones at a moving train, while a few other children can also be seen climbing onto the engine. The video is being shared with a communal narrative, with claims that the incident took place in India.
Cyber Peace Foundation’s research found the viral claim to be misleading. Our research revealed that the video is not from India, but from Bangladesh, and is being falsely linked to India on social media.
Claim:
On January 15, 2026, a Facebook user shared the viral video claiming it depicted an incident from India. The post carried a provocative caption stating, “We are not afraid of Pakistan outside our borders. We are afraid of the thousands of mini-Pakistans within India.” The post has been widely circulated, amplifying communal sentiments.

Fact Check:
To verify the authenticity of the video, we conducted a reverse image search using Google Lens by extracting keyframes from the viral clip. During this process, we found the same video uploaded on a Bangladeshi Facebook account named AL Amin Babukhali on December 28, 2025. The caption of the original post mentions Kamalapur, which is a well-known railway station in Bangladesh. This strongly indicates that the incident did not occur in India.

Further analysis of the video shows that the train engine carries the marking “BR”, along with text written in the Bengali language. “BR” stands for Bangladesh Railways, confirming the origin of the train. To corroborate this further, we searched for images related to Bangladesh Railways using Google’s open tools. We found multiple images on Getty Images showing train engines with the same design and markings as seen in the viral video. The visual match clearly establishes that the train belongs to Bangladesh Railways.

Conclusion
Our research confirms that the viral video is from Bangladesh, not India. It is being shared on social media with a false and misleading claim to give it a communal angle and link it to India.

A video is being widely shared on social media showing devotees seated in a boat appearing stunned as a massive, multi-hooded snake—resembling the mythical Sheshnag—suddenly emerges from the middle of a water body.
The video captures visible panic and astonishment among the devotees. Social media users are sharing the clip claiming that it is from Vrindavan, with some portraying the sight as a divine or supernatural event. However, research conducted by the Cyber Peace Foundation found the viral claim to be false. Our research revealed that the video is not authentic and has been generated using artificial intelligence (AI).
Claim
On January 17, 2026, a user shared the viral video on Instagram with the caption suggesting that God had appeared again in the age of Kalyug. The post claims that a terrifying video from Vrindavan has surfaced in which devotees sitting in a boat were shocked to see a massive multi-hooded snake emerge from the water. The caption further states that devotees are hailing the creature as an incarnation of Sheshnag or Vasuki Nag, raising religious slogans and questioning whether the sight represents a divine sign. (The link to the post, its archive link, and screenshots are available.)
- https://www.instagram.com/reel/DTngN9FkoX0/?igsh=MTZvdTN1enI2NnFydA%3D%3D
- https://archive.ph/UuAqB
Fact Check:
Upon closely examining the viral video, we suspected that it might be AI-generated. To verify this, the video was scanned using the AI detection tool Sightengine, which indicated that the visual is 99 per cent AI-generated.

In the next step of the research, the video was analysed using another AI detection tool, Hive Moderation. According to the results obtained, the video was found to be 62 per cent AI-generated.

Conclusion
Our research clearly establishes that the viral video claiming to show a multi-hooded snake in Vrindavan is not real. The clip has been created using artificial intelligence and is being falsely shared on social media with religious and sensational claims.

Assembly elections are due to be held in Assam later this year, with polling likely in April or May. Ahead of the elections, a video claiming to be an Aaj Tak news bulletin is being widely circulated on social media.
In the viral video, Aaj Tak anchor Rajiv Dhoundiyal is allegedly seen stating that a leaked intelligence report has issued a warning for the ruling Bharatiya Janata Party (BJP) in Assam. The clip claims that according to this purported report, the BJP may suffer significant losses in the upcoming Assembly elections. Several social media users sharing the video have also claimed that the alleged intelligence report signals the possible removal of Assam Chief Minister Himanta Biswa Sarma from office.
However, an investigation by the Cyber Peace Foundation found the viral claim to be false. Our probe clearly established that no leaked intelligence report related to the Assam Assembly elections exists.
Further, Aaj Tak has neither published nor broadcast any such report on its official television channel, website, or social media platforms. The investigation also revealed that the viral video itself is not authentic and has been created using deepfake technology.
Claim
On social media platform Facebook, a user shared the viral video claiming that the BJP has been pushed on the back foot following organisational changes in the Congress—appointing Priyanka Gandhi Vadra as chairperson of the election screening committee and Gaurav Gogoi as the Assam Pradesh Congress Committee president. The post further claims that an Intelligence Bureau report predicts that the current Assam government will not return to power.
(Link to the post, archive link, and screenshots are available.)
Fact Check:
To verify the claim, we first searched for reports related to any alleged leaked intelligence assessment concerning the Assam Assembly elections using relevant keywords. However, no credible or reliable reports supporting the claim were found. We then reviewed Aaj Tak’s official website, social media pages, and YouTube channel. Our examination confirmed that no such news bulletin has been published or broadcast by the network on any of its official platforms.
- https://www.facebook.com/aajtak/?locale=hi_IN
- https://www.instagram.com/aajtak/
- https://x.com/aajtak
- https://www.youtube.com/channel/UCt4t-jeY85JegMlZ-E5UWtA
To further verify the authenticity of the video, its audio was scanned using the deepfake voice detection tool HIVE Moderation.
The analysis revealed that the voice heard in the video is 99 per cent AI-generated, clearly indicating that the audio is not genuine and has been artificially created using artificial intelligence.

Additionally, the video was analysed using another AI detection tool, Aurigin AI, which also identified the viral clip as AI-generated.

Conclusion:
The investigation clearly establishes that there is no leaked intelligence report predicting BJP’s defeat in the Assam Assembly elections. Aaj Tak has not published or broadcast any such content on its official platforms. The video circulating on social media is not authentic and has been created using deepfake technology to mislead viewers.

A photograph showing a massive crowd on a road is being widely shared on social media. The image is being circulated with the claim that people in the United States are staging large-scale protests against President Donald Trump.
However, CyberPeace Foundation’s research has found this claim to be misleading. Our fact-check reveals that the viral photograph is nearly eight years old and has been falsely linked to recent political developments.
Claim:
Social media users are sharing a photograph and claiming that it shows people protesting against US President Donald Trump. An X (formerly Twitter) user, Salman Khan Gauri (@khansalman88177), shared the image with the caption: “Today, a massive protest is taking place in America against Donald Trump.”
The post can be viewed here, and its archived version is available here.

Fact Check:
To verify the claim, we conducted a reverse image search of the viral photograph using Google. This led us to a report published by The Mercury News on April 6, 2018.
The report features the same image and states that the photograph was taken on March 24, 2018, during the ‘March for Our Lives’ rally in Washington, DC. The rally was organized to demand stricter gun control laws in the United States. The image shows a large crowd gathered on Pennsylvania Avenue in support of gun reform.
The report further notes that the Associated Press, on March 30, 2018, debunked false claims circulating online which alleged that liberal billionaire George Soros and his organizations had paid protesters $300 each to participate in the rally.

Further research led us to a report published by The Hindu on March 25, 2018, which also carries the same photograph. According to the report, thousands of Americans across the country participated in ‘March for Our Lives’ rallies following a mass shooting at a school in Florida. The protests were led by survivors and victims, demanding stronger gun laws.
The objective of these demonstrations was to break the legislative deadlock that has long hindered efforts to tighten firearm regulations in a country frequently rocked by mass shootings in schools and colleges.

Conclusion
The viral photograph is nearly eight years old and is unrelated to any recent protests against President Donald Trump. The image actually depicts a gun control protest held in 2018 and is being falsely shared with a misleading political claim. By circulating this outdated image with an incorrect context, social media users are spreading misinformation.

Social media users are widely sharing a video claiming to show an aircraft carrier being destroyed after getting trapped in a massive sea storm. In the viral clip, the aircraft carrier can be seen breaking apart amid violent waves, with users describing the visuals as a “wrath of nature.”
However, CyberPeace Foundation’s research has found this claim to be false. Our fact-check confirms that the viral video does not depict a real incident and has instead been created using Artificial Intelligence (AI).
Claim:
An X (formerly Twitter) user shared the viral video with the caption, “Nature’s wrath captured on camera.” The video shows an aircraft carrier appearing to be devastated by a powerful ocean storm. The post can be viewed here, and its archived version is available here.
https://x.com/Maailah1712/status/2011672435255624090

Fact Check:
At first glance, the visuals shown in the viral video appear highly unrealistic and cinematic, raising suspicion about their authenticity. The exaggerated motion of waves, structural damage to the vessel, and overall animation-like quality suggest that the video may have been digitally generated. To verify this, we analyzed the video using AI detection tools.
The analysis conducted by Hive Moderation, a widely used AI content detection platform, indicates that the video is highly likely to be AI-generated. According to Hive’s assessment, there is nearly a 90 percent probability that the visual content in the video was created using AI.

Conclusion
The viral video claiming to show an aircraft carrier being destroyed in a sea storm is not related to any real incident. It is a computer-generated, AI-created video that is being falsely shared online as a real natural disaster. By circulating such fabricated visuals without verification, social media users are contributing to the spread of misinformation.

A video is being shared on social media, falsely attributing it to Australian Prime Minister Anthony Albanese. The video claims that following the Bondi Beach attack, he decided to cancel the visas of Pakistani citizens.
An investigation by the Cyber Peace Foundation revealed that the viral video was created using AI. In the original video, Anthony Albanese was answering questions related to the Climate Change Bill during a press conference. It is important to note that in the attack that took place last Sunday (14 December) at Bondi Beach in Sydney, New South Wales, Australia, 15 people were killed. According to Australian police, the attack targeted the Jewish community. New South Wales Police Commissioner Mal Lanyon stated that the two accused involved in the attack were father and son—one aged 50 and the other 24. Media reports identified them as Sajid and Naved Akram.
Claim:
On 14 December 2025, a user on the social media platform X shared a video claiming, “After the attack by a Pakistani Islamic terrorist, the Australian Prime Minister has decided to cancel the visas of all Pakistanis. The whole world is troubled by this community, and in India it is said that Abdul cannot buy a house in a Hindu neighbourhood.”
The link to the related post, its archived version, and screenshots can be seen below:

Investigation:
Upon closely examining the viral video, we suspected it to be AI-generated. Subsequently, we scanned the video using the AI detection tool aurigin.ai. According to the results provided by the tool, the video was found to be AI-generated.

A video clip of journalist Palki Sharma is being widely shared on social media. Along with the video, it is being claimed that during Prime Minister Narendra Modi’s recent Middle East visit, she questioned Jordan’s diplomatic protocol.
In the viral clip, Palki Sharma is allegedly seen asking why Jordan’s King Abdullah II did not come to the airport to receive Prime Minister Modi, and whether this indicated a downgrade in the level of welcome.
However, an investigation by the Cyber Peace Foundation found this claim to be misleading. The probe revealed that while the visuals in the viral video are genuine, the audio has been altered using Artificial Intelligence (AI).
On the social media platform ‘X’, a user named “Ammar Solangi” shared this video on 18 December. The post claimed that the video was related to questions raised about Jordan’s diplomatic protocol during Prime Minister Modi’s visit. According to the post, Palki Sharma questioned why King Abdullah II did not receive Prime Minister Modi at the airport. The archive link of the viral post can be seen here: https://ghostarchive.org/archive/26aK0
Verification
During the investigation, the fact-check desk noticed the ‘Firstpost’ logo in the top-left corner of the viral video. Based on this clue, a customized Google search was conducted, which led to the original news report.
The investigation revealed that the viral video was taken from an episode of journalist Palki Sharma’s show “Vantage with Palki Sharma”, which aired on 17 December.
Analysis of the video showed that the visuals appearing at the 33 minutes 30 seconds timestamp in the original report exactly match those used in the viral clip. However, in the original broadcast, Palki Sharma neither questioned Jordan’s protocol nor made any comment about King Abdullah II not being present at the airport.
In the original video, Palki Sharma says:
“Prime Minister Modi was on a diplomatic tour of Jordan, Ethiopia, and Oman, and in Jordan he was received at the airport by the country’s Prime Minister…” The link to the original report can be seen here: https://www.youtube.com/watch?v=-VYZYe9l6Bs

AI Audio Examination
Further investigation involved separating the audio from the viral video and analyzing it using the AI voice detection tool ‘Resemble AI’. The tool’s results confirmed that fake, AI-generated audio had been added over the real footage in the viral clip to spread a misleading claim. A screenshot of the results from this examination can be seen below.
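Separately, the first step of such an examination, extracting the audio track from the video, can be scripted. The sketch below is a minimal illustration assuming the ffmpeg command-line tool is installed and on PATH; the file names are placeholders.

```python
# Minimal sketch: separate the audio track from a video so it can be run
# through a voice-analysis tool. Assumes ffmpeg is installed and on PATH;
# file names are illustrative placeholders.
import subprocess

def extract_audio(video_path: str, wav_path: str = "audio.wav") -> str:
    subprocess.run(
        [
            "ffmpeg", "-y",           # overwrite the output file if it exists
            "-i", video_path,         # input video
            "-vn",                    # drop the video stream
            "-acodec", "pcm_s16le",   # uncompressed 16-bit PCM audio
            "-ar", "16000",           # 16 kHz sample rate, common for speech models
            wav_path,
        ],
        check=True,                   # raise if ffmpeg exits with an error
    )
    return wav_path

# The resulting WAV file can then be uploaded to a detector such as Resemble AI.
extract_audio("viral_clip.mp4")
```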

Conclusion
The video being circulated in the name of journalist Palki Sharma has been tampered with. Her voice has been altered using AI technology, and the claim made regarding the Jordan visit is completely misleading.

Introduction
There has been a recent surge of misinformation across social media claiming that every Indian is entitled to an allowance of ₹2,000 under a "Prime Minister's scheme." The message, circulated widely on almost all platforms, including WhatsApp, Facebook, and Telegram, urges users to click on an unfamiliar link to claim the allowance in their bank accounts.
It seems like a very attractive offer, especially at a time when citizens are coping with the rising cost of living. On closer examination, however, it turns out to be an outright online scam. NewsMobile fact-checked the claim and confirmed that no such scheme exists; the circulating message is designed to mislead ordinary citizens.
This incident is not isolated. Over the years, fraudulent posts falsely offering benefits in the name of the government or well-known brands have been on the rise. These scams are not just misinformation: they exploit trust, lure people into clicking malicious links, and trick them into sharing personal information, posing serious risks to financial and personal security.
Anatomy of the Viral PM Scheme Scam
The viral message, written in Hindi, drew wide attention. It read:
“सभी नागरिकों को PM योजना के तहत दो हज़ार रुपए का भत्ता प्रदान किया गया है अपने bank खाते में प्राप्त करने के लिए click करें."
(English: “All citizens have been provided an allowance of ₹2000 under the PM scheme. Click to receive it in your bank account.”)
Beneath this was a suspicious link which, when tested during the investigation, turned out to be inactive and invalid. A review of government websites and official social media handles found no announcement of any such allowance.
This is a textbook phishing attempt: the scammer manufactures urgency and temptation to lure citizens into clicking a malicious link. While the link may no longer be active, it could well have once redirected users to websites that harvest personal information such as Aadhaar numbers, bank details, or login credentials.
The Broader Problem: Fake Government Scheme Scams
The ₹2,000 PM scheme hoax is part of a wider trend. How do the fraudsters operate? They leverage the credibility of government initiatives to scam citizens. In the past, fake promises have been made about free gas cylinders, cash allowances, subsidised rations, and even job opportunities.
During the COVID-19 pandemic, for instance, fake vaccination-registration links and so-called relief-scheme offers went viral, preying on the fears and vulnerabilities of ill-informed citizens. Likewise, false schemes invoking reputed companies such as Amazon, Flipkart, TATA Group, and Hermès have gone viral, promising free gifts or allowances.
What makes government-themed scams particularly dangerous is their exploitation of people's trust in authority. The average citizen is predisposed to believe a "PM scheme" or "Government Yojana" because of the credibility such announcements carry.
How These Scams Operate
These scams are built for deception and, ultimately, for profit. Fraudsters first craft clickbait messages designed to resemble official communications: they often carry government logos and a mix of Hindi and English text with the phrase "Pradhan Mantri Yojana" to sound legitimate. The messages then redirect users to bogus websites that closely mimic government portals and ask visitors to enter personal information. Once the scammers have this data, they use it for identity theft or bank fraud, or sell it on the dark web. Social engineering plays a large role throughout: urgency cues such as "limited time" and "last chance" push targets to act without thinking. For maximum reach, victims are also asked to forward the message to friends and family, helping the scam spread across WhatsApp, Facebook, and Telegram.
Risks to Citizens
The risks of falling prey to these scams are serious and manifold. The most immediate is financial loss: divulging bank account details, OTPs, or credentials can give attackers the means to drain funds. Identity theft is another common outcome, with stolen Aadhaar, PAN, or other personal information later surfacing in fake loans or SIM activations. Beyond monetary losses, opening malicious links can infect devices with spyware or ransomware, compromising privacy and security. Victims often experience psychological distress from the sense of betrayal or the humiliation of being deceived, which discourages reporting and, in turn, allows such scams to go undetected.
Best Practices for Prevention
Good cyber hygiene and vigilance are the best safeguards against such scams. Citizens should verify every claim against authorised government websites such as https://www.mygov.in or official ministry press statements before believing it, and should never click on suspicious links offering money, gifts, or subsidies. Red flags such as poor grammar, an unofficial domain name, or too-good-to-be-true offers can help identify a scam in time; a simple domain check along these lines is sketched below. Two-factor authentication, up-to-date antivirus software, and secured devices drastically lower the technical risk. Reporting is equally important: any suspicious activity should be flagged at cybercrime.gov.in or the nearest cyber cell so that authorities can trace patterns and issue advisories. Finally, sharing verified fact-checks within one's circles builds collective resilience against misinformation and scams.
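The sketch below illustrates the "unofficial domain name" red flag: it checks whether a link that claims to be a government scheme actually points to an official Indian government domain. The domain allowlist and urgency keywords are assumptions made for this illustration, not a complete or authoritative filter.

```python
# Illustrative heuristic only, not an authoritative filter: flag links that
# claim to be government schemes but do not point to an official Indian
# government domain. The allowlist and urgency keywords are assumptions.
from urllib.parse import urlparse

OFFICIAL_SUFFIXES = (".gov.in", ".nic.in")   # genuine government domain endings
URGENCY_WORDS = ("last chance", "limited time", "claim now", "click")

def looks_suspicious(message: str, url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    on_official_domain = host.endswith(OFFICIAL_SUFFIXES)
    uses_urgency = any(word in message.lower() for word in URGENCY_WORDS)
    # A "scheme" link hosted off official domains and pushed with urgency
    # language is a strong red flag.
    return (not on_official_domain) and uses_urgency

print(looks_suspicious(
    "All citizens get Rs 2000 under PM scheme, claim now!",
    "http://pm-yojana-bonus.example.com/claim",
))  # -> True
```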
Policy and Community Role
While individual awareness is important, countering fake government-scheme scams also requires collective action. Platforms such as WhatsApp, Facebook, and X (Twitter) must strengthen their mechanisms for detecting fraudulent messages, and government bodies must periodically alert citizens to new scams through their official handles and community outreach.
Civil society and fact-checking organisations play an important role in dispelling viral hoaxes. Their work must be amplified in regional languages, precisely because forwarded messages are trusted far more in those communities.
Conclusion
The viral ₹2,000 PM scheme scam is a reminder that not everything that goes viral online can be trusted. Scammers keep inventing new ways to gain trust, spread misinformation, and defraud innocent citizens.
The best defence is awareness and alertness. Citizens must verify claims through official channels before clicking a link, sharing their data, or acting on a message in any way. With proper cyber hygiene and a habit of avoiding suspicious messages, we can blunt the impact of these scams and collaboratively build a secure digital environment.
As India moves deeper into a digital ecosystem, resilience to cyber fraud is not just a matter of individual security but a national priority.
References
- https://www.newsmobile.in/nm-fact-checker/fact-check-viral-post-claiming-pm-scheme-offering-rs-2000-allowance-is-a-scam/
- https://timesofindia.indiatimes.com/business/financial-literacy/investing/beware-of-deepfake-scams-fraudsters-using-ai-videos-to-push-schemes-promising-unrealistic-returns-red-flags-to-watch-out-for/articleshow/124085155.cms
- https://www.business-standard.com/finance/personal-finance/invest-rs-21-000-to-earn-rs-20-lakh-monthly-viral-videos-of-fm-are-fake-125082000517_1.html
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2124728
Executive Summary:
A viral video claims to show a massive cumulonimbus cloud over Gurugram, Haryana, and Delhi NCR on 3rd September 2025. However, our research reveals the claim is misleading. A reverse image search traced the visuals to Lviv, Ukraine, dating back to August 2021. The footage matches earlier reports and was even covered by the Ukrainian news outlet 24 Kanal, which published the story under the headline “Lviv Covered by Unique Thundercloud: Amazing Video”. Thus, the viral claim linking the phenomenon to a recent event in India is false.
Claim:
A viral video circulating on social media claims to show a massive cloud formation over Gurugram, Haryana, and the Delhi NCR region on 3rd September 2025. The cloud appears to be a cumulonimbus formation, which is typically associated with heavy rainfall, thunderstorms, and severe weather conditions.

Fact Check:
After conducting a reverse image search on key frames of the viral video, we found matching visuals from videos that attribute the phenomenon to Lviv, a city in Ukraine. These videos date back to August 2021, thereby debunking the claim that the footage depicts a recent weather event over Gurugram, Haryana, or the Delhi NCR region.


Further research revealed that the Ukrainian news channel 24 Kanal had reported on the Lviv thundercloud phenomenon in August 2021. The report was published under the headline “Lviv Covered by Unique Thundercloud: Amazing Video” (original in Russian, translated into English).

Conclusion:
The viral video does not depict a recent weather event in Gurugram or Delhi NCR, but rather an old incident from Lviv, Ukraine, recorded in August 2021. Verified sources, including Ukrainian media coverage, confirm this. Hence, the circulating claim is misleading and false.
- Claim: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana.
- Claimed On: Social Media
- Fact Check: False and Misleading.