#FactCheck: IAF Shivangi Singh was captured by Pakistan army after her Rafale fighter jet was shot down
Executive Summary:
False information spread on social media that Flight Lieutenant Shivangi Singh, India’s first female Rafale pilot, had been captured by Pakistan during “Operation Sindoor”. The allegations are untrue and baseless as no credible or official confirmation supports the claim, and Singh is confirmed to be safe and actively serving. The rumor, likely originating from unverified sources, sparked public concern and underscored the serious threat fake news poses to national security.
Claim:
An X user posted a claim stating, “Initial image released of a female Indian Shivani singh Rafale pilot shot down in Pakistan”. The post falsely claimed that Flight Lieutenant Shivangi Singh had been captured and that her Rafale aircraft had been shot down by Pakistan.


Fact Check:
A reverse image search led us to an Instagram post about two Indian Air Force pilots, Wing Commander Tejpal (50) and trainee Bhoomika (28), who had ejected from a Kiran jet trainer during a routine training sortie from Bengaluru before it crashed near Bhogapuram village in Karnataka. The aircraft exploded upon impact, but both pilots were later found alive, though injured and exhausted.

We also found a YouTube channel showing that the video was from an earlier incident and not what it was claimed to be.

Conclusion:
The false claims about Flight Lieutenant Shivangi Singh being captured by Pakistan and her Rafale jet being shot down have been debunked. The image used was unrelated and showed IAF pilots from a separate training incident. Several media outlets also confirmed that the circulating video made no mention of Ms. Singh’s capture. This highlights the dangers of misinformation, especially concerning national security. Verifying facts through credible sources and avoiding the spread of unverified content is essential to maintaining public trust and protecting the reputation of those serving in the armed forces.
- Claim: False claims about Flight Lieutenant Shivangi Singh being captured by Pakistan and her Rafale jet being shot down
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

The World Economic Forum’s Global Risks Report, drawing on a survey conducted in September 2023, found that 53% of respondents ranked AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘CoPilot’ were inaccurate one-third of the time when queried about election-related data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
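To make the detection problem concrete, below is a minimal, purely illustrative sketch of a baseline text classifier that tries to separate human-written from machine-generated snippets. The handful of labelled examples and the TF-IDF plus logistic regression setup are assumptions chosen for brevity; real detectors are trained on large, carefully labelled corpora, typically with transformer models, and still produce false positives and negatives.

```python
# Toy sketch of AI-text detection: TF-IDF features + logistic regression.
# The labelled examples are invented for illustration only; a real detector
# needs large, representative corpora and remains imperfect.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Eyewitnesses said the flood waters rose within minutes last night.",
    "Officials confirmed two arrests after the stadium brawl on Sunday.",
    "In conclusion, it is important to note that the aforementioned event underscores broader trends.",
    "Furthermore, it is worth highlighting that numerous factors contributed to the outcome.",
]
train_labels = [0, 0, 1, 1]  # 0 = human-written, 1 = machine-generated (illustrative labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(train_texts, train_labels)

sample = "Moreover, it is important to note that the election results reflect wider patterns."
print(detector.predict_proba([sample]))  # [probability human, probability machine]
```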
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfill its potential in a positive manner, because there is widespread cynicism about the technology, and rightly so. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulatory requirements.
Regulatory sandboxes have been used across many industries; the most prominent recent example is the fintech sector, such as the UK’s Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
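As a rough illustration of what "evaluating effectiveness" inside a sandbox could look like, the sketch below compares a candidate detection tool's flags against human-reviewed labels collected during a trial and reports standard precision and recall metrics. The sample data and the single-pass evaluation are simplifying assumptions for illustration, not a prescribed sandbox protocol.

```python
# Sketch of a sandbox-style evaluation: compare a detection tool's flags with
# human-reviewed ground truth gathered during the trial. Data is illustrative.
from sklearn.metrics import precision_score, recall_score, f1_score

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = misinformation, per human reviewers
tool_flags   = [1, 0, 0, 1, 0, 1, 1, 0]  # 1 = flagged by the candidate AI tool

print("precision:", precision_score(ground_truth, tool_flags))
print("recall:   ", recall_score(ground_truth, tool_flags))
print("f1 score: ", f1_score(ground_truth, tool_flags))
```

Reporting metrics like these to the regulator at the end of a trial is one way the sandbox can inform policymaking before any wide-scale rollout.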
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test solutions that can help regulate the misinformation that AI technologies create. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and about the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

Introduction
In the boundless world of the internet—a digital frontier rife with both the promise of connectivity and the peril of deception—a new spectre stealthily traverses the electronic pathways, casting a shadow of fear and uncertainty. This insidious entity, cloaked in the mantle of supposed authority, preys upon the unsuspecting populace navigating the virtual expanse. And in the heart of India's vibrant tapestry of diverse cultures and ceaseless activity, Mumbai stands out—a sprawling metropolis of dreams and dynamism, yet also the stage for a chilling saga, a cyber charade of foul play and fraud.
The city's relentless buzz and hum were punctuated by a harrowing tale that unwound within the unassuming confines of a Kharghar residence, where a 46-year-old individual's brush with this digital demon would unfold. His typical day veered into the remarkable as his laptop screen lit up with an ominous pop-up, infusing his routine with shock and dread. This deceptive pop-up, masquerading as an official communication from the National Crime Records Bureau (NCRB), demanded an exorbitant fine of Rs 33,850 for ostensibly browsing adult content, an offence he had not committed.
The Cyber Deception
This tale of deceit and psychological warfare is not unique, nor is it the first of its kind. It finds echoes in the tragic narrative that unfurled in September 2023, far south in the verdant land of Kerala, where a young life was cut short. A 17-year-old boy from Kozhikode, caught in the snare of similar fraudulent claims of NCRB admonishment, was driven to the extreme despair of taking his own life after being coerced to dispense Rs 30,000 for visiting an unauthorised website, as the pop-up falsely alleged.
Sewn with a seam of dread and finesse, the pop-up that appeared in the recent Navi Mumbai case highlights the virtual tapestry of psychological manipulation, woven with threatening threads designed to entrap and frighten. The message, served from a fake NCRB website created to dupe people, pronounces that the user has engaged in illicit browsing and delivers an ultimatum: pay the fine within six hours, or face the critical implications of a criminal case. The panacea it offers is simple: settle the demanded amount and the shackles on the browser shall be lifted.
It was amidst this web of lies that the man from Kharghar found himself entangled. The story, as retold by his brother, an IT professional, reveals a brush with disaster that was narrowly averted. His brother's panicked call, and the rush of relief upon realising the scam, underscore the ruthless efficiency of these cyber predators. They leverage sophisticated deceptive tactics, even specifying convenient online payment methods to ensnare their prey into swift compliance.
A glimmer of reason pierced through the narrative as Maharashtra State cyber cell special inspector general Yashasvi Yadav illuminated the fraudulent nature of such claims. With authoritative clarity, he revealed that no legitimate government agency would solicit fines in such an underhanded fashion. Rather, official procedures involve FIRs or court trials—a structured route distant from the scaremongering of these online hoaxes.
Expert Take
Concurring with this perspective, cyber experts note that such pop-ups are crude facsimiles of official notices. By tapping into primal fears and conjuring up grave consequences, the fraudsters follow a time-worn strategy, cloaking their ill intentions in the guise of governmental or legal authority, a phantasm of legitimacy that prompts hasty financial decisions.
To pierce the veil of this deception, D. Sivanandhan, the former Mumbai police commissioner, categorically denounced the absurdity of the hoax. With a voice tinged by experience and authority, he made it abundantly clear that the NCRB's role did not encompass the imposition of fines without due process of law—a cornerstone of justice grossly misrepresented by the scam's premise.
New Lesson
This scam, a devilish masquerade given weight by deceit, might surge with the pretence of novelty, but its underpinnings are far from new. The manufactured pop-ups that propagate across corners of the internet issue fabricated pronouncements, feign browser lockdowns, and raise the spectre of being implicated in taboo behaviours. The elaborate ruse doesn't halt at mere declarations; it painstakingly fabricates a semblance of procedural legitimacy by preemptively setting penalties and detailing methods for immediate financial redress.
Yet another dimension of the scam further bolsters the illusion—the ominous ticking clock set for payment, endowing the fraud with an urgency that can disorient and push victims towards rash action. With a spurious 'Payment Details' section, complete with options to pay through widely accepted credit networks like Visa or MasterCard, the sham dangles the false promise of restored access, should the victim acquiesce to their demands.
Conclusion
In an era where the demarcation between illusion and reality is nebulous, the impetus for individual vigilance and scepticism is ever-critical. The collective consciousness, the shared responsibility we hold as inhabitants of the digital domain, becomes paramount to withstand the temptation of fear-inducing claims and to dispel the shadows cast by digital deception. It is only through informed caution, critical scrutiny, and a steadfast refusal to capitulate to intimidation that we may successfully unmask these virtual masquerades and safeguard the integrity of our digital existence.
References:
- https://www.onmanorama.com/news/kerala/2023/09/29/kozhikode-boy-dies-by-suicide-after-online-fraud-threatens-him-for-visiting-unauthorised-website.html
- https://timesofindia.indiatimes.com/pay-rs-33-8k-fine-for-surfing-porn-warns-fake-ncrb-pop-up-on-screen/articleshow/106610006.cms
- https://www.indiatoday.in/technology/news/story/people-who-watch-porn-receiving-a-warning-pop-up-do-not-pay-it-is-a-scam-1903829-2022-01-24

Executive Summary:
A video that has gone viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. The video has been widely shared. However, our research indicates that the video has been altered using AI and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated investment platform that promises daily earnings of ₹15,00,000 from an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we performed a keyword search for “Nirmala Sitharaman investment program” but found no such investment-related scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference held after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, showing that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip was manipulated through voice cloning.
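Reverse searching a video usually means reverse searching individual frames. The sketch below shows that preparatory step only, assuming OpenCV is installed and that the clip has been saved locally as "viral_clip.mp4" (a hypothetical filename); the extracted frames can then be uploaded to a reverse image search engine.

```python
# Minimal sketch: extract roughly one frame per second from a saved video so
# each frame can be run through a reverse image search. Filename is illustrative.
import cv2

cap = cv2.VideoCapture("viral_clip.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25   # fall back to 25 if FPS is unreadable
frame_idx, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % fps == 0:                  # keep roughly one frame per second
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} frames for reverse image search")
```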

Conclusion:
The viral video circulating on social media, which shows Union Finance Minister Nirmala Sitharaman endorsing a new government investment project, is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading