#FactCheck: IAF pilot Shivangi Singh was captured by the Pakistan Army after her Rafale fighter jet was shot down
Executive Summary:
False information spread on social media claiming that Flight Lieutenant Shivangi Singh, India’s first female Rafale pilot, had been captured by Pakistan during “Operation Sindoor”. The allegation is untrue and baseless: no credible or official confirmation supports it, and Singh is confirmed to be safe and actively serving. The rumour, likely originating from unverified sources, sparked public concern and underscored the serious threat fake news poses to national security.
Claim:
An X user posted, stating: “Initial image released of a female Indian Shivani singh Rafale pilot shot down in Pakistan” (sic). It was falsely claimed that Flight Lieutenant Shivangi Singh had been captured and that her Rafale aircraft had been shot down by Pakistan.


Fact Check:
A reverse image search led us to an Instagram post about two Indian Air Force pilots, Wing Commander Tejpal (50) and trainee Bhoomika (28), who had ejected from a Kiran jet trainer during a routine training sortie from Bengaluru before it crashed near Bhogapuram village in Karnataka. The aircraft exploded on impact, but both pilots were later found alive, though injured and exhausted.

We also found a YouTube channel showing that the video in question dates from a past incident and is not what it was claimed to be.
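For readers interested in how such image checks can be automated, below is a minimal sketch of one common fact-checking technique: comparing a circulated image against a suspected original using perceptual hashing. It assumes the Pillow and ImageHash Python libraries are installed; the filenames are hypothetical placeholders, and reverse image search engines use more sophisticated pipelines than this.

```python
# A minimal sketch of image-similarity checking with perceptual hashing.
# Assumes the Pillow and ImageHash libraries; filenames are hypothetical.
from PIL import Image
import imagehash

def likely_same_image(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if two images appear to be near-duplicates.

    phash produces a 64-bit fingerprint that survives resizing and
    recompression; subtracting two hashes gives their Hamming distance
    (0 = identical, 64 = completely different).
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold

# Compare the circulated image against an archived photo from the
# earlier trainer-jet crash coverage.
if likely_same_image("viral_claim.jpg", "archived_crash_photo.jpg"):
    print("Match: the viral photo appears recycled from the older incident.")
```

A match at a low Hamming distance indicates the viral image is a re-used copy of older material rather than new evidence, which is the pattern this fact check found.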

Conclusion:
The false claims that Flight Lieutenant Shivangi Singh was captured by Pakistan and that her Rafale jet was shot down have been debunked. The image used was unrelated, showing IAF pilots from a separate training incident, and several media outlets confirmed that the circulating video made no mention of Ms. Singh’s capture. This episode highlights the dangers of misinformation, especially where national security is concerned. Verifying facts through credible sources and avoiding the spread of unverified content is essential to maintaining public trust and protecting the reputation of those serving in the armed forces.
- Claim: False claims about Flight Lieutenant Shivangi Singh being captured by Pakistan and her Rafale jet being shot down
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
Social media has emerged as a leading source of communication and information, and its relevance only grows during natural disasters, when governments and disaster relief organisations rely on it to disseminate aid- and relief-related resources and communications instantly. For affected populations, social media has become a primary source of information on relief resources; community forums offering aid and official government channels have enabled efficient and timely administration of relief initiatives.
However, given the nature of social media, misinformation during natural disasters has also emerged as a primary concern, one that severely hampers aid administration. The disaster-disinformation network runs sensationalised influence campaigns against communities at their most vulnerable. Victims seeking reliable resources during natural calamities often encounter these hostile campaigns instead and may experience delayed access, or no access, to necessary healthcare, significantly impacting their recovery and survival. Such delays can worsen medical conditions and increase the death toll among those affected. Victims may also lack clear information on the appropriate agencies to approach for assistance, causing confusion and further delays in receiving help.
Misinformation Threat Landscape during Natural Disaster
During the 2018 floods in Kerala, a fake video about water leakage from the Mullaperyar Dam created panic among citizens and negatively impacted rescue operations. Similarly, in 2017, reports claimed that Hurricane Irma had displaced sharks onto a Florida highway; similar stories, accompanied by the same image, resurfaced following Hurricanes Harvey and Florence. Beyond domestic panic, a disaster-affected nation may face international criticism and fail to receive necessary support due to its perceived inability to manage the crisis effectively. This lack of confidence from the global community can further exacerbate the challenges the nation faces, leaving it more vulnerable and isolated in its time of need.
The spread of misinformation through social media severely hinders aid and relief operations during natural disasters. It undermines first responders' efforts to counteract rumours and false information, and it erodes public trust in government, media, and non-governmental organisations (NGOs), which are often the first point of contact for both victims and officials owing to their familiarity with the region and the community. In Moldova, for instance, foreign influence operations exploited an ongoing drought to create divisions between the semi-autonomous regions of Transnistria and Gagauzia and the central government in Chisinau; news coverage critical of the government leveraged economic and energy insecurities to incite civil unrest in an already unstable region. First responders may also struggle to locate victims and move them to safety, complicating rescue operations. The inability to efficiently find and evacuate those in need can mean prolonged exposure to dangerous conditions and a higher risk of injury or death.
Further, international aid from other countries may be impeded, affecting the overall relief effort. Without timely and coordinated support from the global community, the disaster response may prove insufficient, leaving many needs unmet. Misinformation can also delay the deployment of military assistance, reducing the effectiveness of rescue and relief operations; military assistance often plays a crucial role in disaster response, and any delay hinders efforts to provide immediate, large-scale aid.
Misinformation also causes relief resources to be allocated to unaffected areas, which in turn deprives regions in actual need. Following the April 2015 earthquake in Nepal, a Facebook post claimed that 300 houses in Dhading needed aid. Shared over 1,000 times, it reached around 350,000 people within 48 hours; with the average Facebook user having roughly 350 contacts, 1,000 shares translates to approximately 350,000 potential viewers. The originator aimed to seek help for Ward #4's villagers via social media. However, the need had already been reported a week earlier on quakemap.org, a crisis-mapping database managed by Kathmandu Living Labs. Helping Hands, a humanitarian group, was notified on May 7, and by May 11 Ward #4 had received essential food and shelter. The re-sharing and sensationalisation of outdated information could thus have wasted relief efforts by redirecting critical resources to a region that had already been served.
Policy Recommendations
Perhaps the most important step in combating misinformation during natural disasters is increased public education, together with the rapid, widespread dissemination of early warnings. This was best witnessed in Bangladesh: in November 1970, a tropical cyclone combined with a high tide struck the country's southeast, leaving more than 300,000 people dead and 1.3 million homeless. When a comparable cyclone and storm surge hit the same area in May 1985, local dissemination of disaster warnings was much improved and people were better prepared to respond. The loss of life, while still high at about 10,000, was roughly 3% of the 1970 toll. Similarly, when a devastating cyclone struck the same area in May 1994, fewer than 1,000 people died. In India, the 1977 cyclone in Andhra Pradesh killed 10,000 people, but a similar storm in the same area 13 years later killed only 910. The dramatic difference in mortality owed to a new early-warning system connected to radio stations that alerted people in low-lying areas.
Additionally, location-based filtering for monitoring social media during disasters is considered another best practice for curbing misinformation. However, agencies should be aware that this method may miss local information from devices without geolocation enabled: a 2012 Georgia Tech study found that less than 1.4 percent of Twitter content is geolocated, and a study of Hurricane Sandy data by Humanity Road and Arizona State University found a significant decline in geolocation data during weather events.
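To make the caveat concrete, here is a minimal sketch of what location-based filtering looks like in code and why it undercounts. The post structure and coordinates are hypothetical stand-ins; real platform APIs differ, but the blind spot is the same: posts without geotags never enter the filtered stream.

```python
# A minimal sketch of geofence filtering of disaster-related posts.
# The Post structure is hypothetical; real social media APIs differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    lat: Optional[float] = None  # None when the device has geolocation off
    lon: Optional[float] = None

def in_disaster_zone(post: Post, bbox: tuple) -> bool:
    """Keep a post only if it carries coordinates inside the bounding box
    (min_lat, min_lon, max_lat, max_lon)."""
    if post.lat is None or post.lon is None:
        return False  # ungeotagged posts are silently dropped: the blind spot
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= post.lat <= max_lat and min_lon <= post.lon <= max_lon

posts = [
    Post("Road to the shelter is flooded", lat=9.93, lon=76.26),
    Post("We need boats near the dam"),  # no geotag: invisible to the filter
]
kerala_bbox = (8.0, 74.8, 12.8, 77.5)  # rough bounding box, for illustration
matched = [p for p in posts if in_disaster_zone(p, kerala_bbox)]
print(f"{len(matched)} of {len(posts)} posts matched the geofence")
```

With geotagging rates as low as the studies above suggest, most genuine local reports fall into the second category, which is why geofencing should supplement, not replace, keyword and network-based monitoring.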
Agencies should also publish frequent updates to promote transparency and control the message. In emergency management and disaster recovery, digital volunteers, trusted agents who provide online support, can assist overwhelmed on-site personnel by managing the vast volume of social media data. Trained digital volunteers help direct affected individuals to critical resources and disseminate reliable information.
Enhancing the quality of communication requires double-verifying information to eliminate ambiguity and reduce the impact of misinformation, rumours, and false information. This approach helps prevent alert fatigue and "cry wolf" scenarios by ensuring that only accurate, relevant information is disseminated. Prioritising ground truth over assumptions and swiftly releasing verified information, or at least acknowledging the situation, can bolster an agency's credibility, which in turn allows it to collaborate effectively with truth amplifiers. Prebunking and debunking are also effective ways to counter misinformation and build cognitive defences that help people recognise red flags. Additionally, evaluating the relevance of various social media information is crucial for maintaining clear and effective communication.
References
- https://www.nature.com/articles/s41598-023-40399-9#:~:text=Moreover%2C%20misinformation%20can%20create%20unnecessary,impacting%20the%20rescue%20operations29.
- https://www.redcross.ca/blog/2023/5/why-misinformation-is-dangerous-especially-during-disasters
- https://www.soas.ac.uk/about/blog/disinformation-during-natural-disasters-emerging-vulnerability
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, Murf, etc., which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the most targeted sector for deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: The usage of AI will allow cybercriminals to deploy malware that can adapt in real-time to carry out phishing attacks and inundate targets with increased speed and variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can overwhelm defenders with their speed and scale, and cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will have to adopt AI-based cybersecurity for detection and response, as generative AI becomes increasingly essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image and voice-generation/manipulation technologies, enhanced authentication measures such as token-based authentication or other hardware-based measures, abnormal behaviour detection, multi-device push notifications, geolocation verification, etc. can be used to improve prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented (a minimal token-based sketch follows this list).
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
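As a concrete illustration of the token-based MFA mentioned above, here is a minimal sketch using time-based one-time passwords (TOTP) with the pyotp library. It is a sketch under stated assumptions, not a production design: in practice the secret is provisioned once to an authenticator app or hardware token and stored server-side, and TOTP is one layer among several.

```python
# A minimal sketch of token-based MFA using time-based one-time
# passwords (TOTP). Uses the pyotp library; in a real deployment the
# secret is provisioned to the user's authenticator device at
# enrolment, not generated and read inline as simulated here.
import pyotp

# Enrolment: generate and persist a per-user secret (server side).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user reads the current 6-digit code from their device...
code_from_user = totp.now()  # simulated here for demonstration

# ...and the server verifies it, tolerating one 30-second step of
# clock drift. A deepfaked voice or face cannot supply this code,
# which is what makes the factor resistant to impersonation.
assert totp.verify(code_from_user, valid_window=1)
print("Second factor accepted")
```

The design point is that the factor being checked is possession of a device-bound secret, something voice or image synthesis cannot replicate, unlike biometric-only checks.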
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr
Misinformation spread has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push to combat misinformation is rooted in the growing awareness that misinformation leads to sentiment exploitation and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. The spread of misinformation online poses a risk not only to the everyday content consumer and the sharer, but also to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have long been scrutinised for what content they allow to be published and what they do not. It is important to understand not only how misinformation affects platform users, but also its consequences for the platforms themselves. These consequences highlight the complex environment in which social media platforms operate, where the stakes are high from both a business and a societal perspective. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages attributed to misinformation; defamation suits are standard practice when dealing with misinformation-causing vectors. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms operate on a trust model in which users trust the platform and its content. If users lose trust in the platform because of misinformation, engagement can fall. This can also lead to negative coverage that affects public opinion of the brand, its value and its long-term viability.
- Financial Consequences: Advertisers and business partners may end their engagement with platforms accused of enabling misinformation, which can lead to a revenue drop. This can have major long-term consequences for the platform's financial health, such as a decline in stock price.
- Operational Consequences: To address regulatory scrutiny, platforms may need to adopt stricter content moderation policies and other resource-intensive measures, increasing their operational costs.
- Market Position Loss: If the reliability of a platform is in question, users can migrate to other platforms, resulting in a loss of market share to competitors that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: Platforms must balance freedom of expression against the prevention of misinformation. Stricter content moderation can invite accusations of censorship if users feel their opinions are unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, since they host content that affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, which means platforms bear a social responsibility as well.
Misinformation has become a global issue, and digital platforms must be vigilant as they navigate varying legal, cultural and social expectations across jurisdictions. The diversity of approaches has complicated efforts to create standardised practices and policies, leading platforms to adopt flexible strategies for managing misinformation that align with both global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implementing more robust content moderation systems that combine AI with human oversight to identify and remove misinformation effectively (a minimal triage sketch follows this list).
- Enhancing transparency in platform policies for content moderation and decision-making, which would build user trust and reduce the backlash associated with perceived censorship.
- Partnering with fact-checkers to help verify the accuracy of content and reduce the spread of misinformation.
- Engaging with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Investing in media literacy initiatives that help users critically evaluate the content available to them.
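To illustrate the first measure, below is a minimal sketch of an AI-plus-human triage loop. The `misinfo_score` function is a hypothetical placeholder for a trained classifier, and the thresholds are illustrative policy choices, not established values; the point is the routing logic, where only ambiguous cases consume scarce human moderator time.

```python
# A minimal sketch of hybrid AI + human-oversight moderation triage.
# misinfo_score stands in for a real trained classifier (hypothetical);
# the 0.3 / 0.9 thresholds are illustrative policy choices.
from enum import Enum

class Action(Enum):
    PUBLISH = "publish"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"

def misinfo_score(text: str) -> float:
    """Placeholder for a model returning P(misinformation) in [0, 1]."""
    return 0.7 if "miracle cure" in text.lower() else 0.1

def triage(text: str, low: float = 0.3, high: float = 0.9) -> Action:
    """Auto-publish clear cases, auto-block near-certain ones, and route
    everything ambiguous to a human moderator."""
    score = misinfo_score(text)
    if score < low:
        return Action.PUBLISH
    if score >= high:
        return Action.BLOCK
    return Action.HUMAN_REVIEW

print(triage("Local shelter opens at 8am"))           # Action.PUBLISH
print(triage("This miracle cure ends the outbreak"))  # Action.HUMAN_REVIEW
```

Keeping humans in the loop for the middle band is what addresses the censorship concern raised earlier: borderline speech gets a person's judgment rather than an automated takedown.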
Final Takeaways
The proliferation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. As a result, there is a critical need to balance the interlinked but seemingly opposed priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust content moderation systems with built-in transparency, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential for maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds