# FactCheck: IAF Shivangi Singh was captured by Pakistan army after her Rafale fighter jet was shot down
Executive Summary:
False information spread on social media that Flight Lieutenant Shivangi Singh, India’s first female Rafale pilot, had been captured by Pakistan during “Operation Sindoor”. The allegations are untrue and baseless as no credible or official confirmation supports the claim, and Singh is confirmed to be safe and actively serving. The rumor, likely originating from unverified sources, sparked public concern and underscored the serious threat fake news poses to national security.
Claim:
An X user posted an image claiming, “Initial image released of a female Indian Shivani singh Rafale pilot shot down in Pakistan”. It was falsely claimed that Flight Lieutenant Shivangi Singh had been captured and that her Rafale aircraft had been shot down by Pakistan.


Fact Check:
After performing a reverse image search, we found an Instagram post about two Indian Air Force pilots—Wing Commander Tejpal (50) and trainee Bhoomika (28)—who had ejected from a Kiran jet trainer during a routine training sortie from Bengaluru before it crashed near Bhogapuram village in Karnataka. The aircraft exploded on impact, but both pilots were later found alive, though injured and exhausted.

We also found a YouTube channel showing that the video in question was old footage and not what it was claimed to be.
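Reverse image searches of the kind used in this fact check typically rely on perceptual hashing, which lets near-identical images match even after resizing or re-encoding. Below is a minimal, illustrative sketch of one such technique (average hashing); the 8x8 pixel grids are hypothetical stand-ins for real, pre-scaled grayscale images, not actual search-engine internals.

```python
# Illustrative sketch of perceptual (average) hashing, one technique behind
# reverse image search. Assumes images are already scaled to 8x8 grayscale;
# the pixel grids below are hypothetical stand-ins for real image data.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid average.
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical grids: an 'original' image vs. a slightly re-encoded copy.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
recompressed = [[min(255, v + 3) for v in row] for row in original]

dist = hamming_distance(average_hash(original), average_hash(recompressed))
print(dist)  # → 0: the re-encoded copy still matches the original
```

Because the hash depends only on each pixel's relation to the image average, small uniform brightness or compression changes leave the bits unchanged, which is why recycled photos from old incidents can be traced back to their source.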

Conclusion:
The false claims that Flight Lieutenant Shivangi Singh was captured by Pakistan and that her Rafale jet was shot down have been debunked. The image used was unrelated and showed IAF pilots from a separate training incident, and several media outlets confirmed that the video made no mention of Ms Singh’s arrest. This highlights the dangers of misinformation, especially where national security is concerned. Verifying facts through credible sources and avoiding the spread of unverified content are essential to maintaining public trust and protecting the reputation of those serving in the armed forces.
- Claim: False claims about Flight Lieutenant Shivangi Singh being captured by Pakistan and her Rafale jet being shot down
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
WhatsApp is one of the leading OTT messaging platforms and has been owned by the tech giant Meta (then Facebook) since 2014. WhatsApp enjoys a user base of nearly 2.24 billion people globally, with almost 487 million users in India. Since its advent, WhatsApp has been the most commonly used messaging app, and it has made such an impact that it is used for both professional and personal purposes. The platform follows guidelines and policies similar to those of its parent company, Meta.
The New Feature
WhatsApp users on the web and desktop can now access one account from multiple devices. Thanks to a new update from Meta, one WhatsApp account may now be used on up to four additional devices. The multi-device capability has been planned for some time and is finally being rolled out to stable WhatsApp users. Each linked device will function independently and will continue to receive messages even if the primary device’s network connection is lost. Note that WhatsApp will automatically log out of all companion devices if the primary smartphone is dormant for an extended period. The four companion devices can be any mix of computers and smartphones. The feature is now available for download on both Android and iOS.
Potential issues
As we go deeper into the digital age, it is the responsibility of tech giants to pair innovation with security by design. New features should therefore be accompanied by coherent safety and security policies or advisories so that users understand their implications. Netizens, for their part, have a civic duty to read the conditions of any app rather than focusing only on the convenience it creates. The following potential issues may arise from the new WhatsApp feature –
- Increased cybercrime- Bad actors no longer need access to multiple SIM cards to commit fraud on the platform, as a single number can now be used on four devices; cybercriminal activity on the platform may therefore increase. It is also pertinent for the platform to create SOPs for fake accounts that use multiple devices, as they pose a direct threat to users and their interests.
- Difficulty in identifying and tracing- Law enforcement agencies (LEAs) will face a significant challenge in identifying and tracing bad actors, as an individual’s involvement through a linked device needs to be given legal validity and scope for investigation. This may also cause issues in evidence handling and analysis.
- Surge in Misinformation and Disinformation- With access to multiple devices, an individual’s screen time is bound to increase. More time spent online means more opportunities for bad actors to spread misinformation and disinformation, making fact-checking of prime importance.
- Potential Oversharing of Personal Data- With the increased accessibility on different devices, it is very easy for the app to seek data from all devices on which the app is running, thus leading to a bigger reservoir of personal data for the platforms and data fiduciaries.
- Higher risk of Phishing, Ransomware and Malware Attacks- As messages can be viewed on every device sharing the same credentials and mobile number, ransomware and malware embedded in messages can spread across multiple devices at once, making such attacks an ever-present threat.
- One number, more criminals- Earlier, cybercriminals forged Aadhaar cards to obtain new SIMs; this feature will enable bad actors to commit crimes and attacks from a single SIM using four different devices.
- Rise in Digital Footprint- As the number of devices increases, users will generate a larger digital footprint. As a tech giant, Meta will have access to a bigger database, which increases the risk of data breaches by third-party actors.
Conclusion
In the fast-paced digital world, it is important to stay updated about new software, technologies and policies for our applications. This was a long-awaited feature from WhatsApp, and its value lies not only in technological advancement but also in the formulation of policies that govern the technology with user trust and safety in mind. Platforms, in synergy with policymakers, need to create a robust framework to accommodate new features and add-ons while staying in compliance with the laws of the land. Awareness of new features and vulnerabilities is a must for all netizens, and spreading the word about safety and security mechanisms is a shared responsibility.

Introduction
In today's era of digitalised communities and connections, social media has become an integral part of our lives. We use social media to connect with friends and family, and it is also used for business purposes. Social media offers numerous opportunities to connect and communicate with larger communities with ease. However, it also poses challenges: while using social media, we come across issues such as inappropriate content, online harassment, online stalking, account hacking, misuse of personal information or data, privacy violations, fake accounts, intellectual property violations, abusive and hateful content, content against the platform’s terms and conditions, and more. To deal with such issues, social media entities have proper reporting mechanisms and terms-and-conditions guidelines in place, addressing them through the platform’s help centre or reporting mechanism.
The Role of Help Centers in Resolving User Complaints:
The help centres are established on platforms to address user complaints and provide satisfactory assistance or resolution. Addressing user complaints is a key component of maintaining a safe and secure digital environment for users. Platform-centric help centres play a vital role in providing users with a resource to seek assistance and report their issues.
Some common issues reported on social media:
- Reporting abusive content: Users can report content that they find abusive, offensive, or in violation of platform policies. These reports are reviewed by the help centre.
- Reporting CSAM (Child Sexual Abuse Material): CSAM content can be reported to platform help centre. Social media platforms have stringent policies in place to address such concerns and ensure a safe digital environment for everyone, including children.
- Reporting Misinformation or Fake News: With the proliferation of misinformation online, users can report content that they find or suspect to be misleading or false; fact-checking bodies are employed to assess the accuracy of reported content.
- Content violating intellectual property rights: If there is a violation or infringement of any intellectual property work, it can be reported on the platform.
- Violation of commercial policies: Products listed on social media platforms also need to comply with the platform’s Commercial Policies.
Submitting a Complaint to the Indian Grievance Officer for Facebook:
A user can report their issue through the websites mentioned in the references below:
The user can go to the Facebook Help Center and open the “Reporting a Problem” section, then choose the issue that best describes the complaint. For example, if you have encountered inappropriate or abusive content, select the ‘I found inappropriate or abusive content’ option.
Here is a list of issues which you can report on Facebook:
- My account has been hacked.
- I've lost access to a page or a group I used to manage.
- I've found a fake profile or a profile that's pretending to be me.
- I am being bullied or harassed.
- I found inappropriate or abusive content.
- I want to report content showing me in nudity/partial nudity or in a sexual act.
- I (or someone I am legally responsible for) appear in content that I do not want to be displayed.
- I am a law enforcement official seeking to access user data.
- I am a government official or a court officer seeking to submit an order, notice or direction.
- I want to download my personal data or report an issue with how Facebook is processing my data.
- I want to report an Intellectual Property infringement.
- I want to report another issue.
Then, describe your issue, attach supporting evidence such as screenshots, and submit your report. After submitting, you will receive a confirmation that your report has been lodged with the platform. The platform will review the complaint within the stipulated time period, and users can also check the status of their filed complaint. After review, if the content violates any standard policy, terms and conditions, or privacy policy of the platform, the platform will take down the content or take other appropriate action.
Conclusion:
It is important to be aware of your rights in the digital landscape and to understand how to report issues or grievances on social media platforms effectively. By using a platform’s help centre or reporting mechanism, users can file complaints and contribute to a safer, more responsible online environment. Social media platforms have compliance frameworks and privacy policies in place to meet community standards and legal requirements. So, whenever you encounter an issue on social media, report it on the platform and contribute to a safer digital environment.
References:
- https://www.cyberyodha.org/2023/09/how-to-submit-complaint-to-indian.html
- https://transparency.fb.com/en-gb/enforcement/taking-action/complaints-handling-process/
- https://www.facebook.com/help/contact/278770247037228
- https://www.facebook.com/help/263149623790594
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies are capable of creating manipulated audio and video content, propagating political propaganda, defaming individuals, and inciting societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation involves expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, resulting in the exploitation of content that already exists on the internet. One prominent example is AI-powered bots flooding social media platforms with fake news at a scale and speed that make it impossible for humans to track whether each item is true or false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense does not automatically guarantee the ability to parse the nuances of social media content for authenticity and impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation relate to elections, public health, and communal issues. These topics share one common factor: they evoke strong emotions, and such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content makes it difficult to hold perpetrators accountable, given the massive amount of data generated. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Therefore, developing strategies catering to the multilingual population is necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that is specific in its application to AI-generated content. It should include stricter penalties for the origination and dissemination of fake content, proportionate to its consequences. The framework should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- The primary aim should be to encourage collaborations between tech companies, cybersecurity organisations, academic institutions and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
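One building block of the real-time detection tools mentioned above is matching incoming posts against a database of already-debunked claims. The sketch below illustrates the idea with simple token-overlap (Jaccard) similarity; the sample claims, the `flag_post` helper, and the threshold are illustrative assumptions, and production systems would use trained ML models rather than this heuristic.

```python
# Minimal sketch of automated misinformation flagging: compare incoming
# posts against known debunked claims using token-overlap (Jaccard)
# similarity. The claim list and threshold here are illustrative only.

def tokens(text):
    """Split text into a set of lowercase word tokens."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity between two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Hypothetical database of claims already debunked by fact-checkers.
DEBUNKED_CLAIMS = [
    "rafale pilot shivangi singh captured by pakistan",
    "drinking hot water cures the virus",
]

def flag_post(post, threshold=0.5):
    """Return True if the post closely matches a known debunked claim."""
    post_tokens = tokens(post)
    return any(jaccard(post_tokens, tokens(claim)) >= threshold
               for claim in DEBUNKED_CLAIMS)

print(flag_post("rafale pilot shivangi singh captured by pakistan army"))  # → True
print(flag_post("weather update for bengaluru today"))                     # → False
```

A matched post would not be removed automatically but routed to human fact-checkers, which is why such tools complement, rather than replace, the collaborations and literacy programs listed above.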
Conclusion
AI-generated misinformation presents a significant threat to India, and it is safe to say that the risks posed are at scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge of AI-generated misinformation and counteract it by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. AI technologies are misused by bad actors to create hyper-realistic fake content including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defense frameworks and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62