# Fact Check: Pakistan’s Airstrike Claim Uses Video Game Footage
Executive Summary:
A widely circulated claim on social media, including a post from the official X account of the Government of Pakistan, alleges that the Pakistan Air Force (PAF) carried out an airstrike on India, supported by a viral video. However, according to our research, the video used in these posts is footage from the video game Arma 3 and has no connection to any real-world military operation. The use of such misleading content fuels false narratives about a conflict between India and Pakistan and can create unnecessary fear and confusion among the public.

Claim:
Viral social media posts, including one from the official Government of Pakistan X handle, claim that the PAF launched a successful airstrike against Indian military targets. The accompanying footage shows jets firing missiles and explosions on the ground. The video is presented as recent, factual evidence of heightened military tensions.


Fact Check:
Our research, using reverse image search, found that the videos circulating online claiming to show Pakistan launching an attack on India under the name 'Operation Sindoor' are misleading. There is no credible evidence or reliable reporting to support the existence of any such operation. The Press Information Bureau (PIB) has also confirmed that the video being shared is false and misleading. During our research, we traced the footage to the video game Arma 3 on YouTube; it appears to have been repurposed to create the illusion of a real military conflict. This strongly indicates that fictional content is being used to propagate a false narrative, likely with the intention of spreading fear and confusion by portraying a conflict that never actually took place.
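Reverse image search of video footage typically works by comparing perceptual fingerprints of individual frames rather than exact bytes, so that a re-encoded or cropped copy still matches the original. The sketch below implements a minimal average-hash (aHash) over a small grayscale pixel grid; it is a toy illustration of the underlying idea only, and the `frame_a`/`frame_b` pixel values are hypothetical, not taken from any real clip.

```python
# Minimal average-hash (aHash) sketch for comparing video frames.
# Real fact-checking workflows use reverse image search engines and
# dedicated tools; this only illustrates the underlying technique.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. a small thumbnail.
    Returns a bit string: '1' where the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 'frames': an original clip frame and a re-uploaded,
# slightly re-encoded copy with small pixel-level differences.
frame_a = [[10, 200, 10, 200], [200, 10, 200, 10],
           [10, 200, 10, 200], [200, 10, 200, 10]]
frame_b = [[12, 198, 11, 201], [199, 12, 202, 9],
           [11, 197, 10, 203], [198, 11, 200, 12]]

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # 0: the re-encoded copy produces an identical hash
```

Because the hash depends only on each pixel's relation to the frame's mean brightness, small compression artefacts do not change it, which is what lets fact-checkers match a viral clip back to its original upload.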


Conclusion:
Pakistan is using widely shared misinformation videos to target India with false information. There is no reliable evidence to support the claim, and the footage is misleading and unrelated to any real military operation. Such false information must be countered quickly because it has the potential to cause needless panic. According to authorities and fact-checking groups, no such operation has taken place.
- Claim: Viral social media posts claim PAF attack on India
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. Vulnerability to misinformation differs from person to person, depending on psychological factors such as personality traits, familial background and digital literacy, combined with contextual factors like the information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, and psychological inoculation campaigns on social media have proven effective at improving misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, helps people build generalised resilience so they can analyse and resist manipulation without prior knowledge of the specific misleading content. A parallel may be drawn with broad-spectrum antibiotics, which can fight infections and protect the body before the particular pathogen at play has been identified.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter genuine false information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/logic-based Inoculation: Individuals can educate themselves about typical manipulative strategies used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting it: through logical reasoning, individuals can see such tactics for what they are, namely attempts to distort the facts or spread misleading information. Equipped with the capacity to discern weak arguments and misleading methods, individuals can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, building resilience on social media platforms, and media literacy programmes, can improve the overall efficacy of our attempts to combat misinformation. Expert organisations and people can build a stronger defence against the spread of misleading information by using many actions at the same time.
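The technique-based inoculation described above (teaching people to spot emotionally manipulative language, conspiratorial framing and false urgency) can be illustrated with a toy cue-flagger. The cue phrases below are invented examples for illustration, not an established media-literacy lexicon, and real interventions rely on far richer signals than substring matching.

```python
# Toy illustration of technique-based inoculation: flag a few common
# manipulation cues in a post. The cue lists are illustrative only.

MANIPULATION_CUES = {
    "emotional language": ["outrageous", "terrifying", "shocking"],
    "conspiratorial framing": ["they don't want you to know", "wake up"],
    "false urgency": ["share before it's deleted", "act now"],
}

def flag_cues(post):
    """Return the names of manipulation techniques whose cue phrases
    appear in the post (case-insensitive)."""
    text = post.lower()
    return [name for name, phrases in MANIPULATION_CUES.items()
            if any(p in text for p in phrases)]

post = ("SHOCKING footage they don't want you to know about - "
        "share before it's deleted!")
print(flag_cues(post))
# ['emotional language', 'conspiratorial framing', 'false urgency']
```

The point of the exercise, in inoculation terms, is not the detector itself but the cue list: once a reader has internalised these patterns as warning signs, encountering them in the wild triggers scrutiny rather than belief.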
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of the Inoculation Theory on social media platforms can be seen as an effective strategy point for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games, collaborations with influencers and trusted sources that help design and deploy targeted campaigns whilst also educating netizens about the usefulness of Inoculation Theory so that they can practice critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: these interventions on social media platforms can be utilised to advance digital tools and algorithms so that the interventions and their impact are amplified. Additionally, social media platforms can explore personalised inoculation strategies and customised inoculation approaches for different audiences, so as to better serve more people. One elegant solution for social media platforms is to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This will come in handy especially during sensitive and special times such as the ongoing elections, where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, helping people develop critical thinking and empowering individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritise extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and preemptively counter false narratives. By learning about malicious content and malicious intent in advance, people can build mental armour, or mental defences, against what they may encounter in the future. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf

Introduction
Snapchat's Snap Map redefined location sharing with an ultra-personalised feature that allows users to track where they and their friends are, discover hotspots, and even explore events worldwide. In November 2024, Snapchat introduced a new addition to its Family Center, aiming to bolster teen safety. This update enables parents to request and share live locations with their teens, set alerts for specific locations, and monitor who their child shares their location with.
While designed with safety in mind, such tracking tools raise significant privacy concerns. Misuse of these features could expose teens to potential harm, amplifying the debate around safeguarding children’s online privacy. This blog delves into the privacy and safety challenges Snap Map poses under existing data protection laws, highlighting critical gaps and potential risks.
Understanding Snap Map: How It Works and Why It’s Controversial
Snap Map, built on technology from Snap's acquisition of the social mapping startup Zenly, revolutionises real-time location sharing by letting users track friends, send messages, and explore the world through an interactive map. With over 350 million active users by Q4 2023 and India leading with 202.51 million Snapchat users, Snap Map has become a global phenomenon.
This opt-in feature allows users to customise their location-sharing settings, offering modes like "Ghost Mode" for privacy, sharing with all friends, or selectively with specific contacts. However, location updates occur only when the app is in use, adding a layer of complexity to privacy management.
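The sharing modes described above (Ghost Mode, all friends, selected contacts, updates only while the app is in use) can be captured in a small settings model. This is a hypothetical sketch for illustration: the field names and visibility logic are assumptions, not Snapchat's actual implementation or API.

```python
from dataclasses import dataclass, field

# Hypothetical model of Snap Map-style sharing modes as described in
# the text. Names and logic are illustrative assumptions only.

@dataclass
class LocationSettings:
    ghost_mode: bool = True             # opt-in default: share nothing
    share_with_all_friends: bool = False
    selected_friends: set = field(default_factory=set)
    app_in_use: bool = False            # location updates only while active

    def visible_to(self, friend):
        """Can this friend currently see the user's location?"""
        if self.ghost_mode or not self.app_in_use:
            return False
        return self.share_with_all_friends or friend in self.selected_friends

settings = LocationSettings(ghost_mode=False, app_in_use=True,
                            selected_friends={"alice"})
print(settings.visible_to("alice"))  # True
print(settings.visible_to("bob"))    # False
```

Modelling the feature this way makes the privacy surface explicit: visibility requires three conditions to hold at once (Ghost Mode off, app in use, and the viewer on an allowed list), so the complexity the article notes comes from users having to reason about all three simultaneously.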
While empowering users to connect and share, Snap Map’s location-sharing capabilities raise serious concerns. Unintentional sharing or misuse of this tool could expose users—especially teens—to risks like stalking or predatory behaviour. As Snap Map becomes increasingly popular, ensuring its safe use and addressing its potential for harm remains a critical challenge for users and regulators.
The Policy Vacuum: Protecting Children’s Data Privacy
Given the potential misuse of location-sharing features, it is important to evaluate the existing regulatory frameworks for protecting children's geolocation privacy. Geolocation features remain under-regulated in many jurisdictions, creating opportunities for misuse such as stalking or unauthorised surveillance. Multiple national and international jurisdictions are currently creating and implementing privacy laws. The most notable examples are the Children's Online Privacy Protection Act (COPPA) in the US, the General Data Protection Regulation (GDPR) in the EU, and India's Digital Personal Data Protection (DPDP) Act, all of which have made considerable progress on children's privacy and online safety. COPPA and the GDPR prioritise children's online safety through strict data protections, consent requirements, and limits on profiling. India's DPDP Act, 2023, prohibits behavioural tracking and targeted advertising aimed at children, enhancing their privacy. However, it lacks safeguards against geolocation tracking, leaving a critical gap in protecting children from risks posed by location-based features.
Balancing Innovation and Privacy: The Role of Social Media Platforms
Privacy is an essential right that must be safeguarded, and this is especially important for children, who are vulnerable to harms they cannot always foresee. Social media companies must uphold their responsibility to ensure their platforms do not become a breeding ground for offences against children. This requires robust parental control and consent mechanisms, so that parents are informed about their children's online presence and can opt out of services they do not consider safe for their children. Platforms must also be transparent about what data they collect and about their data-sharing and retention policies.
Policy Recommendations: Addressing the Gaps
Some of the recommendations for addressing the gaps in the safety of minors are as follows:
- Enhancing privacy and safety for minors by taking measures such as mandatory geolocation restrictions for underage users.
- Integrating clear consent guidelines for data protection for users.
- Collaboration between stakeholders such as government, social media platforms, and civil society is necessary to create awareness about location-sharing risks among parents and children.
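The first two recommendations above, geolocation restrictions for underage users and clear consent requirements, can be sketched as a simple policy gate. This is purely illustrative: the age threshold reflects the DPDP Act's treatment of under-18 users as children, but the function and field names are assumptions, not any platform's or law's actual mechanism.

```python
from datetime import date

# Illustrative policy gate for the recommendations above: block precise
# geolocation sharing for minors unless verifiable parental consent is
# recorded. Field names and logic are assumptions for illustration.

ADULT_AGE = 18  # the DPDP Act, 2023 treats users under 18 as children

def age(birth_date, today):
    """Completed years between birth_date and today."""
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))

def may_share_location(birth_date, parental_consent, today=None):
    """Adults may share location; minors need recorded parental consent."""
    today = today or date.today()
    if age(birth_date, today) >= ADULT_AGE:
        return True
    return parental_consent

print(may_share_location(date(2010, 5, 1), False, today=date(2025, 1, 1)))  # False
print(may_share_location(date(2010, 5, 1), True, today=date(2025, 1, 1)))   # True
```

A real implementation would of course depend on reliable age verification and verifiable consent records, which is precisely where the regulatory gaps discussed above lie.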
Conclusion
Safeguarding privacy, especially of children, with the introduction of real-time geolocation tools like Snap Map, is critical. While these features offer safety benefits, they also present the danger of misuse, potentially harming vulnerable teens. Policymakers must urgently update data protection laws and incorporate child-specific safeguards, particularly around geolocation tracking. Strengthening regulations and enhancing parental controls are essential to protect young users. However, this must be done without stifling technological innovation. A balanced approach is needed, where safety is prioritised, but innovation can still thrive. Through collaboration between governments, social media platforms, and civil society, we can create a digital environment that ensures safety and progress.
References
- https://indianexpress.com/article/technology/tech-news-technology/snapchat-family-center-real-time-location-sharing-travel-notifications-9669270/
- https://economictimes.indiatimes.com/tech/technology/snapchat-unveils-location-sharing-features-to-safeguard-teen-users/articleshow/115297065.cms?from=mdr
- https://www.thehindu.com/sci-tech/technology/snapchat-adds-more-location-safety-features-for-teens/article68871301.ece
- https://www.moneycontrol.com/technology/snapchat-expands-parental-control-with-location-tracking-to-make-it-easier-for-parents-to-track-their-kids-article-12868336.html
- https://www.statista.com/statistics/545967/snapchat-app-dau/

India is the world's largest democracy, and conducting free and fair elections is a mammoth task shouldered by the Election Commission of India. But technology is transforming every aspect of the electoral process in the digital age, with Artificial Intelligence (AI) being integrated into campaigns, voter engagement, and election monitoring. In the upcoming Bihar elections of 2025, all eyes are on how the use of AI will influence the state polls and the precedent it will set for future elections.
Opportunities: Harnessing AI for Better Elections
Breaking Language Barriers with AI:
AI is reshaping political outreach by making speeches accessible in multiple languages. At the Kashi Tamil Sangamam in 2024, the PM’s Hindi address was AI-dubbed in Tamil in real time. Since then, several speeches have been rolled out in eight languages, ensuring inclusivity and connecting with voters beyond Hindi-speaking regions more effectively.
Monitoring and Transparency
During Bihar’s Panchayat polls, the State Election Commission used Staqu’s JARVIS, an AI-powered system that connects with CCTV cameras to monitor EVM screens in real time. By reducing human error, JARVIS brought greater accuracy, speed, and trust to the counting process.
AI for Information Access on Public Service Delivery
NaMo AI is a multilingual chatbot that citizens can use to inquire about public services. The feature aims to make government schemes easy to understand and transparent, and to help voters connect directly with the government's policies.
Personalised Campaigning
AI is transforming how campaigns connect with voters. By analysing demographics and social media activity, AI builds detailed voter profiles. This helps craft messages that feel personal, whether on WhatsApp, a robocall, or a social media post, ensuring each group hears what matters most to them. This aims to make political outreach sharper and more effective.
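The profile-then-target flow described above can be made concrete with a toy message selector. The segments, profile keys and message templates below are invented for illustration; real campaign tools profile voters with far richer, and far more contested, data than this.

```python
# Toy sketch of segment-based message targeting as described in the
# text. Segments, profile keys and templates are hypothetical.

MESSAGE_TEMPLATES = {
    "first_time_voter": "Your first vote shapes the next five years.",
    "farmer": "Our manifesto commits to better crop insurance.",
    "urban_youth": "More jobs and faster internet in every city.",
}

def pick_message(profile):
    """Choose a template from hypothetical profile fields like
    'age', 'voted_before' and 'occupation'."""
    if profile.get("age", 99) < 21 and not profile.get("voted_before", True):
        return MESSAGE_TEMPLATES["first_time_voter"]
    if profile.get("occupation") == "farmer":
        return MESSAGE_TEMPLATES["farmer"]
    return MESSAGE_TEMPLATES["urban_youth"]  # hypothetical fallback segment

print(pick_message({"age": 19, "voted_before": False}))
print(pick_message({"age": 45, "occupation": "farmer"}))
```

Even this crude rule-based version shows why the approach is powerful and contentious at once: each group receives only the message calibrated to it, so no voter sees the full picture of what the campaign is saying.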
Challenges: The Dark Side of AI in Elections
Deepfakes and Disinformation
AI-powered deepfakes create hyper-realistic videos and audio that are nearly impossible to distinguish from the real thing. In elections, they can distort public perception, damage reputations, or fuel disharmony on social media. Mandatory disclaimers stating when content is AI-generated are needed to ensure transparency and protect voters from manipulative misinformation.
Data Privacy and Behavioural Manipulation
The Cambridge Analytica scandal, in which the firm harvested the data of millions of Facebook users without their consent to power its consulting services, revealed how personal data can be weaponised in politics. This data was allegedly used to “microtarget” users through ads that could influence their political opinions. Data mining of this nature can be supercharged by AI models, jeopardising user privacy, trust and safety, and casting a shadow on democratic processes worldwide.
Algorithmic Bias
AI systems are trained on datasets. If the datasets contain biases, AI-driven tools could unintentionally reinforce stereotypes or favor certain groups, leading to unfair outcomes in campaigning or voter engagement.
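The dataset-bias concern above can be made concrete by measuring group-wise rates in the training data before any model is fit. The sketch below computes the positive rate per group from a toy outreach log; the groups and numbers are invented for illustration, but the check itself is a standard first step in bias auditing.

```python
from collections import defaultdict

# Toy demonstration of the dataset-bias problem described above: if one
# group is over-represented among positive labels, a model trained on
# this data can inherit that skew. Groups and numbers are invented.

def positive_rate_by_group(records):
    """records: (group, label) pairs with label in {0, 1}.
    Returns each group's share of positive labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# (group, reached_by_campaign) pairs from a hypothetical outreach log
records = [("urban", 1), ("urban", 1), ("urban", 1), ("urban", 0),
           ("rural", 1), ("rural", 0), ("rural", 0), ("rural", 0)]

print(positive_rate_by_group(records))  # {'urban': 0.75, 'rural': 0.25}
```

A gap like the 0.75 vs 0.25 above does not prove unfairness by itself, but it flags that a tool trained on this log would learn to favour urban voters unless the imbalance is examined and corrected.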
The Road Ahead: Striking a Balance
The adoption of AI in elections opens a Pandora's box of uncertainties. On the one hand, it offers solutions for breaking language barriers and promoting inclusivity. On the other hand, it opens the door to manipulation and privacy violations.
To counter risks from deepfakes and synthetic content, political parties are now advised to clearly label AI-generated materials and add disclaimers in their campaign messaging. In Delhi, a nodal officer has even been appointed to monitor social media misuse, including the circulation of deepfake videos during elections. The Election Commission of India constantly has to keep up with trends and tactics used by political parties to ensure that elections remain free and fair.
Conclusion
From Bihar’s pioneering experiments with JARVIS in Panchayat elections, which gave vote counting greater accuracy and speed, India is witnessing both sides of this technological revolution. The challenge lies in ensuring that AI strengthens democracy rather than undermining it. Deepfakes, algorithmic bias, and data misuse remind us of the risks when technology oversteps. The real task is to strike the right balance: embracing AI in elections to enhance inclusivity and transparency, while safeguarding trust, privacy, and the integrity of democratic processes.
References
- https://timesofindia.indiatimes.com/india/how-ai-is-rewriting-the-rules-of-election-campaign-in-india/articleshow/120848499.cms#
- https://m.economictimes.com/news/elections/lok-sabha/india/2024-polls-stand-out-for-use-of-ai-to-bridge-language-barriers/articleshow/108737700.cms
- https://www.ndtv.com/india-news/namo-ai-on-namo-app-a-unique-chatbot-that-will-answer-everything-on-pm-modi-govt-schemes-achievements-5426028
- https://timesofindia.indiatimes.com/gadgets-news/staqu-deploys-jarvis-to-facilitate-automated-vote-counting-for-bihar-panchayat-polls/articleshow/87307475.cms
- https://www.drishtiias.com/daily-updates/daily-news-editorials/deepfakes-in-elections-challenges-and-mitigation
- https://internetpolicy.mit.edu/blog-2018-fb-cambridgeanalytica/
- https://www.deccanherald.com/elections/delhi/delhi-assembly-elections-2025-use-ai-transparently-eci-issues-guidelines-for-political-parties-3357978#