#FactCheck: Viral Video Claiming Pakistan Shot Down an Indian Air Force MiG-29 Fighter Jet
Executive Summary
Recent claims circulating on social media allege that an Indian Air Force MiG-29 fighter jet was shot down by Pakistani forces during "Operation Sindoor." These reports suggest the incident involved a jet crash attributed to hostile action. However, these assertions have been officially refuted. No credible evidence supports the existence of such an operation or the downing of an Indian aircraft as described. The Indian Air Force has not confirmed any such event, and the claim appears to be misinformation.

Claim
A social media rumor has been circulating, suggesting that an Indian Air Force MiG-29 fighter jet was shot down by the Pakistan Air Force during "Operation Sindoor." The claim is accompanied by images purported to show the wreckage of the aircraft.

Fact Check
The social media posts have falsely claimed that the Pakistan Air Force shot down an Indian Air Force MiG-29 during "Operation Sindoor." This claim has been confirmed to be untrue. The image being circulated is not related to any recent IAF operations and has previously been used in unrelated contexts. The content being shared is misleading and does not reflect any verified incident involving the Indian Air Force.

By extracting key frames from the video and performing reverse image searches, we traced the original footage to a post first published in 2024, which also appears in news articles from The Hindu and The Times of India.
A MiG-29 fighter jet of the Indian Air Force (IAF), engaged in a routine training mission, crashed near Barmer, Rajasthan, on the evening of Monday, September 2, 2024. The pilot safely ejected and escaped unscathed. The video therefore shows a year-old training accident, not a recent combat loss, confirming that the viral claim is false and an attempt to spread misinformation.
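The reverse-image workflow described above ultimately comes down to comparing frames from the viral video against previously published photographs. A common technique for this is perceptual hashing. Below is a minimal, pure-Python sketch of average hashing (aHash); the 8x8 grids stand in for frames already downscaled to grayscale (a library such as Pillow or OpenCV would do the resizing in practice), and the distance threshold is an illustrative assumption, not a standard value.

```python
# Minimal average-hash (aHash) sketch for matching a video frame
# against archived images. Images are represented as 8x8 grayscale
# grids (lists of lists of 0-255 values).

def average_hash(pixels):
    """Return a 64-bit perceptual hash of an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: 1 if brighter than the average, else 0.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def likely_same_image(frame, archived, threshold=10):
    """Hashes within `threshold` bits usually indicate the same picture,
    even after recompression or small brightness changes."""
    return hamming_distance(average_hash(frame), average_hash(archived)) <= threshold
```

Because the hash encodes only whether each region is brighter than average, a uniformly brightened or recompressed copy of the same frame produces a nearly identical hash, which is what makes this robust enough for tracing reposted footage.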

Conclusion
The claims regarding the downing of an Indian Air Force MiG-29 during "Operation Sindoor" are unfounded and lack any credible verification. The image being circulated is outdated and unrelated to current IAF operations. There has been no official confirmation of such an incident, and the narrative appears to be misleading. People are advised to rely on verified sources for accurate information regarding defence matters.
- Claim: Pakistan Shot Down an Indian Fighter Jet, MiG-29
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all led to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related misinformation. Programs that teach people how to use social media responsibly and how to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings is called "gamification," and it could be a new and engaging way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through different regional languages and diverse cultural contexts. AI voice interfaces and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information in India, interactive quizzes and chatbot-powered games can educate users directly within the app they use most. A game with a WhatsApp-like interface, where players must decide in realistic scenarios whether to ignore, fact-check, or forward messages that are going viral, could be especially effective.
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks involving spotting fake news raise awareness of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile-data incentives, which partnerships with telecom providers could make easier to deliver. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) say that students are more likely to join and remember what they learn when there are competitive and interactive parts to the learning. Misinformation games can be used in media studies classes at schools and universities by using models to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in false games. AI-powered misinformation detection bots could lead participants through situations tailored to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news is spread differently depending on the language and area.
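The adaptive-difficulty idea above can be illustrated with a deliberately simple sketch: score a headline for obvious misinformation cues, then serve each player the item whose subtlety matches their current skill. The cue words, weights, and matching rule here are all invented for illustration; a real system would use a trained NLP model, not keyword counting.

```python
# Toy sketch of adaptive misinformation challenges. A headline's
# "suspicion score" counts obvious cues; novices are served items with
# high scores (obvious fakes), experts get low-score (subtle) ones.

SENSATIONAL = {"shocking", "secret", "exposed", "miracle", "banned"}

def suspicion_score(headline):
    """Crude cue count: sensational words, ALL-CAPS words, '!' marks."""
    words = headline.split()
    score = sum(1 for w in words if w.strip("!?.").lower() in SENSATIONAL)
    score += sum(1 for w in words if len(w) > 3 and w.isupper())
    score += headline.count("!")
    return score

def pick_challenge(headlines, target_score):
    """Serve the headline whose cue score is closest to the target.
    A novice's target is high (obvious cues); it drops as skill grows."""
    return min(headlines, key=lambda h: abs(target_score - suspicion_score(h)))
```

The design point is the feedback loop: each correct or incorrect answer adjusts the player's target score, so the game keeps players at the edge of their ability, which is the adaptive behaviour the NLP-driven systems described above aim for.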
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of games that can help fight misinformation. By making media literacy fun and interesting, India can help millions of people, especially the young, think critically and combat the spread of false information. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time misinformation cases and give immediate feedback, helping players learn more effectively.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with challenges that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviours, while accounting for cultural and socio-economic contexts.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Introduction
In today’s hyper-connected world, information spreads faster than ever before. But while much attention is focused on public platforms like Facebook and Twitter, a different challenge lurks in the shadows: misinformation circulating on encrypted and closed-network platforms such as WhatsApp and Telegram. Unlike open platforms where harmful content can be flagged in public, private groups operate behind a digital curtain. Here, falsehoods often spread unchecked, gaining legitimacy because they are shared by trusted contacts. This makes encrypted platforms a double-edged sword: they are essential for privacy and free expression, yet uniquely vulnerable to misuse.
As Prime Minister Narendra Modi rightly reminded,
“Think 10 times before forwarding anything,” warning that even a “single fake news has the capability to snowball into a matter of national concern.”
The Moderation Challenge with End-to-End Encryption
Encrypted messaging platforms were built to protect personal communication. Yet, the same end-to-end encryption that shields users’ privacy also creates a blind spot for moderation. Authorities, researchers, and even the platforms themselves cannot view content circulating in private groups, making fact-checking nearly impossible.
Trust within closed groups makes the problem worse. When a message comes from family, friends, or community leaders, people tend to believe it without questioning and quickly pass it along. Features like large group chats, broadcast lists, and “forward to many” options further speed up its spread. Unlike open networks, there is no public scrutiny, no visible counter-narrative, and no opportunity for timely correction.
During the COVID-19 pandemic, false claims about vaccines spread widely through WhatsApp groups, undermining public health campaigns. Even more alarming, WhatsApp rumors about child kidnappers and cow meat in India triggered mob lynchings, leading to the tragic loss of life.
Encrypted platforms, therefore, represent a unique challenge: they are designed to protect privacy, but, unintentionally, they also protect the spread of dangerous misinformation.
Approaches to Curbing Misinformation on End-to-End Platforms
- Regulatory: Governments worldwide are exploring ways to access encrypted data on messaging platforms, creating tensions between the right to user privacy and crime prevention. Approaches like traceability requirements on WhatsApp, data-sharing mandates for platforms in serious cases, and stronger obligations to act against harmful viral content are also being considered.
- Technological Interventions: Platforms like WhatsApp have introduced features such as “forwarded many times” labels and limits on mass forwarding. These tools can be expanded further by introducing AI-driven link-checking and warnings for suspicious content.
- Community-Based Interventions: Ultimately, no regulation or technology can succeed without public awareness. People need to be inoculated against misinformation through pre-bunking efforts and digital literacy campaigns, and taught to use fact-checking websites and tools.
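The technological interventions listed above can be sketched in a few lines of client-side logic: because the server cannot read encrypted content, the forward counter and label travel with the message itself. The specific thresholds below (five chats per forward, a label after five hops) are illustrative assumptions, not WhatsApp's actual values.

```python
# Sketch of client-side "forwarded many times" labelling and forward
# limits on an end-to-end encrypted platform. The forward count is
# message metadata, so no content inspection is needed.

from dataclasses import dataclass

FORWARD_LIMIT = 5        # max chats a single forward action may target
FREQUENT_THRESHOLD = 5   # hop count at which the warning label appears

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many hops this message has travelled

    @property
    def label(self):
        if self.forward_count >= FREQUENT_THRESHOLD:
            return "Forwarded many times"
        return "Forwarded" if self.forward_count > 0 else ""

def forward(message, chats):
    """Forward to at most FORWARD_LIMIT chats; returns the new copies,
    each carrying an incremented hop count."""
    if len(chats) > FORWARD_LIMIT:
        raise ValueError(f"Can only forward to {FORWARD_LIMIT} chats at once")
    return [Message(message.text, message.forward_count + 1) for _ in chats]
```

The design choice worth noting is friction rather than censorship: the message is never blocked or read, but each extra hop is made slightly slower and more visible, which is compatible with end-to-end encryption.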
Best Practices for Netizens
Experts recommend simple yet powerful habits that every user can adopt to protect themselves and others. By adopting these, ordinary users can become the first line of defence against misinformation in their own communities:
- Cross-Check Before Forwarding: Verify claims from trusted platforms & official sources.
- Beware of Sensational Content: Headlines that sound too shocking or dramatic probably need checking. Consult multiple sources for a piece of news. If only one platform or channel is carrying sensational news, it is likely to be clickbait or outright false.
- Stick to Trusted News Sources: Verify news through national newspapers and expert commentary. Remember, not everything on the internet/television is true.
- Look Out for Manipulated Media: Now, with AI-generated deepfakes, it becomes more difficult to tell the difference between original and manipulated media. Check for edited images, cropped videos, or voice messages without source information. Always cross-verify any media received.
- Report Harmful Content: Report misinformation to the platform it is being circulated on and PIB’s Fact Check Unit.
Conclusion
In closed, unmonitored groups, platforms like WhatsApp and Telegram often become safe havens where people trust and forward messages from friends and family without question. Once misinformation takes root, it becomes extremely difficult to challenge or correct, and over time, such actions can snowball into serious social, economic and national concerns.
Preventing this is a matter of shared responsibility. Governments can frame balanced regulations, but individuals must also take initiative: pause, think, and verify before sharing. Ultimately, the right to privacy must be upheld, but with reasonable safeguards to ensure it is not misused at the cost of societal trust and safety.
References
- India WhatsApp ‘child kidnap’ rumours claim two more victims (BBC)
- The people trying to fight fake news in India (BBC)
- Press Information Bureau – PIB Fact Check
- Brookings Institution – Encryption and Misinformation Report (2021)
- Curtis, T. L., Touzel, M. P., Garneau, W., Gruaz, M., Pinder, M., Wang, L. W., Krishna, S., Cohen, L., Godbout, J.-F., Rabbany, R., & Pelrine, K. (2024). Veracity: An Open-Source AI Fact-Checking System. arXiv.
- NDTV – PM Modi cautions against fake news (2022)
- Times of India – Govt may insist on WhatsApp traceability (2019)
- Medianama – Telegram refused to share ISIS channel data (2019)

Introduction
Iran stands as a nation poised at the threshold of a transformative era. The Islamic Republic, a land of ancient civilisation grappling with the exigencies of the 21st century, is now making strides in the emerging field of artificial intelligence (AI). This is not merely an adoption of new tools; it is a strategic embrace, a calculated leap into the digital unknown, where the potential for economic growth and security enhancement resonates with the promise of a redefined future.
Embarking on this technological odyssey, Iranian President Ebrahim Raisi, in a conclave with the nation’s virtual business activists, delineated the ‘big steps’ being undertaken in the realm of AI. The gathering, as reported by the pro-government Tasnim News, was not a simple exchange of polite remarks but a profound discourse that offered an incisive overview of the burgeoning digital economy and the strides Iran is making in the AI landscape. The conversation deeply revolved around the current ecosystem of technology and innovation within Iran, delving into the burgeoning startup culture and the commendable drive within its youth populace to propel the nation to the forefront of technology.
Iranian AI Integration
Military Implications
The discourse ranged from the current technological infrastructure to the broader implications for the security and defense of the region. The Iranian polity, with its rich history that seamlessly blends with aspirations for the future, is acutely aware that the implications of AI reach far beyond mere economic growth. They extend into the very fibres of military might and the structure of national security. The investment in cyber capabilities in Iran is well-documented, a display of shrewdness and pragmatism. And the integration of AI technologies is the next logical step in an ever-evolving defense architecture. Brigadier General Alireza Sabahifard, Commander of the Iranian Army Air Defense Force, has underscored the pivotal role of AI in modern warfare. He identifies the ongoing adoption of AI technologies as a strategic imperative, a top priority fundamentally designed to elevate the air defense capabilities in Iran to meet 21st-century threats.
Economic Implications
Yet the Iranian pursuit of AI is not confined to bolstering military prowess; it also aims to nurture economic opportunity. President Raisi’s rhetoric touches upon economic rejuvenation, job creation, and the proliferation of financial and legal support mechanisms, all blended into a cohesive vision of a suitable environment for the private sector in the AI domain. The ambition is grand and strikingly clear: a nation committed to training several thousand individuals in the digital economy sector, signaling a deep-rooted commitment to cultivating a healthy environment for AI-driven innovation.
The Iranian leader’s vision extends beyond the simple creation of infrastructure. It extends to the fostering of a healthy, competitive, and peaceful social milieu where domestic and international markets are within easy reach, promoting the prosperity of the digital economy and its activists. Such a vision of technological symbiosis, in many Western democracies, would be labelled as audaciously progressive. In Iran, however, withdrawing a major chunk of economic investments from the country's security state adds layers of complexity and nuance to this transformative narrative.
Cultural Integration
Still, Iran’s ambitious AI journey unfolds with a recognition of its cultural underpinnings and societal structure. The nexus between the private sector, with its cyber-technocratic visionaries, and the regime, with its omnipresent ties to the Islamic Revolutionary Guard Corps, is a tightrope that requires unparalleled poise and vigilance.
Moreover, in the holy city of Qom, a hub of intellectual fervour and the domicile of half of Iran's 200,000 Shia clerics, there burgeons a captivating interest in the possible synergies between AI and theological study. The clerical establishment, ensconced in a stronghold of religious scholarship, perceives AI not as a problem but as a potential solution, a harbinger of progress that could ally with tradition. It sees in AI the potential for parsing Islamic texts with newfound precision, thereby allowing religious rulings, or fatwas, to resonate with an ever-changing Iranian society. This integration of technology is a testament to the dynamic interplay between tradition and modernity.
Yet the integration of AI into the venerable traditions of societies such as Iran's is threaded with challenges. Herein lies the paradox: even as AI is poised to bolster religious study, the threat of cultural dissolution remains present. AI, if not judiciously designed with local values and ethics in mind, could inadvertently propagate an ideology at odds with local customs, beliefs, and the cornerstone principles of a society.
Natural Resources
Similarly, Iran's strategic foray into AI extends into its sovereign dominion—the charge of its natural resources. As Mehr News Agency reports, the National Iranian Oil Company (NIOC) is on the cusp of pioneering a joint venture with international tech juggernauts, chiefly Chinese companies, to inject the lifeblood of AI into the heart of its oil and gas production processes. This grand undertaking is nothing short of a digital renaissance aimed at achieving 'great reforms’ and driving a drastic 20% improvement in efficiency. AI’s algorithmic potency, unleashed in the hydrocarbon fields, promises to streamline expenses, enhance efficacy, and maximise production outputs, thereby bolstering Iran's economic bulwark.
The AI way Forward
As we delve further into Iran's sophisticated AI strategy, we observe an approach that is both vibrant and multi-dimensional. From military development to religious tutelage, from the diligent charge of the environment to the pursuit of sustainable economic development, Iran's AI ventures are emblematic of the broader global discourse. They mark a vivid intersection of AI governance, security, and the future of technological enterprise, highlighting the evolution of technological adoption and its societal, ethical, and geopolitical repercussions.
Conclusion
The multifaceted nature of Iran's AI pursuits encapsulates a spectrum of strategic imperatives, blending defense modernisation and religious scholarship with the imperatives of resource allocation. It reflects a nuanced approach to the adoption and integration of technology, adjudicating between the venerable pillars of traditional values and the inexorable forces of modernisation. As Iran continues to delineate and traverse its path through the burgeoning landscape of AI, global stakeholders watch with renewed interest and measured apprehension. Mindful of the intricate geopolitical implications and the transformative potential inherent in Iran's burgeoning AI endeavours, the global community watches, waits, and wonders at what may emerge from this ancient civilisation’s bold, resolute strides into the future.
References
- https://www.jpost.com/middle-east/article-792391
- https://www.ft.com/content/9c1c3fd3-4aea-40ab-977b-24fe5527300c
- https://www.foxnews.com/world/iran-looks-ai-weather-western-sanctions-help-military-fight-cheap