#FactCheck: Viral Video Claiming IAF Air Chief Marshal Acknowledged Loss of Jets Found Manipulated
Executive Summary:
A video circulating on social media falsely claims to show Indian Air Chief Marshal AP Singh admitting that India lost six jets and a Heron drone during Operation Sindoor in May 2025. Our analysis found that the footage was digitally manipulated by inserting an AI-generated voice clone of Air Chief Marshal Singh into his recent speech, which was live-streamed on August 9, 2025.
Claim:
A viral video (archived video) (another link) shared by an X user with the caption “Breaking: Finally Indian Airforce Chief admits India did lose 6 Jets and one Heron UAV during May 7th Air engagements”, purportedly showing the Air Chief Marshal admitting the aforementioned losses during Operation Sindoor.

Fact Check:
A reverse image search on key frames from the video led us to a clip posted by ANI's official X handle. After watching the full clip, we found no mention of the alleged admission.
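Fact-checkers often automate the first step of this workflow: extracting key frames from a video and submitting them to a reverse image search. The sketch below illustrates only the key-frame selection logic, using synthetic pixel lists instead of real video decoding (a real pipeline would decode frames with a library such as OpenCV or FFmpeg); the function names and threshold are illustrative assumptions, not part of any specific tool.

```python
# Minimal sketch: pick "key frames" from a sequence by flagging frames that
# differ sharply from the previous one. Frames here are simple lists of
# grayscale pixel values so the selection logic stays self-contained.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def select_key_frames(frames, threshold=30.0):
    """Return indices of frames that differ from their predecessor by more
    than `threshold`; the first frame is always treated as a key frame."""
    if not frames:
        return []
    keys = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            keys.append(i)
    return keys

# Synthetic example: three near-identical dark frames around one scene cut.
frames = [
    [10, 12, 11, 10],
    [11, 12, 10, 10],
    [200, 198, 205, 201],  # abrupt change -> key frame
    [199, 200, 204, 202],
]
print(select_key_frames(frames))  # [0, 2]
```

The selected frames (here, indices 0 and 2) would then be uploaded to a reverse image search engine to trace the footage to its original source.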

On further research, we found an extended version of the video on ANI's official YouTube channel, published on 9 August 2025. At the 16th Air Chief Marshal L.M. Katre Memorial Lecture in Marathahalli, Bengaluru, Air Chief Marshal AP Singh made no mention of the loss of six jets or a drone in relation to the conflict with Pakistan. The discrepancies observed in the viral clip suggest that portions of the audio may have been digitally manipulated.

The audio in the viral video, particularly the segment at the 29:05 minute mark alleging the loss of six Indian jets, appeared to be manipulated and displayed noticeable inconsistencies in tone and clarity.
Conclusion:
The viral video claiming that Air Chief Marshal AP Singh admitted to the loss of six jets and a Heron UAV during Operation Sindoor is misleading. A reverse image search traced the footage to ANI's coverage, in which no such remarks were made. Further, an extended version on ANI's official YouTube channel confirmed that no reference was made to the alleged losses during the 16th Air Chief Marshal L.M. Katre Memorial Lecture. Additionally, the viral video's audio, particularly around the 29:05 mark, showed signs of manipulation, with noticeable inconsistencies in tone and clarity.
- Claim: Viral Video Claiming IAF Chief Acknowledged Loss of Jets Found Manipulated
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to a surge of misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis/disinformation. Programs that teach people to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as "gamification," could be a fresh and engaging way to address this challenge. Gamification engages people by making them active players rather than passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe environment before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through these languages across different regions and diverse cultural contexts. AI-powered voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, in which players face realistic decisions about whether to ignore, fact-check, or forward viral messages, could be particularly effective in India.
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers tasked with verifying viral stories, using real-life tools such as reverse image searches and reliable fact-checking websites. Research shows that interactive tasks for identifying fake news increase long-term awareness of it (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives. Partnerships with telecom providers could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
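To make the quiz mechanic described above concrete, here is a minimal sketch of a Factitious-style scoring round, where the player labels each headline real or fake. The headlines, the answer key, and the function name are invented placeholders for illustration, not the design of any existing game.

```python
# Illustrative sketch of a real-vs-fake headline quiz: the player labels each
# headline (True = "fake") and earns a point per correct call. Headlines and
# answers below are invented placeholders, not real news items.

QUESTIONS = [
    {"headline": "City council approves new bus routes", "is_fake": False},
    {"headline": "Scientists confirm chocolate cures all illness", "is_fake": True},
]

def score_round(questions, answers):
    """Compare the player's answers with the answer key and return
    (correct_count, total_questions)."""
    correct = sum(1 for q, a in zip(questions, answers) if q["is_fake"] == a)
    return correct, len(questions)

# A player who calls the first headline real and the second fake scores 2/2.
print(score_round(QUESTIONS, [False, True]))  # (2, 2)
```

A production version would add localisation, explanations after each answer (the "inoculation" step), and streak-based rewards, but the core loop is this simple.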
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation-awareness events, and educational misinformation tournaments are all examples of games that can help fight misinformation. By making media literacy fun and engaging, India can help millions, especially young people, think critically and combat the spread of false information. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, helping learners improve faster.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it comes with challenges that must be considered:
- Ethical Concerns: Games that imitate how fake news spreads must ensure players do not inadvertently learn how to spread false information themselves.
- Scalability: Although worldwide misinformation initiatives exist, developing and scaling localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research approaches are needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviours, keeping cultural and socio-economic contexts in view.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Introduction
A policy, no matter how artfully conceived, is like a timeless idiom: its truth self-evident, its purpose undeniable, standing in silent witness before those it vows to protect, yet trapped in the stillness of inaction, where every moment of delay erodes the very justice it was meant to serve. Such is the case of the Digital Personal Data Protection Act, 2023, which promises to resolve India's data protection concerns through a framework on par with the GDPR and global best practices. While debates on its substantive efficacy are inevitable, its execution has emerged as a site of acute contention. The roll-out and related decision-making have been making headlines since late July, with industry stakeholders, media, and independent analysts questioning the government over "slow policy execution", "centralisation of power", and "arbitrary amendments". The Act is now entrenched in a dilemma of competing interests.
The amendment to the Right to Information Act (RTI), 2005, effected through Section 44(3) of the DPDP Act, has become a focal point of debate. Some view it as weakening the hard-won transparency architecture of Indian democracy by replacing the “public interest override” in Section 8(1)(j) of the RTI Act with an absolute exemption for personal information.
The Lag Ledger: Tracking the Delays in DPDP Enforcement
As per a news report of July 28, 2025, the Parliamentary Standing Committee on Information and Communications Technology has expressed concern over the delayed implementation and has urged the Ministry of Electronics and Information Technology (MeitY) to ensure that data privacy is adequately protected in the nation. In a report submitted to the Lok Sabha on July 24, the committee reviewed the government's response to its previous recommendations and noted that MeitY had held only nine consultations and twenty awareness workshops on the Draft DPDP Rules, 2025, along with four brainstorming sessions with academic specialists to examine research and development needs. The ministry acknowledges that this is a specialised field that urgently needs industry involvement. Another news report, dated 30 July 2025, covered a day-long consultation in which representatives from civil society groups, campaigns, social movements, senior lawyers, retired judges, journalists, and lawmakers discussed the contentious provisions and chilling effects of the Draft Rules notified in January this year. The organisers said in a press statement that the DPDP Act may negatively impact the freedom of the press and people's right to information, as well as the activists, journalists, attorneys, political parties, groups and organisations “who collect, analyse, and disseminate critical information as they become ‘data fiduciaries’ under the law.”
The DPDP Act has thus been caught up in an uncomfortable paradox: praised as a significant legislative achievement for India’s digital future, but caught in a transitional phase between enactment and enforcement, where every day not only postpones protection but also feeds worries about the dwindling amount of room for accountability and transparency.
The Muzzling Effect: Diluting Whistleblower Protections
The DPDP framework raises a number of subtle but significant issues, one of which is the possibility that it could weaken safeguards for whistleblowers. Critics argue that, by expanding the definition of “personal data” and placing strict compliance requirements on “data fiduciaries”, the Act risks ensnaring journalists, activists, and public interest actors who handle sensitive material while exposing wrongdoing. In the absence of clear exclusions or robust public-interest protections, one of the most important checks on state overreach may be silenced if those who speak truth to power face legal retaliation.
Noted lawyer Prashant Bhushan has criticised the law for failing to protect whistleblowers, warning that “If someone exposes corruption and names officials, they could now be prosecuted for violating the DPDP Act.”
Consent Management under the DPDP Act
In June 2025, the National e-Governance Division (NeGD) under MeitY released a Business Requirement Document (BRD) for developing consent management systems under the DPDP Act, 2023. The document introduces the concept of a “Consent Manager”, which acts as a single point of contact between Data Principals and Data Fiduciaries. This idea is fundamental to the Act and is now being operationalised through MeitY's “Code for Consent: The DPDP Innovation Challenge.” By selecting six distinct entities, including Jio Platforms, IDfy, and Zoop, the government has established a collaborative ecosystem to build consent management systems (CMS) that can serve as a single, standardised interface between Data Principals and Data Fiduciaries. If implemented precisely and transparently, such a framework could give people meaningful control over their personal data, lessen consent fatigue, and bring India's consent architecture closer to international standards.
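As a rough illustration of what a consent manager's core record-keeping might involve, the sketch below models a minimal consent ledger mediating between Data Principals and Data Fiduciaries. The class, method, and field names are hypothetical, chosen for illustration only; they are not drawn from the NeGD Business Requirement Document or from any selected entity's design.

```python
# Hypothetical sketch of a consent ledger: one record of which Data Principal
# has active consent for which purpose with which Data Fiduciary. A real CMS
# would also need authentication, audit trails, and revocation notifications.

from dataclasses import dataclass, field

@dataclass
class ConsentManager:
    # (principal_id, fiduciary_id, purpose) -> True while consent is active
    ledger: dict = field(default_factory=dict)

    def grant(self, principal, fiduciary, purpose):
        """Record the principal's consent for this fiduciary and purpose."""
        self.ledger[(principal, fiduciary, purpose)] = True

    def withdraw(self, principal, fiduciary, purpose):
        """Record withdrawal; processing must stop from this point on."""
        self.ledger[(principal, fiduciary, purpose)] = False

    def is_allowed(self, principal, fiduciary, purpose):
        """A fiduciary may process data only while consent is recorded."""
        return self.ledger.get((principal, fiduciary, purpose), False)

cm = ConsentManager()
cm.grant("user-1", "acme-bank", "marketing")
print(cm.is_allowed("user-1", "acme-bank", "marketing"))  # True
cm.withdraw("user-1", "acme-bank", "marketing")
print(cm.is_allowed("user-1", "acme-bank", "marketing"))  # False
```

Even this toy model makes the governance stakes visible: whoever operates the ledger decides, in practice, when processing is "allowed", which is why independent oversight of consent managers matters.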
The importance of this development is beyond debate; however, several concerns associated with it must be considered. A centralised consent management system, however effective, may become a single point of failure, vulnerable to both political overreach and cybersecurity flaws. The selection of large corporate entities like Jio as key innovators raises concerns about the concentration of power over how consent is framed, sought, and recorded. Critics contend that organisations which generate revenue from user data should not be entrusted with designing the gatekeeping systems. Furthermore, in the absence of strong safeguards, transparency mechanisms, and independent oversight, the CMS could create opaque channels for data access, compromising user autonomy and whistleblower protections.
Conclusion
Despite being hailed as a turning point in India's digital governance, the DPDP Act is still stuck in a delayed and uneven transition from promise to reality. Its goals are indisputable, but so are the conundrums it poses for accountability, openness, and civil liberties. Every delay deepens public mistrust, and every unresolved safeguard widens the gap between promise and protection. The true test of a policy intended to safeguard the digital rights of millions lies not in how it was drafted, but in the integrity, pace, and transparency with which it is implemented. In the digital age, the true cost of delay is measured not in time, but in trust. CyberPeace calls for transparent, inclusive, and timely execution that balances innovation with the protection of digital rights.
References
- https://www.storyboard18.com/how-it-works/parliamentary-committee-raises-concern-with-meity-over-dpdp-act-implementation-lag-77105.htm
- https://thewire.in/law/excessive-centralisation-of-power-lawyers-activists-journalists-mps-express-fear-on-dpdp-act
- https://www.medianama.com/2025/08/223-jio-idfy-meity-consent-management-systems-dpdpa/
- https://www.downtoearth.org.in/governance/centre-refuses-to-amend-dpdp-act-to-protect-journalists-whistleblowers-and-rti-activists

Introduction
In a setback to the Centre, the Bombay High Court on Friday 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Chronological Overview
- On 6th April 2023, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules, 2023). These rules introduced new provisions to establish a fact-checking unit with respect to “any business of the central government”. The amendment was made in exercise of the powers conferred by Section 87 of the Information Technology Act, 2000 (IT Act).
- On 20 March 2024, the Central Government notified the Press Information Bureau (PIB) as FCU under rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023 (IT Amendment Rules 2023).
- The next day, on 21st March 2024, the Supreme Court stayed the Centre's decision to notify the PIB as FCU, considering the pendency of the proceedings before the High Court of Judicature at Bombay. A detailed analysis by CyberPeace of the Supreme Court's stay decision can be accessed here.
- In the latest development, the Bombay High Court on 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Brief Overview of Bombay High Court decision dated 20th September 2024
Justice AS Chandurkar was appointed as the third judge after a split verdict in January 2024 by a division bench consisting of Justices Gautam Patel and Neela Gokhale. As the tie-breaker judge, Justice Chandurkar delivered the decision striking down the provisions for setting up a Fact Check Unit under the IT Amendment Rules, 2023. In doing so, he opined that there was no rationale for determining whether information related to the business of the Central Government was fake, false, or misleading when in digital form while not doing the same for such information in print. It was also contended that there is no justification for introducing an FCU only in relation to the business of the Central Government. Rule 3(1)(b)(v) has a serious chilling effect on the exercise of the freedom of speech and expression under Article 19(1)(a) of the Constitution, since the communication of the FCU's view will result in the intermediary simply pulling down the content for fear of consequences or of losing the safe harbour protection under the IT Act.
Justice Chandurkar held that the expressions ‘fake, false or misleading’ are ‘vague and overbroad’ and that the ‘test of proportionality’ is not satisfied. Rule 3(1)(b)(v) was held violative of Articles 14, 19(1)(a) and 19(1)(g) of the Constitution and “ultra vires”, or beyond the powers of, the IT Act.
Role of Expert Organisations in Curbing Mis/Disinformation and Fake News
In light of these recent developments and the rising incidence of Mis/Disinformation and fake news, it is vital that we stand together in the fight against these challenges. Action against Mis/Disinformation and fake news should be strengthened through collective efforts; expert organisations like the CyberPeace Foundation play a key role in enabling and encouraging netizens to exercise caution and rely on authenticated sources, rather than relying solely on a government FCU to block content.
Mis/Disinformation and fake news should be identified, countered, and stopped by netizens at the very first stage of their spread. The Bombay High Court's decision to strike down the provision for setting up the FCU suggests that, in the judiciary's view, the government did not adequately justify its intention to address misinformation relating solely to its own business/operations.
It is high time to make collective efforts against Mis/Disinformation and fake news and to support expert organisations actively engaged in proactive measures and campaigns targeting these challenges, particularly in the online information landscape. CyberPeace actively publishes fact-checking reports and insights on prebunking and debunking, conducts expert sessions, and takes various key steps aimed at empowering netizens to build cognitive defences: to recognise suspect information, disregard misleading claims, and prevent further spread, helping preserve a trustworthy online information landscape.
References:
- https://www.scconline.com/blog/post/2024/09/20/bombay-high-court-it-rules-amendment-2023-fact-check-units-article14-article19-legal-news/#:~:text=Bombay%20High%20Court%3A%20A%20case,grounds%20that%20it%20violated%20constitutional
- https://indianexpress.com/article/cities/mumbai/bombay-hc-strikes-down-it-act-amendment-fact-check-unit-9579044/
- https://www.cyberpeace.org/resources/blogs/supreme-court-stay-on-centres-notification-of-pibs-fact-check-unit-under-it-amendment-rules-2023