#FactCheck: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
Executive Summary:
A viral video (archive link) claims General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Air Force jets and 250 soldiers during clashes with Pakistan. Verification revealed the footage is from an IIT Madras speech, with no such statement made. AI detection confirmed parts of the audio were artificially generated.
Claim:
The claim in question is that General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during recent clashes with Pakistan.

Fact Check:
Upon conducting a reverse image search on key frames from the video, it was found that the original footage is from IIT Madras, where the Chief of Army Staff (COAS) was delivering a speech. The video is available on the official YouTube channel of ADGPI – Indian Army, published on 9 August 2025, with the description:
“Watch COAS address the faculty and students on ‘Operation Sindoor – A New Chapter in India’s Fight Against Terrorism,’ highlighting it as a calibrated, intelligence-led operation reflecting a doctrinal shift. On the occasion, he also focused on the major strides made in technology absorption and capability development by the Indian Army, while urging young minds to strive for excellence in their future endeavours.”
A review of the full speech revealed no reference to the destruction of six jets or the loss of 250 Army personnel. This indicates that the circulating claim is not supported by the original source and may contribute to the spread of misinformation.
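For readers who want to reproduce the keyframe step of such a check, the sketch below shows one way to sample frames from a local copy of a clip so they can be uploaded to a reverse image search service such as Google Lens. It assumes the `opencv-python` package is installed; the file name is a placeholder, not the actual viral video.

```python
# Minimal sketch: sample roughly one frame per second from a video so the
# frames can be fed to a reverse image search. Assumes opencv-python is
# installed; "viral_clip.mp4" is a hypothetical local file name.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
frame_index, saved = 0, 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % int(fps) == 0:          # roughly one frame per second
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"saved {saved} keyframes for reverse image search")
```

Sampling about one frame per second is usually enough to capture each distinctive scene without producing hundreds of near-identical images.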

Further analysis using AI-detection tools such as Hive Moderation found that portions of the audio are AI-generated and have been spliced in between genuine segments of the speech.

Conclusion:
The claim is baseless. The video is a manipulated creation that combines genuine footage of General Dwivedi’s IIT Madras address with AI-generated audio to fabricate a false narrative. No credible source corroborates the alleged military losses.
- Claim: General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Air Force jets and 250 soldiers during clashes with Pakistan
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally altered post designed to mislead viewers.

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital fabrication. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The digital landscape of the nation has reached a critical point in its evolution. The rapid adoption of technologies such as cloud computing, mobile payment systems, artificial intelligence, and smart infrastructure has deeply integrated digital systems into governance, commercial activity, and everyday life. As dependence on these systems grows, a wide range of complex, multi-layered, and closely interconnected cyber threats has emerged. By 2026, cyber security threats directed at India are expected to include a growing number of targeted, well-organised, and strategic cyber attacks. These attacks are likely to exploit the trust placed in technology, institutions, and automation, as well as the fast pace of technological change.
1. Social Engineering 2.0: Hyper-Personalised AI Phishing & Mobile Banking Malware
Cybercriminals have moved from generalised methods to hyper-targeted attacks built on AI-driven psychological manipulation. The cybercrimes expected in 2026 will increasingly involve AI-based analysis of social media profiles, breached datasets, and digital tracking footprints to generate hyper-targeted phishing emails at scale.
These phishing emails can impersonate banks, employers, and even family members, reproducing the regionally and culturally relevant tone, language, and context the genuine sender would use.
Through malicious applications disguised as legitimate service apps, cybercriminals can intercept One-Time Passwords (OTPs), hijack user sessions, and drain user accounts in a matter of minutes.
These attacks succeed not only because of their technical sophistication but because they exploit human trust at scale, giving attackers near-limitless reach into victims' finances through their computers and mobile devices.
2. Cloud and Supply Chain Vulnerabilities
As Indian organisations increasingly migrate to cloud infrastructure, cloud misconfigurations are emerging as a major cybersecurity risk. Weak identity controls, exposed storage, and improper access management can allow attackers to bypass traditional network defences. Alongside this, supply chain attacks are expected to intensify in 2026.
In supply chain attacks, cybercriminals compromise a trusted software vendor or service provider to infiltrate multiple downstream organisations. Even entities with strong internal security can be affected through third-party dependencies. For India’s startup ecosystem, government digital platforms, and IT service providers, this presents a systemic risk. Strengthening vendor risk management and visibility across digital supply chains will be essential.
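As an illustration of the kind of basic hygiene check this implies on the cloud side, the sketch below lists S3 buckets whose Public Access Block is missing or incomplete. It assumes the AWS SDK for Python (`boto3`) with configured credentials, and it checks only this one common misconfiguration rather than overall cloud posture.

```python
# Minimal sketch: flag S3 buckets that do not have a full Public Access
# Block configured. Assumes boto3 is installed and AWS credentials are set
# up; this covers one common misconfiguration, not cloud security overall.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError as err:
        # No configuration at all means public access is not being blocked.
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
        fully_blocked = False
    status = "OK" if fully_blocked else "REVIEW: public access not fully blocked"
    print(f"{name}: {status}")
```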
3. Threats to IoT and Critical Infrastructure
India's push for smart cities, digital utilities, and connected public services has rapidly expanded its IoT and operational technology (OT) footprint. However, many IoT devices still lack strong authentication, encryption, and reliable update mechanisms. By 2026, attackers are expected to exploit these vulnerabilities far more aggressively than they do today.
Cyberattacks on critical infrastructure such as energy, transportation, healthcare, and telecom systems have consequences that extend well beyond data loss; they directly affect the delivery of essential services, endanger public safety, and raise national security concerns. Securing critical infrastructure therefore requires dedicated security measures tailored to these operational environments, rather than conventional IT security alone.
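As a small illustration of the basic controls many devices omit, the sketch below publishes a telemetry message over TLS with per-device authentication rather than plaintext, unauthenticated MQTT. It assumes the `paho-mqtt` package; the broker hostname, topic, and credentials are hypothetical placeholders.

```python
# Minimal sketch: publish an IoT telemetry message over TLS with
# username/password authentication instead of unencrypted, unauthenticated
# MQTT. Assumes the paho-mqtt package; the hostname, topic, and credentials
# below are placeholders, not a real deployment.
import json
import paho.mqtt.publish as publish

payload = json.dumps({"sensor_id": "pump-07", "temperature_c": 41.2})

publish.single(
    topic="plant/telemetry/pump-07",
    payload=payload,
    hostname="broker.example.internal",   # hypothetical broker
    port=8883,                            # MQTT over TLS
    auth={"username": "pump-07", "password": "use-a-per-device-secret"},
    tls={"ca_certs": "/etc/ssl/certs/ca-certificates.crt"},
)
print("telemetry published over an authenticated TLS connection")
```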
4. Hidden File Vectors and Stealth Payload Delivery
SVG File Abuse in Stealth Attacks
Cybercriminals are continually searching for ways to bypass security filters, and hidden file vectors are emerging as a preferred tactic. One such method involves the abuse of SVG (Scalable Vector Graphics) files. Although commonly perceived as harmless image files, SVGs can contain embedded scripts capable of executing malicious actions.
By 2026, SVG-based attacks are expected to be used in phishing emails, cloud file sharing, and messaging platforms. Because these files often bypass traditional antivirus and email security systems, they provide an effective stealth delivery mechanism. Indian organisations will need to rethink assumptions about “safe” file formats and strengthen deep content inspection capabilities.
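A minimal content-inspection heuristic along these lines is sketched below. It assumes only the Python standard library, flags `<script>` elements, event-handler attributes, and `javascript:` URIs inside an SVG, and is intended as an illustration rather than a substitute for a hardened scanning pipeline.

```python
# Minimal sketch: flag SVG files that contain active content.
# Heuristics only -- a production gateway should use a hardened XML parser
# (e.g. defusedxml) and dedicated content-inspection tooling.
import re
import sys
import xml.etree.ElementTree as ET

SUSPICIOUS_ATTR = re.compile(r"^on\w+$", re.IGNORECASE)  # onload, onclick, ...

def scan_svg(path: str) -> list[str]:
    findings = []
    for elem in ET.parse(path).iter():
        tag = elem.tag.rsplit("}", 1)[-1].lower()        # strip XML namespace
        if tag in ("script", "foreignobject"):
            findings.append(f"embedded <{tag}> element")
        for name, value in elem.attrib.items():
            local = name.rsplit("}", 1)[-1]
            if SUSPICIOUS_ATTR.match(local):
                findings.append(f"event handler attribute '{local}'")
            if "javascript:" in value.lower():
                findings.append(f"javascript: URI in '{local}'")
    return findings

if __name__ == "__main__":
    for issue in scan_svg(sys.argv[1]) or ["no active content detected"]:
        print(issue)
```

In production, untrusted XML should be parsed with a hardened parser, and flagged files should be quarantined rather than merely logged.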
5. Quantum-Era Cyber Risks and “Harvest Now, Decrypt Later” Attacks
Although practical quantum computers are still emerging, quantum-era cyber risks are already a present-day concern. Adversaries are believed to be intercepting and storing encrypted data now with the intention of decrypting it in the future once quantum capabilities mature—a strategy known as “harvest now, decrypt later.” This poses serious long-term confidentiality risks.
Recognising this threat, the United States took early action during the Biden administration through National Security Memorandum 10, which directed federal agencies to prepare for the transition to quantum-resistant cryptography. For India, similar foresight is essential, as sensitive government communications, financial data, health records, and intellectual property could otherwise be exposed retrospectively. Preparing for quantum-safe cryptography will therefore become a strategic priority in the coming years.
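To make the idea of quantum-safe cryptography concrete, the sketch below performs a post-quantum key encapsulation using the open-source liboqs library via its Python bindings (`oqs`). It assumes liboqs-python is installed and that the chosen KEM mechanism is enabled in the local build; mechanism names vary between liboqs versions.

```python
# Minimal sketch of a quantum-resistant key exchange (KEM) with
# liboqs-python. Assumes the `oqs` bindings are installed; "ML-KEM-768"
# may be exposed as "Kyber768" in older liboqs builds.
import oqs

KEM_ALG = "ML-KEM-768"  # assumption: enabled in the local liboqs build

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()          # receiver publishes this

    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # sender derives a shared secret and a ciphertext for the receiver
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # receiver recovers the same shared secret from the ciphertext
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
print("shared secret established:", secret_receiver.hex()[:16], "...")
```

Migrating protocols and stored data to such algorithms is the practical answer to "harvest now, decrypt later": data protected with quantum-resistant schemes today remains confidential even if intercepted and stored.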
6. AI Trust Manipulation and Model Exploitation
Poisoning the Well – Direct Attacks on AI Models
As artificial intelligence systems are increasingly used for decision-making—ranging from fraud detection and credit scoring to surveillance and cybersecurity—attackers are shifting focus from systems to models themselves. “Poisoning the well” refers to attacks that manipulate training data, feedback mechanisms, or input environments to distort AI outputs.
In the context of India's rapidly growing digital ecosystem, compromised AI models can produce biased decisions, false security alerts, or the denial of legitimate services. What makes these attacks especially dangerous is that they can occur without triggering conventional security controls. Transparency, integrity, and continuous monitoring of AI systems will be key to maintaining stakeholder confidence in automated decision-making.
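A toy example of data poisoning, shown below, makes the risk tangible: flipping a fraction of training labels can quietly degrade a classifier without any "breach" in the conventional sense. It assumes scikit-learn and NumPy are installed; the dataset and model are arbitrary illustrations, not any real deployment.

```python
# Toy illustration of label-flipping data poisoning against a simple
# classifier. Assumes scikit-learn and numpy; synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy   :", clean_model.score(X_test, y_test))

# Poisoned training set: the attacker flips 30% of the training labels
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```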
Recommendations
Despite the increasing sophistication of malicious cyber actors, India is entering this phase with a growing level of preparedness and institutional capacity. The country has strengthened its cyber security posture through dedicated mechanisms and relevant agencies such as the Indian Cyber Crime Coordination Centre, which play a central role in coordination, threat response, and capacity building. At the same time, sustained collaboration among government bodies, non-governmental organisations, technology companies, and academic institutions has expanded cyber security awareness, skill development, and research. These collective efforts have improved detection capabilities, response readiness, and public resilience, placing India in a stronger position to manage emerging cyber threats and adapt to the evolving digital environment.
Conclusion
By 2026, complexity, intelligence, and strategic intent will increasingly define cyber threats to the digital ecosystem. Cybercriminals are expected to use advanced methods of attack, including artificial intelligence-assisted social engineering and the exploitation of cloud and supply chain risks. As these threats evolve, adversaries may also experiment with quantum computing techniques and the manipulation of AI models to find new ways of influencing and disrupting digital systems. In response, the focus of cybersecurity is shifting from merely preventing breaches to actively protecting and restoring digital trust. While technical controls remain essential, they must be complemented by strong cybersecurity governance, adherence to regulatory standards, and sustained user education. As India continues its digital transformation, this period presents a valuable opportunity to invest proactively in cybersecurity resilience, enabling the country to safeguard citizens, institutions, and national interests in an increasingly complex and dynamic digital future.
References
- https://www.seqrite.com/india-cyber-threat-report-2026/
- https://www.uscsinstitute.org/cybersecurity-insights/blog/ai-powered-phishing-detection-and-prevention-strategies-for-2026
- https://www.expresscomputer.in/guest-blogs/cloud-security-risks-that-should-guide-leadership-in-2026/130849/
- https://www.hakunamatatatech.com/our-resources/blog/top-iot-challenges
- https://csrc.nist.gov/csrc/media/Presentations/2024/u-s-government-s-transition-to-pqc/images-media/presman-govt-transition-pqc2024.pdf
- https://www.cyber.nj.gov/Home/Components/News/News/1721/214

A video circulating widely on social media shows a child throwing stones at a moving train, while a few other children can also be seen climbing onto the engine. The video is being shared with a communal narrative, with claims that the incident took place in India.
Cyber Peace Foundation's research found the viral claim to be misleading: the video is not from India but from Bangladesh, and is being falsely linked to India on social media.
Claim:
On January 15, 2026, a Facebook user shared the viral video claiming it depicted an incident from India. The post carried a provocative caption stating, “We are not afraid of Pakistan outside our borders. We are afraid of the thousands of mini-Pakistans within India.” The post has been widely circulated, amplifying communal sentiments.

Fact Check:
To verify the authenticity of the video, we conducted a reverse image search using Google Lens by extracting keyframes from the viral clip. During this process, we found the same video uploaded on a Bangladeshi Facebook account named AL Amin Babukhali on December 28, 2025. The caption of the original post mentions Kamalapur, which is a well-known railway station in Bangladesh. This strongly indicates that the incident did not occur in India.
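Confirming that two clips show the same footage can also be done programmatically with perceptual hashing, which detects near-duplicate images even after recompression or resizing. The sketch below assumes the Pillow and `imagehash` packages; the file names are hypothetical keyframes exported from the viral clip and from the earlier Bangladeshi upload.

```python
# Minimal sketch: check whether a keyframe from the viral clip is a
# near-duplicate of a frame from the earlier upload, using perceptual
# hashing. Assumes Pillow and imagehash; file names are placeholders.
from PIL import Image
import imagehash

viral_frame = imagehash.phash(Image.open("keyframe_viral.jpg"))
earlier_frame = imagehash.phash(Image.open("keyframe_earlier_upload.jpg"))

distance = viral_frame - earlier_frame   # Hamming distance between 64-bit hashes
print(f"perceptual hash distance: {distance}")
print("near-duplicate" if distance <= 10 else "frames differ significantly")
```

A small Hamming distance indicates the frames are near-duplicates; judging finer details such as the "BR" marking on the engine still requires manual inspection.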

Further analysis of the video shows that the train engine carries the marking “BR”, along with text written in the Bengali language. “BR” stands for Bangladesh Railways, confirming the origin of the train. To corroborate this further, we searched for images related to Bangladesh Railways using Google’s open tools. We found multiple images on Getty Images showing train engines with the same design and markings as seen in the viral video. The visual match clearly establishes that the train belongs to Bangladesh Railways.

Conclusion
Our research confirms that the viral video is from Bangladesh, not India. It is being shared on social media with a false and misleading claim to give it a communal angle and link it to India.