#FactCheck- AI-Generated Video Falsely Claims Iran Shot Down US F-35 Fighter Jet
Executive Summary:
Amid the ongoing conflict involving the US, Israel, and Iran, Tehran has claimed that it shot down a US F-35 fighter jet. In this context, a video showing a crashed American fighter aircraft on the ground is going viral on social media, with the claim that the footage shows Iran downing a US F-35 jet. However, research by CyberPeace found that the viral video is a deepfake and not real. The clip appears to have been created using Google AI tools.
Claim:
A social media user “Azania” shared the viral video on March 20, 2026, with the caption, “#Iran hit the 5th generation F-35 fighter of the #US Air Force… An American F-35 fighter made an emergency landing at an air base in the Middle East after coming under Iranian fire, sources told CNN.”

Fact Check:
We began our research with a news search and found multiple reports stating that a US F-35 fighter jet was damaged during a combat mission over Iran. According to reports, Iran’s Islamic Revolutionary Guard Corps (IRGC) claimed to have damaged a US F-35 jet and also released a video. As per a CNN report, US officials confirmed that an American F-35 was damaged during a mission over Iran, forcing it to make an emergency landing at a US airbase in the Middle East. The pilot was safe and in stable condition, and the incident is currently under investigation.
A spokesperson for US Central Command, Captain Tim Hawkins, also acknowledged that an F-35 made an emergency landing during the mission. However, the US has not officially confirmed that the damage was caused by an Iranian attack. Reports by Fox News and The Times of India also mention the emergency landing of the aircraft.

Upon closely examining the viral video, we noticed several inconsistencies indicating possible AI manipulation. We then analyzed the clip using Hive Moderation, which indicated nearly a 79 percent probability that the video is AI-generated. The analysis also suggests that it was likely created using Google’s AI video generation tools (Veo).

Conclusion:
The viral video claiming to show Iran shooting down a US F-35 fighter jet is AI-generated and not real. While Iran has claimed to have targeted a US F-35, and the US has confirmed an emergency landing during a mission, there is no official confirmation that the aircraft was shot down by Iran.
Related Blogs
Executive Summary:
A video showing a car catching fire is rapidly going viral on social media. In the clip, a family can be seen bursting firecrackers in front of a newly purchased car. Moments later, the vehicle appears to catch fire. The video is being shared with the claim that the family was celebrating the purchase of a new car with fireworks, which accidentally led to the vehicle going up in flames. Many users are circulating the clip as footage of a real incident. However, research by CyberPeace found that the video is not from a real-life event but was created using Artificial Intelligence (AI).
Claim
On February 25, 2026, an X user named “Mamta Rajgarh” shared the viral video with the caption: “This was supposed to be a grand celebration for buying a new car, but it turned into a ceremony of burning the car. What do you say? Comment below.”
- Post link: https://x.com/rajgarh_mamta1/status/2026696175311786408?s=20
- Archived link: https://perma.cc/22AA-KBS4

Fact Check:
To verify the claim, we conducted a keyword search on Google but found no credible news reports supporting the alleged incident. Upon closely examining the video, we noticed several technical inconsistencies. The car’s number plate is unclear, a common flaw often seen in AI-generated content. Additionally, the sequence of events appears unnatural — the firecrackers seem to extinguish first, and only after a delay does the car suddenly catch fire. These irregularities raised suspicion that the video may have been artificially generated. To further verify, we analyzed the clip using AI detection tools. Hive Moderation indicated a 98.7 percent likelihood that the video was generated using Artificial Intelligence.

Another AI detection tool, Undetectable.ai, suggested a 77 percent probability that the video was AI-generated.
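The verification workflow above combines scores from multiple detectors (Hive Moderation: 98.7 percent, Undetectable.ai: 77 percent) before reaching a verdict. A minimal sketch of that aggregation logic follows; the detector names, the response shape, and the 0.7 threshold are illustrative assumptions, since each service exposes its own API and score format:

```python
# Aggregate AI-generation probabilities from several detection tools,
# mirroring the fact-check workflow described above. The tool names and
# threshold here are hypothetical placeholders, not real API output.

def is_likely_ai_generated(scores: dict, threshold: float = 0.7) -> bool:
    """Flag a clip when any detector's AI-generation probability
    meets or exceeds the threshold."""
    return any(p >= threshold for p in scores.values())

# Scores reported in the fact check (expressed as probabilities).
detector_scores = {
    "hive_moderation": 0.987,
    "undetectable_ai": 0.77,
}
print(is_likely_ai_generated(detector_scores))  # True
```

In practice, fact-checkers treat such scores as one signal among several, alongside visual inconsistencies and keyword searches, rather than as conclusive proof on their own.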
Conclusion
Our research confirms that the viral video does not depict a real incident. It has been created using Artificial Intelligence and is being misleadingly shared as genuine footage.

Introduction
In the rapidly changing digital age, the spread of information presents both advantages and challenges. The terms "misinformation" and "disinformation" are commonly used in conversations about information inaccuracy. It is important to counter such prevalent threats, especially in light of how they affect countries like India, and it becomes essential to investigate the practical ramifications of misinformation, disinformation, and other widespread digital threats. Like many other nations, India had to deal with the fallout from fraudulent online activities in 2023, which highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard (launched in 2023). These chatbots represent a significant breakthrough in artificial intelligence (AI): driven by Large Language Models (LLMs), they produce replies by drawing on information gathered from huge datasets. Similarly, AI image generators that make use of diffusion models and existing datasets attracted a lot of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were used to create pornographic or entertainment content. Social turmoil, political instability, and financial harm were among the outcomes. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems with synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical issues, security risks, difficulties with identification, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to swaying elections and intensifying intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. Cybercriminals exploited weaknesses in the AePS to steal the hard-earned savings of many depositors, demonstrating the real impact of biometric fraud on people whose Aadhaar-linked data was manipulated to grant unauthorized access. The use of biometric data in financial systems not only endangers individual financial stability but also raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has issued a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advisory directs intermediaries to delete reported information within 36 hours, disable access in compliance with the IT Rules 2021, and act quickly against content that violates laws and regulations. Union Minister Rajeev Chandrasekhar underscored the government's dedication to ensuring the safety of digital citizens and stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has recently come up with an advisory to social media intermediaries to identify misinformation and deepfakes and to make sure of the compliance of Information Technology (IT) Rules 2021. It is the legal obligation of online platforms to prevent the spread of misinformation and exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 were amended in 2023, requiring the online gaming industry to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from a self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and transparent behaviour throughout the online gaming industry.
- With an emphasis on Personal Data Protection, the government enacted the Digital Personal Data Protection Act, 2023. It is a brand-new framework for digital personal data protection which aims to protect the individual's digital personal data.
- The "Cyber Swachhta Kendra" (Botnet Cleaning and Malware Analysis Centre) is a part of the Government of India's Digital India initiative under the Ministry of Electronics and Information Technology (MeitY) to create a secure cyberspace. It tackles cybersecurity through malware analysis and botnet detection, and works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Social media platforms such as YouTube and Meta have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and misleading content on their networks. YouTube prioritizes removing content that violates its policies, reducing recommendations of questionable information, endorsing reliable news sources, and supporting reputable creators. It relies on well-established facts and expert consensus to counter misrepresentation. Enforcement combines content reviewers with machine learning to quickly remove policy-violating material, and policies are designed in partnership with external experts and creators. To improve the overall quality of information users can access, the platform also lets users flag material, emphasizes media literacy, and prioritizes providing context.
Meta’s policies address different misinformation categories, aiming for a balance between expression, safety, and authenticity. Content directly contributing to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation, the efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
In 2024, the vision for "Tech for Good" has expanded to include programs that enable people to understand an ever more complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defenses and combat dishonest practices. This entails encouraging digital literacy and providing users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrimes. Furthermore, the focus is on promoting effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's positive aspects to build a digital environment that values security, honesty, and ethical behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment with the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445
Introduction
Autonomous transportation, smart cities, remote medical care, and immersive augmented reality are just a few of the revolutionary applications made possible by the global rollout of 5G technology. However, along with this revolution in connectivity has come a record-breaking rise in vulnerabilities and threats, driven by software-defined networks, growing attack surfaces, and increasingly complex network architectures. As work on next-generation 6G networks accelerates, with commercialisation expected to start around 2030, security issues are piling up, including those related to AI-driven networks, terahertz communications, and quantum computing attacks. For a nation like India, poised to become a global technological leader, securing next-generation networks is not merely a technical necessity but a strategic imperative. Initiatives such as the India-UK collaboration on telecom security show how international alliances have become essential to addressing these challenges.
Why Cybersecurity in 5G and 6G Networks is Crucial
With the launch of global 5G services and the rapid introduction of 6G technologies, the telecom sector is seeing a fundamental transformation. Besides expanding connectivity, future networks are also creating the building blocks for networked and highly intelligent environments. With its ultra-high speed of 10 Gbps, network slicing, and ultra-low latency, 5G provides new capabilities that are perfectly suited for mission-critical applications such as telemedicine, autonomous vehicles, and industrial IoT. Sixth-generation wireless technology is still in development, and it will be approximately one hundred times faster than fifth-generation. Here are a few drawbacks and challenges:
- Decentralised Infrastructure (edge computing nodes): Increased number of entry points for attack.
- Virtual Network Functions (VNFs): Greater vulnerability to configuration issues and software exploitation.
- Billions of IoT devices with different security states, thus forming networks that are more difficult to secure.
Although these challenges are unparalleled, the advancement in technology also creates new opportunities.
Understanding the Cyber Threat Landscape for 5G and 6G
The move to 5G and the upgrade to 6G open great opportunities but also create new cybersecurity risks. Open RAN adoption offers flexibility and vendor choice but exposes the supply chain to untested third-party components and attacks. Vulnerabilities in the Service-Based Architecture (SBA) can be exploited to disrupt vital network services, resulting in outages or data breaches. Similarly, the widespread adoption of edge computing to reduce latency creates multiple entry points for attackers to target. Compounding the problem is the explosion of IoT device connections through 5G, which, if breached, can fuel massive botnets capable of conducting large-scale distributed denial-of-service (DDoS) attacks.
Challenges in 6G
- AI-Powered Cyberattacks: AI-native 6G networks are susceptible to adversarial machine learning and data/model poisoning attacks against models used both for security and for traffic optimisation.
- Quantum Threats: Post-quantum cryptography may be required if quantum computing renders current encryption algorithms outdated.
- Privacy Concerns with Digital Twins: 6G may result in creating enormous privacy and data protection issues in addition to offering real-time virtual replicas of the physical world.
- Cross-Border Data Flow Risks: Secure interoperability frameworks and standardised data sovereignty are essential for the worldwide rollout of 6G.
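The quantum-threat point above motivates the hybrid key-establishment approach widely recommended for the post-quantum transition: combine a classical shared secret with a post-quantum one, so the session key stays safe as long as either scheme remains unbroken. A minimal sketch follows; the secrets are hashed placeholders (real deployments would derive them from, e.g., X25519 and an ML-KEM key exchange), and the salt/info labels are illustrative:

```python
import hashlib
import hmac

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) extract-and-expand using SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenating both shared secrets means an attacker must break
    # BOTH the classical and the post-quantum exchange to recover the key.
    return hkdf(salt=b"hybrid-kex-v1",
                ikm=classical_secret + pq_secret,
                info=b"session-key")

# Placeholder secrets for illustration only; in practice these come from
# an ECDH exchange and a post-quantum KEM such as ML-KEM (Kyber).
classical = hashlib.sha256(b"ecdh-shared-secret").digest()
pq = hashlib.sha256(b"ml-kem-shared-secret").digest()
key = hybrid_session_key(classical, pq)
print(len(key))  # 32
```

This "defense in depth" construction also mitigates harvest-now-decrypt-later attacks, where traffic recorded today is decrypted once large quantum computers arrive.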
A Critical Step Toward Secure Telecom: The India-UK Partnership
India's recent partnership with the UK reflects its active role in shaping the future of telecom security. Major points of the UK-India Telecom Roundtable are:
- MoU between SONIC Labs and C-DOT: Dedicated to Open RAN and AI integration security in 4G/5G deployments. This will offer supply chain diversity without sacrificing resilience.
- Research Partnerships for 6G: Partnerships with UK institutions like CHEDDAR (Cloud & Distributed Computing Hub) and the University of Glasgow 6G Research Centre are focused on developing AI-driven network security solutions, green 6G, and quantum-resistant design.
- Telecom Cybersecurity Centres of Excellence: Constructing two-way CoEs for telecom cybersecurity, ethical AI, and digital twin security models.
- Standardisation Efforts: Joint contributions to the ITU on the creation of IMT-2030 standards, ensuring that cybersecurity-by-design principles are integrated into worldwide 6G specifications.
- Future Initiatives:
- Application of privacy-enhancing technologies (PETs) for cross-sectoral data usage.
- Secure quantum communications to be used for satellite and submarine cable connections.
- Encouragement of native telecommunication stacks for strategic independence.
Global Policy and Regulatory Aspects
- India's Bharat 6G Vision: Through the Bharat 6G Alliance, India aims to lead the global standardisation process with a vision of inclusive, secure, and sustainable connectivity.
- International Harmonisation:
- 3GPP and ITU's joint effort towards standardisation of 6G security.
- Cross-border privacy and cybersecurity compliance system designs to enable secure flows of data.
- Cyber Diplomacy for Telecom Security: Cross-border sharing of information architectures, threat intelligence sharing, and coordinated incident response schemes are essential to 6G security resilience globally.
Building a Secure and Resilient Future for 5G and 6G
Establishing a safe and future-proof 5G and 6G environment must be an end-to-end effort involving governments, industry, and technology vendors. Security should be integrated into the underlying architecture of networks, not offered as an optional afterthought. Active engagement in international bodies to establish uniform security and privacy standards across geographies is also required. Public-private partnerships, including partnerships with academia, will drive innovation and the creation of advanced protection mechanisms. Simultaneously, building a competent talent pool skilled in AI-based threat analysis, quantum-resistant cryptography, and next-generation cryptographic methods will be required to counter the advanced threats facing new telecom technologies.
Conclusion
With 6G on the way and 5G technologies already changing global connectivity, cybersecurity must remain a key focus. The partnership between India and the UK shows that the safe rise of tomorrow's networks depends on global collaboration, AI-driven security measures, and quantum preparedness. By combining security by design, supporting international standards, and encouraging innovation through cooperation, the world can unleash the transformative potential of 5G and 6G. The result will be an online future that is not only fast and egalitarian but also resilient and trustworthy.
References:
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2105225
- https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx
- https://dot.gov.in/sites/default/files/Bharat%206G%20Vision%20Statement%20-%20full.pdf
- https://www.gsma.com/solutions-and-impact/technologies/security/wp-content/uploads/2024/07/FS.40-v3.0-002-19-July.pdf