#FactCheck - Viral Video of Burning Aircraft Falsely Linked to UAE, Found to Be AI-Generated
Executive Summary:
A video is being shared on social media showing an aircraft engulfed in massive flames on an airport runway. The video is being linked to the UAE, with the claim that a UAE airport was completely destroyed in recent drone and missile attacks by Iran. Research by CyberPeace found the viral claim to be false: the viral video is not real, but AI-generated.
Claim:
On social media platform Facebook, a user shared the viral video on March 3, 2026, and wrote, “Amid the Iran-US-Israel conflict in the Middle East, operations at several major airports, including Dubai International Airport, have been temporarily suspended, causing thousands of flight cancellations and delays. Due to multiple missile and drone attacks from Iran, the United Arab Emirates (UAE) had shut its airspace, and limited structural damage at Dubai Airport was also confirmed, with reports of four staff members being injured. Later, considering the security situation, a limited number of flights were resumed, but full operations are still delayed due to ongoing safety concerns. This tension has significantly impacted regional aviation, travel, and global flight routes.”

Fact Check:
To verify the viral video, we searched relevant keywords on Google but did not find any credible media report confirming the claim. We did, however, find a video report on the YouTube channel of CNN-News18 mentioning explosions near Dubai Airport after a suspected Iranian drone strike. The visuals shown in that report are completely different from the viral video.

Upon closely examining the viral video, we noticed several inconsistencies, raising suspicion that it might be AI-generated. We then analyzed the video using the AI detection tool Sightengine. The results indicated that the video is 71 percent likely to be AI-generated.
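As a rough illustration of this verification step: Sightengine exposes a REST-style API, and a check like the one above boils down to submitting the media and reading back a score. The sketch below only assembles the request and interprets a hypothetical response; the endpoint, model name (`genai`), parameter names, and response shape are assumptions based on the vendor's documented style, not details from our check, and a real API key would be needed to run it.

```python
# Illustrative sketch of querying an AI-detection service such as Sightengine.
# Endpoint, model name, and response fields are assumptions; consult the
# vendor's documentation before relying on any of them.
import json
from urllib import parse

API_URL = "https://api.sightengine.com/1.0/check.json"  # assumed endpoint


def build_request(media_url, api_user, api_secret, models=("genai",)):
    """Assemble the query URL for a detection request."""
    params = {
        "url": media_url,
        "models": ",".join(models),  # 'genai' = AI-generation check (assumed name)
        "api_user": api_user,
        "api_secret": api_secret,
    }
    return API_URL + "?" + parse.urlencode(params)


def is_likely_ai(response_json, threshold=0.5):
    """Interpret a hypothetical response: a score above threshold flags the media."""
    score = json.loads(response_json).get("type", {}).get("ai_generated", 0.0)
    return score >= threshold
```

Under this reading, a returned score of 0.71 would correspond to the "71 percent likely to be AI-generated" result reported above.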

Conclusion:
Our research found that the viral video is not real, but AI-generated.

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, especially in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, that is, anything a visitor posts to a website or social media page.
The legal and ethical landscape surrounding misinformation depends on striking a fine balance between freedom of speech and expression and the protection of public interests, such as truthfulness and social stability. This blog examines the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The intent to spread false information is closely interwoven with the analysis of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news have benefited from the reach of social networks and from technology that enables faster distribution and makes it harder to distinguish fake news from legitimate reporting.
Social media platforms face multiple unique challenges in regulating misinformation while balancing freedom of speech and expression with user engagement. The scale at which content is created and published, divergent regulatory standards across jurisdictions, and the difficulty of moderating misinformation without infringing on freedom of expression all complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, which influence public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation, as the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. The principle is embodied in Section 230 of the US Communications Decency Act (CDA) and Section 79 of India's Information Technology Act, both of which play a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times, exemplified by the European Union's Digital Services Act of 2022. The Act requires companies with at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. Companies that fail to comply risk penalties of up to 6% of global annual revenue, or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer quantity of data involved and the speed at which it is generated. It also exposes platforms to legal liabilities, operational costs and reputational risks.
- Platforms can face backlash for both over-moderation and under-moderation. Over-moderation can be perceived as censorship and is often burdensome, while under-moderation can be seen as insufficient governance that fails to protect the rights of users.
- Another challenge is technical: AI and algorithmic moderation are limited in their ability to detect nuanced misinformation. This points to the need for human oversight, particularly to sift through misinformation produced as AI-generated content.
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

Introduction
So it's that time of year when you feel bright and excited to start the year with new resolutions. Your goals could be anything from going to the gym to learning new skills and being more productive this year, but with cybercrime on the rise, you must also be smart and take your New Year cyber resolutions seriously. Yes, you heard it right: it's a new year, a new you, but the same hackers with more advanced threats. It's time to make a cyber resolution this year to be secure, smart, and follow the best cyber safety tips for 2025 and beyond.
Best Cyber Security Tips For You
So while making your cyber resolutions for 2025, remember that hackers have resolutions too, so you have to make yours better! CyberPeace has curated a list of great tips and cyber hygiene practices you must follow in 2025:
- Be Aware Of Your Digital Rights: Netizens should be aware of their rights in the digital space. It's important to know where to report issues, how to raise concerns with platforms, and what rights are available to you under applicable IT and Data Protection laws. And as we often say, sharing is caring, so make sure to discuss and share your knowledge of digital rights with your family, peers, and circle. Not only will this help raise awareness, but you’ll also learn from their experiences, collectively empowering yourselves. After all, a well-informed online community is a happy one.
- Awareness Is Your First Line Of Defence: Awareness serves as the first line of defence, especially in light of the lessons learned from 2024, where new forms of cybercrimes have emerged with serious consequences. Scams like digital arrests, romance frauds, lottery scams, and investment scams have become more prevalent. As we move into 2025, remember that sophisticated cyber scams require equally advanced strategies to stay protected. As cybercrimes evolve and become more complex, it's crucial to stay updated with specific strategies and hygiene tips to defend yourself. Build your first line of defence by being aware of these growing scams, and say goodbye to the manipulative tactics used by cyber crooks.
- Customise Social Media Profile And Privacy Settings: With the rising misuse of advanced technologies such as deepfakes, it’s crucial to share access to your profile only with people you trust and know. Customise your social media profile settings based on your convenience, such as who can add you, who can see your uploaded pictures and stories, and who can comment on your posts. Tailor these settings to suit your needs and preferences, ensuring a safer digital environment for yourself.
- Be Cautious: Choose wisely; just because an online deal seems exciting doesn’t mean it’s legitimate. A single click could have devastating consequences. Not every link leads to a secure website; it could be a malware or phishing attempt. Be cautious and follow basic cyber hygiene, such as only visiting websites with a padlock symbol, a secure connection, and 'HTTPS' in the URL.
- Don’t Let Fake News Fake You Out: Online misinformation and disinformation have sparked serious concern due to their widespread proliferation. That’s why it’s crucial to 'Spot The Lies Before They Spot You.' Exercise due care and caution when consuming, sharing, or forwarding any online information. Always verify it from trusted sources, recognize the red flags of misleading claims, and contribute to creating a truthful online information landscape.
- Turn the Tables on Cybercriminals: It is crucial to know the proper reporting channels for cybercrimes, including specific reporting methods based on the type of issue. For example, ‘unsolicited commercial communications’ can be reported on the Chakshu portal by the government. Unauthorized electronic transactions can be reported to the RBI toll-free number at 14440, while women can report incidents to the National Commission for Women. If you encounter issues on a platform, you can reach out to the platform's grievance officer. All types of cybercrimes can be reported through the National Cyber Crime Reporting Portal (cybercrime.gov.in) and the helpline at 1930. It’s essential to be aware of the right authorities and reporting mechanisms, so if something goes wrong in your digital experience, you can take action, turn the tables on cybercrooks, and stay informed about official grievances and reporting channels.
- Log Out, Chill Out: The increased use of technology can have far-reaching consequences that are often overlooked, such as procrastination, stress, anxiety, and eye strain (also known as digital eye strain or computer vision syndrome). Sometimes, it’s essential to switch off the digital curtains. This is where a ‘Digital Detox’ comes in, offering a chance to recharge and reset. We’re all aware of how our devices and phones influence our daily lives, shaping our behaviours, decisions, and lifestyles from morning until night, even impacting our sleep. Taking time to unplug can provide a much-needed psychological and physical boost. Practicing a digital detox at regular suitable intervals, such as twice a month, can help restore balance, reduce stress, and improve overall well-being.
Final Words & the Idea of ‘Tech for Good’
Remember that we are in a technological era, and these technologies are created for our ease and convenience. Bad actors pose certain challenges, but countering them starts with you. Remember that technology, while having its risks, also brings tremendous benefits to society. We encourage you to take a stand for the responsible and ethical use of technology. The vision of ‘Tech for Good’ will have to be expanded to the larger picture. Do not engage in behaviour online that you would not ordinarily engage in offline; the online environment is just as real and has far-reaching effects. Use technology for good, and follow and encourage ethical and responsible behaviour in online communities. The emphasis should be on making technology a safer environment for everyone and combating dishonest practices.
Effective strategies for preventing cybercrime and dishonest practices require cooperation and effort from citizens, government agencies, and technology businesses. We intend to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while promoting innovation and connectedness. In 2025, together we can build a cyber-safe, resilient society.

Executive Summary
A video featuring Uttar Pradesh Chief Minister Yogi Adityanath is being widely shared on social media. In the video, Adityanath can be heard saying, “Let me become the Prime Minister, and Pakistan-occupied Kashmir will also become a part of India.” The video also carries an on-screen text that reads “Next PM 2029.” By sharing this clip, social media users are claiming that Yogi Adityanath is set to become India’s Prime Minister in 2029.
However, CyberPeace research found the viral claim to be misleading. Our research revealed that the video circulating online has been edited and is being shared out of context. The original video dates back to May 2024. In the original footage, Yogi Adityanath is not speaking about himself. Instead, he is referring to Prime Minister Narendra Modi.
In the original statement, Adityanath says:
“Let Modi ji become Prime Minister for the third time, and within the next six months, Pakistan-occupied Kashmir will also become a part of India.”
It is evident that the video has been trimmed and misleading text has been added to falsely portray the statement as a declaration about Yogi Adityanath becoming Prime Minister in 2029.
Claim
A YouTube user shared the viral video on January 29, 2026, claiming that Yogi Adityanath said, “Let me become Prime Minister, and Pakistan-occupied Kashmir will be part of India.” The video carries the caption “Next PM 2029,” suggesting that Adityanath is set to become the Prime Minister in 2029.
Link to the post and archive

Fact Check:
To verify the viral claim, we first conducted a keyword search on Google. During this process, we found a report published by Aaj Tak on May 18, 2024. According to the report, Yogi Adityanath stated that if Narendra Modi becomes Prime Minister for the third time, Pakistan-occupied Kashmir would become part of India within six months.
Report link:

Next, we extracted keyframes from the viral video and ran them through Google Lens. This led us to the official YouTube channel of Yogi Adityanath, where the same video was uploaded on May 18, 2024.
Original video link:

In the original video, Yogi Adityanath clearly makes the statement in reference to Prime Minister Narendra Modi, not himself. Finally, we compared the viral clip with the original footage. The visuals in both videos are identical; however, the viral version has been edited and overlaid with misleading text to change the meaning of the statement.
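The keyframe-extraction step described above can be sketched in a few lines. This is a minimal illustration, not the exact tooling used in the investigation: the file names and sampling rate are hypothetical, and the actual frame grab is delegated to ffmpeg's standard `-ss`/`-frames:v` options.

```python
# Minimal sketch of sampling keyframes from a video before running them
# through a reverse image search (e.g. Google Lens). File names and the
# number of frames are illustrative only.

def keyframe_timestamps(duration_s, n_frames):
    """Pick n evenly spaced timestamps (in seconds) across a video."""
    if n_frames < 1 or duration_s <= 0:
        return []
    step = duration_s / (n_frames + 1)
    return [round(step * (i + 1), 2) for i in range(n_frames)]


def ffmpeg_grab_command(video_path, t, out_path):
    """Build an ffmpeg command that saves the single frame nearest timestamp t."""
    return ["ffmpeg", "-ss", str(t), "-i", video_path, "-frames:v", "1", out_path]


if __name__ == "__main__":
    # For a 30-second clip, grab five evenly spaced frames for manual review.
    for i, t in enumerate(keyframe_timestamps(30.0, 5)):
        print(" ".join(ffmpeg_grab_command("viral_clip.mp4", t, f"frame_{i}.jpg")))
```

Each saved frame can then be uploaded to a reverse image search to trace the footage back to its original source, as was done here.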
Conclusion
Our research confirms that the viral video is edited and misleading. The original video is from May 2024, in which Yogi Adityanath was speaking about Prime Minister Narendra Modi, not about himself becoming Prime Minister in 2029. The video has been falsely altered and shared with a deceptive claim on social media.