#FactCheck - AI-Generated Flyover Collapse Video Shared With Misleading Claims
Executive Summary
A video showing a flyover collapse is going viral on social media. The clip shows a flyover and a road passing beneath it, with vehicles moving normally. Suddenly, a portion of the flyover appears to collapse and fall onto the road below, with some vehicles seemingly coming under its impact. The video has been widely shared by users online. However, research by CyberPeace found the viral claim to be false. The probe revealed that the video is not real but has been created using artificial intelligence.
Claim:
On X (formerly Twitter), a user shared the viral video on February 13, 2026, claiming it showed the reality of India’s infrastructure development and criticizing ongoing projects. The post quickly gained traction, with several users sharing it as a real incident. Similarly, another user shared the same video on Facebook on February 13, 2026, making a similar claim.

Fact Check:
To verify the claim, key frames from the viral video were extracted and searched using Google Lens. During the search, the video was traced to an account named “sphereofai” on Instagram, where it had been posted on February 9. The post included hashtags such as “AI Creator” and “AI Generated,” clearly indicating that the video was created using AI. Further examination of the account showed that the user identifies themselves as an AI content creator.


To confirm the findings, the viral video was also analysed using Hive Moderation. The tool’s analysis suggested a 99 percent probability that the video was AI-generated.

Conclusion:
The research established that the viral flyover collapse video is not authentic. It is an AI-generated clip being circulated online with misleading claims.

Executive Summary:
Amid the ongoing conflict in West Asia involving the United States, Israel and Iran, a video is being widely circulated on social media with the claim that Iran attacked the headquarters of tech giants Apple and Microsoft in Israel. The clip shows a building engulfed in flames, with firefighters attempting to douse the fire. However, research by CyberPeace found that the viral video is AI-generated and is being falsely linked to the ongoing conflict to spread misinformation.
Claim:
An Instagram user ‘bharat_updatenews’ shared the video on March 19, 2026, claiming that Iran had launched an attack on major tech company headquarters, including Apple and Microsoft, in Israel. The post suggested that the incident had raised serious security concerns and was being widely reported by international media.
Link: https://www.instagram.com/bharat_updatenews/reel/DWEUhLEAKaw

Fact Check:
To verify the claim, we extracted keyframes from the viral video and conducted a reverse search using Google Lens. During this process, we found the same video on a TikTok account named ‘dailyupdate122’, where it had been uploaded on March 15, 2026.

The video on this account was clearly labelled as “AI-generated media.” The account also featured several other AI-generated videos, raising doubts about the authenticity of the viral clip. Following this, we analysed the video using the AI detection tool Hive Moderation. The results indicated that the video is nearly 100 percent AI-generated. The tool further suggested with over 98 percent probability that the clip may have been created using OpenAI’s Sora or a similar AI video generation model.

Conclusion:
The viral claim that Iran attacked Apple and Microsoft headquarters in Israel is false. The video circulating online is AI-generated and has no connection to the ongoing conflict in West Asia.
Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, with or without Artificial Intelligence (AI). AI is one of the great ironies of the modern era: a paradox wrapped in code, where the cure and the curse are written in the same language. The same technology that protects the world from cyber threats can just as easily be used to create those threats. Any modern deployment of AI must therefore be designed to counter the risks posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programmes rely on static content that quickly becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, rely on behavioural analysis rather than signature matching alone. AI models are themselves vulnerable, however: malicious actors can feed them adversarial inputs or poisoned data to trick them into misclassification. According to research from Cisco, data poisoning is a major threat to AI-based defences.
As threats outpace the current understanding of cybersecurity professionals, there is a growing need to upskill them in advanced AI technologies so they can fortify the security of existing systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A Business Insider report puts it well: ‘It takes a good-guy AI to fight a bad-guy AI.’ Generative AI is still a young technology; as a result, it introduces fresh security issues and faces risks of its own, such as data exfiltration and prompt injection.
Another method that can prove effective is Natural Language Processing (NLP), which helps machines process this unstructured data, enabling automated spam detection, sentiment analysis, and threat context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
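As a toy illustration of the NLP-based flagging described above, the sketch below trains a classic TF-IDF plus Naive Bayes baseline to separate suspicious messages from benign ones. The training messages are invented; a real deployment would train on far larger labelled corpora:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = suspicious, 0 = benign
messages = [
    "urgent verify your account password now",
    "click this link to claim your prize",
    "reset your credentials immediately or lose access",
    "meeting moved to 3pm see agenda attached",
    "quarterly report draft for your review",
    "lunch tomorrow to discuss the roadmap",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a Naive Bayes classifier: a standard spam-detection baseline
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(messages, labels)

prediction = clf.predict(["verify your password at this link now"])
```

The same pattern scales to flagging suspicious email traffic or monitoring internal communications, with the classifier retrained as attacker phrasing evolves.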
These AI skills are not mere niceties; they have become essential in the current landscape. India is not far behind in this mission: alongside its western counterparts, it is committed to harnessing emerging technologies in pursuit of its larger development goals. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
As per a March 2025 news report, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) ensure greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. According to the SIP’s parliamentary brochure, more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that support training professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report shows how companies operating in India, such as Ernst & Young (EY), recognise both the potential of the Indian workforce and its gaps in emerging technologies, and are leading the way through internal upskilling. In response to the growing need for AI expertise, EY has established an AI Academy, a programme designed to help businesses equip their employees with essential AI capabilities. Drawing on more than 200 real-world AI use cases, the programme offers structured, interactive learning that spans everything from foundational concepts to advanced generative AI.
To understand the need for these initiatives, it is worth referring to a report backed by Google.org and the Asian Development Bank, which suggests India is at a turning point in the global adoption of AI. The research, “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” finds that India urgently needs accessible and effective AI upskilling despite having the largest workforce in the world. According to the report, AI could boost the Asia-Pacific region’s GDP by up to USD 3 trillion by 2030, and India, with its young and fast-growing population, is key to realising that potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes. Imagine classrooms where students learn not just to code but to question algorithms, workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and is resolute in this collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals. By working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both ethically aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/

Executive Summary
Claims are circulating that Iran’s Supreme Leader Ayatollah Ali Khamenei was killed in a major attack allegedly carried out by Israel and the United States. Amid these claims, a video is being widely shared on social media in which Khamenei can be heard saying, “Beware of fake news, I am alive.” Research conducted by CyberPeace has found the viral claim to be false. Our research revealed that the video being shared is old and that Khamenei’s voice has been altered using artificial intelligence to support a misleading narrative.
Claim
On March 1, 2026, an Instagram user shared the viral video in which Ayatollah Ali Khamenei is heard saying, “Beware of fake news, I am alive.” The link to the post and its archived version are provided above along with a screenshot.

Fact Check:
To verify the authenticity of the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the same video on the YouTube channel of Sky News Australia, published on June 19, 2025. In the approximately 43-minute-long video, the portion used in the viral clip appears around the 10-minute mark.

According to Sky News Australia’s report, Iran’s Supreme Leader Ayatollah Ali Khamenei had rejected US President Donald Trump’s demand for unconditional surrender. The Iranian leadership also warned that any American military intervention would be met with “irreparable damage.” On listening closely to the viral clip, we noticed that Khamenei’s voice sounded robotic, raising suspicion that it may have been AI-generated. We then analysed the video using the AI detection tool AURGIN AI. The results indicated that the viral clip had been generated using artificial intelligence.

Conclusion
Our research establishes that the viral video is old and has been digitally manipulated. Ayatollah Ali Khamenei’s voice has been altered using artificial intelligence, and the clip is being shared with a misleading claim.