# FactCheck - Viral Video Claiming Attack on Burj Khalifa is AI-Generated
Executive Summary
Amid rising tensions between the United States, Israel, and Iran, a video is circulating on social media claiming that the Burj Khalifa in Dubai has been attacked. The clip is being widely shared, with users alleging that a strike took place near the iconic skyscraper. However, research by CyberPeace found the claim to be misleading: the viral video is not real and was generated using artificial intelligence.
Claim
On March 1, 2026, a Facebook user shared the viral clip claiming that an attack had taken place in Dubai. The post was shared with the caption: “Dubai has been attacked.” The link to the post and its archive is provided below along with a screenshot.

Fact Check
To verify the claim, we first searched Google using relevant keywords. During this process, we found a report published on March 1, 2026, by the Indian news outlet Dainik Bhaskar.

According to the report, tensions in the Middle East escalated amid the Israel–Iran conflict, impacting several countries in the region. A drone incident reportedly occurred near Burj Khalifa, prompting authorities to evacuate the building as a precautionary measure and temporarily switch off its lights. However, the visuals seen in the viral video do not match the details or imagery described in the report. Upon closely examining the viral clip, we noticed several technical inconsistencies and unusual visual elements, raising suspicions that the video might have been generated using artificial intelligence. To verify this, we analyzed the video using the AI detection tool Sightengine. The results indicated a 99% probability that the video was AI-generated.
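Detection tools of this kind typically return a probability score that the analyst must then interpret against a threshold. The sketch below is purely illustrative: the endpoint, parameter names, and response handling are invented for this example and do not reflect Sightengine's actual API.

```python
import json
from urllib import parse, request

# Hypothetical AI-detection API call; the endpoint and field names are
# invented for illustration and do not match any specific vendor.
def check_media(media_url: str, api_key: str,
                endpoint: str = "https://api.example.com/v1/detect") -> dict:
    query = parse.urlencode({"url": media_url, "key": api_key})
    with request.urlopen(f"{endpoint}?{query}") as resp:
        return json.load(resp)

def verdict(ai_probability: float, threshold: float = 0.9) -> str:
    """Turn a detector's probability score into a human-readable verdict."""
    return "likely AI-generated" if ai_probability >= threshold else "inconclusive"

# The viral clip scored 0.99, well above a conservative 0.9 threshold.
print(verdict(0.99))  # → likely AI-generated
```

A score near 1.0, as in this case, strongly supports the AI-generated conclusion, though fact-checkers generally corroborate it with manual inspection, as done here.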

Conclusion
Our research found that the viral video circulating on social media is not authentic. The footage was created using artificial intelligence and does not depict a real attack on Burj Khalifa.
Related Blogs

Introduction
Over the past few years, the virtual space has become an irreplaceable livelihood platform for content creators and influencers, particularly on major social media platforms like YouTube and Instagram. Yet this growth in digital entrepreneurship has been accompanied by a worrying trend: a steep surge in account takeover (ATO) attacks against these creators. In recent years, cybercriminals have stepped up both the volume and sophistication of such attacks, breaking into accounts, jeopardising follower bases, and causing economic and reputational damage. They don’t take over accounts merely to cause disruption; they use the hijacked accounts to run scams like fake livestreams and cryptocurrency fraud, spreading them while posing as the original account owner. This type of cybercrime is no longer a nuisance; it now poses a serious threat to the creator economy, digital trust, and the wider social media ecosystem.
Why Are Content Creators Prime Targets?
Content creators hold a special place on the web. They are prominent users who depend on visibility, public confidence, and ongoing interaction with their followers. Their social media footprint tends to extend across several interrelated platforms, e.g., YouTube, Instagram, and X (formerly Twitter), with many of these accounts sharing similar login credentials or being managed from the same email address. This interconnected presence benefits their workflow but makes them appealing targets for hackers: one compromised entry point can expose a whole chain of accounts. Once attackers control an account, they can wield its influence and reach to share scams, lead followers to phishing sites, or spread malware, all from behind a trusted name.
Popular Tactics Used by Attackers
- Malicious Livestream Takeovers and Rebranding - Cybercriminals hijack high-subscriber channels and rebrand them to mimic official channels. Original videos are hidden or deleted and replaced with scam streams that use deepfake personas to promote crypto schemes.
- Fake Sponsorship Offers - Creators receive emails from supposed sponsors that contain malware-infected attachments or malicious download links, leading to credential theft.
- Malvertising Campaigns - These involve fake ads on social platforms promoting exclusive software like AI tools or unreleased games. Victims download malware that searches for stored login credentials.
- Phishing and Social Engineering on Instagram - Hackers impersonate Meta support teams via DMs and emails. They direct creators to login pages that are cloned versions of Instagram's site. Others pose as fans to request phone numbers and trick victims into revealing password reset codes.
- Timely Exploits and Event Hijacking - During major public or official events, attackers often escalate their activity. Hijacked accounts are used to promote fake giveaways or exclusive live streams, luring users to malicious websites designed to steal personal information or financial data.
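Several of these tactics, cloned login pages in particular, rely on lookalike domains that differ from the legitimate one by a character or two. Below is a minimal sketch of flagging such typosquats with Levenshtein edit distance; the domains shown are illustrative examples, not real phishing sites, and production systems would combine this with allowlists and homoglyph checks.

```python
# Minimal typosquat check: flag domains within a small edit distance of
# a legitimate login domain. Example domains are illustrative only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
    # after each row, carry it forward
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain: str, official: str, threshold: int = 2) -> bool:
    """Flag domains suspiciously close to, but not exactly, the official one."""
    d = edit_distance(domain.lower(), official.lower())
    return 0 < d <= threshold

print(looks_like_typosquat("lnstagram.com", "instagram.com"))  # → True
print(looks_like_typosquat("instagram.com", "instagram.com"))  # → False
```

A creator (or their team) could run incoming "support" links through a check like this before ever entering credentials.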
Real-World Impact and Case Examples
The impact of account takeover attacks on content creators is far-reaching and profound. A 2024 Bitdefender report observed over 9,000 malicious livestreams on YouTube in a single year, many of them broadcast from hijacked creator accounts repurposed to advertise scams and fake content. Perhaps the most high-profile incident involved a channel with more than 28 million subscribers and 12.4 billion total views, which was taken over entirely and used to livestream a crypto fraud scheme. Bitdefender’s research also indicated that cybercriminals used over 350 scam domains, linked directly from hijacked social media accounts, to lure followers into phishing scams and bogus investment opportunities. Much of this content included AI-generated deepfakes impersonating recognisable personalities such as Elon Musk, lending an illusion of authenticity to fake endorsements (CCN, 2024). Attackers have also exploited popular esports events, such as Counter-Strike 2 (CS2) tournaments, by hijacking YouTube gaming channels to livestream fake giveaways or refer viewers to imitation betting sites.
Protective Measures for Creators
- Enable Multi-Factor Authentication (MFA)
Adds an essential layer of defence. Even if a password is compromised, attackers can't log in without the second factor. Prefer app-based or hardware token authentication.
- Scrutinize Sponsorships
Verify sender domains and avoid opening suspicious attachments. Use sandbox environments to test files. In case of doubt, verify collaboration opportunities through official company sources or verified contacts.
- Monitor Account Activity
Keep tabs on login history, new uploads, and connected apps. Configure alerts for suspicious login attempts or spikes in activity to detect breaches early.
- Educate Your Team
If your account is managed by editors or third parties, train them on common phishing and malware tactics. Employ regular refresher sessions and send mock phishing tests to reinforce awareness.
- Use Purpose-Built Security Tools
Specialised security solutions offer features like account monitoring, scam detection, guided recovery, and protection for team members. These tools can also help identify suspicious activity early and support a quick response to potential threats.
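The app-based MFA recommended above is typically TOTP (RFC 6238): the authenticator app and the server derive a short-lived code from a shared secret and the current time. As a rough illustration of what happens under the hood, here is a standard-library sketch; the secret used is the RFC 6238 test value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(timestamp if timestamp is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, timestamp=59))  # → 287082 (RFC 6238 test vector, truncated to 6 digits)
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough to log in, which is exactly why app-based or hardware-token MFA is preferred over SMS.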
Conclusion
Account takeover attacks are no longer random events; they are systemic risks that compromise the financial well-being and personal safety of creators all over the world. As cybercriminals grow more sophisticated and their scams more convincing, the only solution is a security-first approach: a mix of technical controls, platform-level collaboration, education, and investment in creator-centric cybersecurity tools. In today’s fast-paced digital landscape, creators must think not only about content but also about defending their digital identity. As digital platforms continue to grow, so do the threats targeting creators; with the right awareness, tools, and safeguards in place, however, a secure and thriving digital environment for creators is entirely achievable.
References
- https://www.bitdefender.com/en-au/blog/hotforsecurity/account-takeover-attacks-on-social-media-a-rising-threat-for-content-creators-and-influencers
- https://www.arkoselabs.com/account-takeover/social-media-account-takeover/
- https://www.imperva.com/learn/application-security/account-takeover-ato/
- https://www.security.org/digital-safety/account-takeover-annual-report/
- https://www.niceactimize.com/glossary/account-takeover/

Introduction
The information of hundreds of thousands of Indians who received the COVID vaccine was leaked in a significant data breach and posted on a Telegram channel. Numerous reports claim that sensitive information, including a person’s phone number, gender, ID card details, and date of birth, was leaked over Telegram and could be obtained by typing a person’s name into a Telegram bot.
What really happened?
According to a Malayalam news channel, the records linked to a mobile number registered on the CoWIN portal were accessible through the bot. It was also possible to determine which vaccine was administered and where.
According to the report, the list of individuals whose data was exposed includes BJP Tamil Nadu president K Annamalai, Congress MP Karti Chidambaram, and former BJP union health minister Harsh Vardhan. Telangana’s minister of information and communication technology, Kalvakuntla Taraka Rama Rao, is also on the list.
MeitY stated in response to the data leak, “It is old data; we are still confirming it. We have requested a report on the matter.”
After the media reports, the bot was disabled, but experts said the incident raised serious concerns because the information might be used for identity theft, phishing emails, scams, and extortion calls. The Indian Computer Emergency Response Team (CERT-In), the government’s nodal cybersecurity body, has opened an investigation into the incident.
On Monday, the central government declared the reports of a data breach in the repository of COVID vaccine beneficiaries to be “mischievous in nature” and claimed that the bot which purportedly accessed the confidential data was not directly accessing the CoWIN database.
According to an initial assessment by CERT-In, the government’s cybersecurity division, the bot might have been displaying information from previously stolen data.
The health ministry refuted the claim, asserting that no bots could access the information without first verifying with a one-time password.
“It is made clear that all of these rumours are false and malicious. The health ministry’s CoWIN portal is entirely secure and has adequate data privacy safeguards. The security of the data on the CoWIN portal is being ensured in every way possible,” according to a statement from the health ministry.
MeitY said the CoWIN application or database was not directly compromised and that the shared information appeared to have been taken from a previous breach. The incident nonetheless highlights the growing danger of cyberattacks, particularly on government websites.

Recent cases of data leaks
Domino’s India – Domino’s India, a division of Jubilant FoodWorks, faced a cyberattack on May 22, 2021, which led to the disclosure of information from 180 million orders. The breach exposed order details, email addresses, phone numbers, and credit card information. Although Jubilant FoodWorks acknowledged a security breach, it denied any illegal access to financial data.
Air India – A cyberattack that affected Air India in May 2021 exposed the personal information of about 4.5 million customers globally. Personal information recorded between August 26, 2011, and February 3, 2021, including names, dates of birth, contact information, passport information, ticket details, frequent flyer data from Star Alliance and Air India, and credit card information, was exposed in the breach.
Bigbasket – BigBasket, an online supermarket, had a data breach in November 2020, compromising the personal information of approximately 20 million consumers. Email IDs, password hashes, PINs, phone numbers, addresses, dates of birth, localities, and IP addresses were among the information released from an insecure database containing over 15 GB of customer data. BigBasket admitted to the incident and reported it to the Bengaluru Cyber Crime Department.
Unacademy – Unacademy, an online learning platform, experienced a data breach in May 2020, compromising the accounts of approximately 11 million subscribers. While no sensitive financial data was exposed, user data including user IDs, password hashes, joining dates, last login dates, email addresses, and names was leaked. The breach was detected when user accounts were found for sale on the dark web.
2022 Card Data – On October 12, 2022, researchers at the Singapore-based, AI-driven cybersecurity firm CloudSEK found a threat actor offering a database of 1.2 million payment cards for free on a dark-web crime forum. This followed an earlier leak of 7.9 million cardholder records on the BidenCash website, which included information on State Bank of India (SBI) customers. Several other well-known companies have likewise been targeted in high-profile data breaches in recent years.

Conclusion
Data breach cases are increasing daily, and attackers increasingly target the healthcare sector, where personal details are easy to harvest. This recent CoWIN case compromised the data of thousands of people, and hackers compromised the systems of the All India Institute of Medical Sciences only a few months earlier. While the precise number of people affected by the CoWIN breach could not be determined, the most recent data indicates that over 95% of adults have received their vaccinations.
As technological advancements continue to shape the future, the rise of artificial intelligence brings significant potential benefits but also raises concerns about the spread of misinformation. Recognising the need for accountability on both ends, on 5th May, during the three-day World News Media Congress 2025 in Kraków, Poland, the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) announced five core principles for their joint initiative, News Integrity in the Age of AI. The initiative is aimed at fostering dialogue and cooperation between media organisations and technology platforms, and the principles are intended as a code of practice for all participants. With thousands of public and private media outlets around the world joining the effort, the initiative highlights the shared responsibility of AI developers to ensure that AI systems are trustworthy, safe, and supportive of a reliable news ecosystem. It represents a global call to action to uphold the integrity of news and curb the growing challenge of misinformation.
The five core principles released focus on:
1. Authorisation of content by the originators is a must prior to its usage in Generative AI tools and models
2. High-quality and up-to-date news content must be recognised by third parties that are benefiting from it
3. There must be a focus on accuracy and attribution, making the original sources of news apparent to the public, promoting transparency
4. Harnessing the plural nature of the news perspectives, which will help AI-driven tools perform better and
5. An invitation to tech companies for an open dialogue with news outlets, facilitating conversation to collaborate and develop standards of transparency, accuracy, and safety.
As this initiative provides a unified platform to address and deliberate on issues affecting the integrity of news, there are also some other technical ways in which misinformation in news caused by AI can be curbed:
1. Encourage the usage of Smaller Generative AI Models: Large Language Models (LLMs) are trained on a vast range of topics, but most businesses need only a small slice of information that is relevant to them. A narrower context to source from allows better content navigation and reduces the chance of mix-ups.
2. Fighting AI hallucination: Hallucination is a phenomenon in which generative AI (such as chatbots and computer vision tools) presents nonsensical or inaccurate output as fact, because the system perceives objects or patterns that are imperceptible or non-existent to human observers. It occurs partly because the system prioritises language fluency while stitching together information from different sources. One way to deal with this is retrieval augmented generation (RAG), which connects the model to external data sources, such as academic journals or a company’s organisational data, that help it produce more accurate, domain-specific content.
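To make the RAG idea concrete, here is a minimal, dependency-free sketch of the retrieval step: documents are scored by keyword overlap with the query, and the best match is stitched into a grounded prompt. The corpus and prompt format are invented for illustration; a real pipeline would use vector embeddings and pass the prompt to an actual LLM.

```python
import re

# Illustrative corpus; a real system would index journals, org data, etc.
CORPUS = [
    "The EBU and WAN-IFRA announced five principles for news integrity in AI.",
    "Retrieval augmented generation grounds model output in external sources.",
    "AI hallucination produces fluent but inaccurate or nonsensical output.",
]

def tokens(text: str) -> set:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list) -> str:
    """Return the document with the largest keyword overlap with the query."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(query: str, docs: list) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    return (f"Answer using only this source.\n"
            f"Source: {retrieve(query, docs)}\n"
            f"Question: {query}")

print(build_prompt("What is AI hallucination?", CORPUS))
```

Because the generator is instructed to answer only from the retrieved source, it has far less room to invent facts, which is precisely how RAG mitigates hallucination.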
Conclusion
This global call to action marks an important step toward fostering unified efforts to combat misinformation. The set of principles introduced is designed to be adaptable, providing a flexible framework that can evolve to address emerging challenges (through dialogue and discussion), including issues like copyright infringement. While AI offers powerful tools to support the news industry, it is essential to emphasise that human oversight remains crucial. These technological advancements are meant to enhance and augment the work of journalists, not replace it, ensuring that the core values of journalism, such as accuracy and integrity, are preserved in the age of AI.
References
● https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns
● https://trilateralresearch.com/responsible-ai/using-responsible-ai-to-combat-misinformation
● https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation
● https://2024.jou.ufl.edu/page/ai-and-misinformation
● https://techxplore.com/news/2025-05-ai-counter-misinformation-fact-based.html
● https://www.advanced-television.com/2025/05/06/media-outlets-call-for-ai-companies-news-integrity-protection/
● https://www.ibm.com/think/insights/ai-misinformation