#FactCheck - Misleading Video of Dubai Airport Attack Circulates Online, Found AI-Generated
Executive Summary
Amid rising tensions in the Middle East following attacks on Iran by the United States and Israel, a video is circulating on social media that purportedly shows a recent attack at Dubai International Airport. Research by CyberPeace found the viral claim to be false: the video is not real but was created using artificial intelligence technology.
Claim:
An Instagram user shared the viral video on March 1, 2026, claiming it shows an attack at Dubai Airport. The link to the post, the archive link, and a screenshot are provided below.

Fact Check:
To verify the viral claim, we searched Google using relevant keywords but found no credible media report confirming it. On closely examining the viral video, we noticed several unusual visuals and technical inconsistencies, raising suspicion that it might be AI-generated. To verify this, we scanned the video using the AI detection tool Sightengine. According to the results, the video shows an approximately 74 percent likelihood of being AI-generated.
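For readers curious how such a check works in practice, the sketch below shows how one might query an AI-content-detection service like Sightengine and apply a decision threshold to its score. The endpoint URL, the `genai` model name, and the response shape mirror Sightengine's published image-moderation API pattern but are assumptions here; consult the vendor documentation before relying on them.

```python
# Hedged sketch of querying an AI-content-detection service such as Sightengine.
# The endpoint, "genai" model name, and response fields are assumptions modelled
# on the vendor's documented API pattern, not a verified integration.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.sightengine.com/1.0/check.json"  # assumed endpoint


def ai_generated_score(media_url: str, api_user: str, api_secret: str) -> float:
    """Ask the detector for a 0..1 likelihood that the media is AI-generated."""
    query = urllib.parse.urlencode({
        "url": media_url,
        "models": "genai",  # assumed model name for AI-generation detection
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{API_URL}?{query}", timeout=30) as resp:
        data = json.load(resp)
    # Assumed response shape: {"type": {"ai_generated": 0.74, ...}, ...}
    return data.get("type", {}).get("ai_generated", 0.0)


def is_likely_ai_generated(score: float, threshold: float = 0.5) -> bool:
    """Apply a simple decision threshold to the detector's score."""
    return score >= threshold
```

At the roughly 74 percent score reported for the viral video, `is_likely_ai_generated(0.74)` would flag the clip for further scrutiny; the threshold is a policy choice, not something the detector prescribes.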

Conclusion:
Our research found that the viral video is not real but has been created using artificial intelligence technology.
Introduction
Big Tech has been pushing back against regulatory measures, particularly those governing data handling practices, and X Corp (formerly Twitter) has taken a prominent stance in India. The platform has filed a petition against the Central and State governments, challenging content-blocking orders and opposing the Centre's newly launched Sahyog portal. X Corp has further labelled the Sahyog portal a 'censorship portal' that enables government agencies to issue blocking orders using a standardised template.
The key regulations governing the tech space in India include the IT Act, 2000, the IT Rules of 2021 and 2023 (which stress platform accountability and content moderation), and the DPDP Act, 2023, which governs personal data. X Corp's petition raises concerns for digital freedom, platform accountability, and the evolving regulatory frameworks in India.
Elon Musk vs Indian Government: Key Issues at Stake
The 2021 IT Rules, particularly Rule 3(1)(d) of Part II, outline intermediaries' obligations regarding ‘Content Takedowns’. Intermediaries must remove or disable access to unlawful content within 36 hours of receiving a court order or government notification. Notably, the rules do not require government takedown requests to be explicitly in writing, raising concerns about potential misuse.
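The 36-hour window described above can be made concrete with a small sketch of the compliance arithmetic an intermediary faces once an order lands. The function and variable names are hypothetical; only the 36-hour figure comes from the text.

```python
# Illustrative sketch: the 36-hour compliance window that Rule 3(1)(d) of the
# IT Rules, 2021 gives an intermediary after receiving a court order or
# government notification. Names here are hypothetical illustrations.
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=36)  # Rule 3(1)(d), IT Rules 2021


def takedown_deadline(order_received_at: datetime) -> datetime:
    """Latest time by which access to the flagged content must be disabled."""
    return order_received_at + TAKEDOWN_WINDOW


def is_compliant(order_received_at: datetime, removed_at: datetime) -> bool:
    """True if the content was removed within the statutory window."""
    return removed_at <= takedown_deadline(order_received_at)
```

For example, an order received at midnight on 1 January must be acted on by noon on 2 January; removal at 13:00 that day would already be out of time.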
X’s petition also focuses on the Sahyog Portal, a government-run platform that allows various agencies and state police to request content removal directly. X contends that failure to comply with such orders can expose intermediaries' officers to prosecution. This has sparked controversy, with the platform arguing that such provisions grant the government excessive control, potentially undermining free speech and fostering undue censorship.
The broader implications include geopolitical tensions, potential business risks for big tech companies, and significant effects on India's digital economy, user engagement, and platform governance. Balancing regulatory compliance with digital rights remains a crucial challenge in this evolving landscape.
The Global Context: Lessons from Other Jurisdictions
The EU's Digital Services Act (DSA) establishes a baseline 'notice and takedown' system. Under the Act, hosting providers, including online platforms, must enable third parties to notify them of illegal content, which they must promptly remove to retain their hosting defence. The DSA also mandates expedited removal processes for notifications from trusted flaggers, suspension of users with frequent violations, and enhanced protections for minors. Additionally, hosting providers must adhere to specific content removal obligations, including the removal of terrorist content within one hour and the deployment of technology to detect and remove known or new CSAM.
In contrast to the EU, the US First Amendment protects speech from state interference but does not extend to private entities. Dominant digital platforms, however, significantly influence discourse by moderating content, shaping narratives, and controlling advertising markets. This dual role creates tension as these platforms balance free speech, platform safety, and profitability.
India has adopted a model closer to the EU's approach, emphasizing content moderation to curb misinformation, false narratives, and harmful content. Drawing from the EU's framework, India could establish third-party notification mechanisms, enforce clear content takedown guidelines, and implement detection measures for harmful content like terrorist material and CSAM within defined timelines. This would balance content regulation with platform accountability while aligning with global best practices.
Key Concerns and Policy Debates
As the issue stands, the main concerns that arise are:
- The need for transparency in government takedown orders: the reasons behind them, a clear framework for when they are warranted, and guidelines for issuing them.
- The need to balance digital freedom with national security, and the concerns this raises for tech companies: essentially, the role platforms play in safeguarding the democratic values enshrined in the Constitution of India.
- The Karnataka High Court's eventual ruling, which has the potential to redefine the principles on which the intermediary guidelines function under Indian law.
Potential Outcomes and the Way Forward
While we await the Hon’ble Court’s directives in response to the suit, and while the decision could favour either side or lead to a negotiated resolution, the broader takeaway is the necessity of collaborative policymaking that balances governmental oversight with platform accountability. This debate underscores the pressing need for a structured and transparent regulatory framework for content moderation. The case also highlights the importance of due process in content regulation and of legal clarity for tech companies operating in India. Ultimately, a consultative and principles-based approach will be key to ensuring a fair and open digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/elon-musks-x-sues-union-government-over-alleged-censorship-and-it-act-violations/article69352961.ece
- https://www.hindustantimes.com/india-news/elon-musk-s-x-sues-union-government-over-alleged-censorship-and-it-act-violations-101742463516588.html
- https://www.financialexpress.com/life/technology-explainer-why-has-x-accused-govt-of-censorship-3788648/
- https://thelawreporters.com/elon-musk-s-x-sues-indian-government-over-alleged-censorship-and-it-act-violations
- https://www.linklaters.com/en/insights/blogs/digilinks/2023/february/the-eu-digital-services-act---a-new-era-for-online-harms-and-intermediary-liability
Introduction
India's National Commission for Protection of Child Rights (NCPCR) is set to approach the Ministry of Electronics and Information Technology (MeitY) to recommend a KYC-based system for verifying children's ages under the Digital Personal Data Protection (DPDP) Act. The decision was taken at a closed-door meeting NCPCR held on August 13 with social media entities, where the Commission emphasised proposing a KYC-based age verification mechanism. Against this background, Section 9 of the DPDP Act, 2023 defines a child as a person below 18 years of age and mandates that such a child's age be verified and parental consent obtained before their personal data is processed.
Requirement of Verifiable Consent Under Section 9 of DPDP Act
Regarding the processing of children's personal data, Section 9 of the DPDP Act, 2023, provides that for children below 18 years of age, consent from parents/legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Additionally, behavioural monitoring or targeted advertising directed at children is prohibited.
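The Section 9 obligations described above can be summarised as a simple decision gate a Data Fiduciary might implement before processing a Data Principal's data. This is an illustrative sketch only: the Act prescribes legal obligations, not APIs, and every class and field name here is hypothetical.

```python
# Illustrative sketch of the Section 9 (DPDP Act, 2023) decision gate.
# All names are hypothetical; the statute defines duties, not code.
from dataclasses import dataclass

ADULT_AGE = 18  # Section 9 treats anyone below 18 as a child


@dataclass
class DataPrincipal:
    age: int
    guardian_consent_verified: bool = False  # verifiable parent/guardian consent


def may_process(principal: DataPrincipal, purpose: str) -> bool:
    """Return True if processing for `purpose` is permissible under Section 9."""
    is_child = principal.age < ADULT_AGE
    # Targeted advertising and behavioural monitoring directed at children
    # are prohibited outright, regardless of consent.
    if is_child and purpose in {"targeted_advertising", "behavioural_monitoring"}:
        return False
    # Other processing of a child's data requires verified guardian consent.
    if is_child:
        return principal.guardian_consent_verified
    return True
```

Note how consent does not unlock everything: even with verified guardian consent, the targeted-advertising and behavioural-monitoring purposes stay blocked for a child, mirroring the absolute prohibition in the Act.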
Ongoing debate on Method to obtain Verifiable Consent
Section 9 of the DPDP Act gives parents or lawful guardians more control over their children's data and privacy, and it empowers them to make decisions about how to manage their children's online activities/permissions. However, obtaining such verifiable consent from the parent or legal guardian presents a quandary. It was expected that the upcoming 'DPDP rules,' which have yet to be notified by the Central Government, would shed light on the procedure of obtaining such verifiable consent from a parent or lawful guardian.
However, in the meeting held on 18th July 2024 between MeitY and social media companies to discuss the upcoming Digital Personal Data Protection Rules (DPDP Rules), MeitY indicated that it may not prescribe a ‘specific mechanism’ for Data Fiduciaries to verify parental consent for minors using digital services. MeitY instead emphasised the obligation placed on Data Fiduciaries under Section 8(4) of the DPDP Act to implement “appropriate technical and organisational measures” to ensure effective observance of the Act's provisions.
In a recent update, MeitY held a review meeting on DPDP rules, where they focused on a method for determining children's ages. It was reported that the ministry is making a few more revisions before releasing the guidelines for public input.
CyberPeace Policy Outlook
CyberPeace, in its policy recommendations paper published last month (available here), also advised obtaining verifiable parental consent through methods such as government-issued ID, integrating parental consent at ‘entry points’ like app stores, using consent forms, drawing on foreign laws such as California privacy law and COPPA, and developing child-friendly SIMs for enhanced child privacy.
CyberPeace in its policy paper also emphasised that when deciding the method to obtain verifiable consent, the respective platforms need to be aligned with the fact that verifiable age verification must be done without compromising user privacy. Balancing user privacy is a question of both technological capabilities and ethical considerations.
The DPDP Act is a new framework for protecting digital personal data that places obligations on Data Fiduciaries and grants rights to Data Principals. The upcoming DPDP Rules, expected to be notified soon, will define the detailed procedures for implementing the Act's provisions; MeitY is refining the rules before releasing them for public consultation. NCPCR's approach is aimed at ensuring child safety in the digital era. We hope MeitY arrives at a sound mechanism for obtaining verifiable consent from parents and lawful guardians after duly considering the recommendations of stakeholders, expert organisations, and concerned authorities such as NCPCR.
References
- https://www.moneycontrol.com/technology/dpdp-rules-ncpcr-to-recommend-meity-to-bring-in-kyc-based-age-verification-for-children-article-12801563.html
- https://pune.news/government/ncpcr-pushes-for-kyc-based-age-verification-in-digital-data-protection-a-new-era-for-child-safety-215989/#:~:text=During%20this%20meeting%2C%20NCPCR%20issued,consent%20before%20processing%20their%20data
- https://www.hindustantimes.com/india-news/ncpcr-likely-to-seek-clause-for-parents-consent-under-data-protection-rules-101724180521788.html
- https://www.drishtiias.com/daily-updates/daily-news-analysis/dpdp-act-2023-and-the-isssue-of-parental-consent

Introduction
In today’s digital environment, national security challenges extend well beyond traditional military domains. One growing concern is the unauthorised extraction of information, increasingly pursued through subtle and gradual methods rather than overt force. Recent advisories point to a rising pattern in which foreign organisations seek to recruit individuals to collect and handle sensitive material, often using financial cybercrime networks as part of their operational ecosystem. This trend has implications for journalists, defence personnel, researchers, students, and academics working in strategic, geopolitical, and security-related fields. The core risk lies in the fact that these activities can proceed quietly and without coercion, with participants sometimes unaware that their actions may contribute to intelligence gathering efforts.
Digital Platforms as Vectors for Targeted Recruitment
Professional networking and job portals have become central to modern career development. The same visibility that supports professional advancement is being misused by others. Foreign entities reportedly use these platforms to identify individuals with experience in journalism, defence services, strategic studies, cybersecurity, and international relations.
Early-career professionals and students from reputed Higher Education Institutions (HEIs) are particularly vulnerable because they seek freelance work, research experience, and international partnerships. Initial outreach is often framed as legitimate consultancy, research assistance, or content development work, creating an impression of professional credibility through what looks like normal business activity.
Task-Based Information Extraction
These organisations assign new recruits writing and research tasks that appear simple to perform. Commissioned source-based articles and analytical pieces typically cover subjects such as:
- India's foreign relations and strategic partnerships.
- The operations and movements of the armed forces.
- Defence procurement, including weapon system development and modernisation projects.
- Military activities such as joint training exercises and war simulations.
Most of this information is publicly available; the threat arises from how it is collected and interpreted alongside contextual detail. Aggregating insights from multiple sources allows hostile organisations to identify operational patterns, strategic priorities, and capability assessments that go well beyond any individual data point.
The Financial Cybercrime Nexus
A major problem with this activity is the financial system used to pay contributors. Payments are often routed through:
- Indian bank accounts, including student accounts
- Funds originating from cyber fraud or financial crimes
- Occasional overseas transfers structured to avoid scrutiny
This payment structure directly links contributors to financial cybercrime and the theft of confidential information, exposing them to unintended legal jeopardy and reputational damage. Under Indian law, connections to illegal financial activity can constitute serious offences even where the individual had no criminal intent.
Concealed Identities and Data Harvesting
The entities behind these recruitment activities deliberately conceal their real identities, operating through intermediaries presented as foreign consulting firms, think tanks, or analytics companies. Contributors with defence or security experience may also be asked to provide personal data, including PAN and Aadhaar details.
The collection of such data raises significant concerns. It creates lasting privacy risks, enabling unauthorised access to personal data, identity theft, and coercion. The ultimate use of this information often remains opaque to the individuals providing it.
Why Incremental Leakage Matters
The threat operates silently because it lacks the visibility of a major cyberattack. No single article or research note may cause harm on its own, but their combined effect can. Through incremental information leakage, hostile organisations can analyse the data they gather to build:
- maps of strategic capabilities,
- defence readiness evaluations,
- security and foreign policy narrative control.
Information sovereignty erodes as the boundaries between journalism, academic research, consultancy, and strategic analysis blur; the same blurring makes it difficult to determine who is responsible for research outcomes.
The Role of Institutions and Individuals
Universities, media outlets, and professional organisations have an essential role in mitigating these risks. Proactive steps include:
- Organising training programmes that build awareness of these recruitment tactics.
- Requiring thorough due diligence before researchers accept paid research or writing assignments.
- Advising individuals not to share identity documents except where their institution requires them for authentication.
- Establishing clear channels for reporting suspicious approaches.
Students and professionals need to understand that their specialised knowledge and trustworthiness can be turned against them. They must protect their digital identities, verify the affiliations of those who approach them, and assess the broader implications of their routine professional activities.
Conclusion
Cyber-enabled threats to national security increasingly operate in grey zones, which makes their legality, legitimacy, and true intent difficult to assess. The convergence of foreign recruitment efforts, financial cybercrime, and covert information gathering creates a persistent risk that is still not widely recognised or fully understood. Protecting sensitive information is not the state's responsibility alone. National resilience in an interconnected knowledge economy requires institutional awareness, restraint, and vigilance. Because data continues to shape power relationships, cyber resilience depends on secure systems and informed citizens alike.
References
- https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf
- https://www.cyber-espionage.ch/
- https://www.theguardian.com/world/2025/nov/18/mi5-issues-alert-to-mps-and-peers-over-chinese-espionage
- http://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/download/263/92
- https://www.researchgate.net/publication/368461675_Cyber_Espionage_Consequences_as_a_Growing_Threat