#FactCheck: Viral video of fuel tank blast at UAE's Al Hamriyah Port portrayed as Russia-Ukraine conflict
Executive Summary:
A viral video showing flames and thick smoke from large fuel tanks has been shared widely on social media. Many claimed it showed a recent Russian missile attack on a fuel depot in Ukraine. However, our research found that the video is not related to the Russia-Ukraine conflict. It actually shows a fire that happened at Al Hamriyah Port in Sharjah, United Arab Emirates, on May 31, 2025. The confusion was likely caused by a lack of context and misleading captions.

Claim:
The circulating claim suggests that Russia deliberately bombed Ukraine's fuel reserves and that the viral video is evidence of the attack. The posts claim the fuel depot was destroyed purposefully during military operations, implying an escalation of the conflict. The narrative is designed to provoke emotional reactions and reinforce fears related to the war.

Fact Check:
A reverse image search of key frames from the viral video revealed that the footage actually shows Al Hamriyah Port in the UAE, not the Russia-Ukraine conflict. Further research showed that the same visuals were published by regional news outlets in the UAE, including Gulf News and Khaleej Times, which reported on a massive fire at Al Hamriyah Port on 31 May 2025.
According to these reports, a fire broke out at a fuel storage facility at Al Hamriyah Port. Fortunately, no casualties were reported, and fire management services responded promptly and brought the situation under control.
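For readers who want to replicate this kind of verification, below is a minimal sketch of one way to extract key frames from a clip so the stills can be uploaded to a reverse image search engine. It assumes the OpenCV library; the filename, frame interval, and output naming are illustrative placeholders, not part of the original investigation.

```python
# Minimal sketch: save roughly one frame per second from a video so the
# stills can be fed to a reverse image search engine.
# Assumes OpenCV (pip install opencv-python); "viral_clip.mp4" is a placeholder.
import cv2

def extract_key_frames(video_path: str, out_prefix: str = "frame") -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or unreadable file
            break
        if index % int(fps) == 0:  # roughly one frame per second
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_key_frames("viral_clip.mp4"), "frames saved")
```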


Conclusion:
The belief that the viral video is evidence of a Russian strike in Ukraine is incorrect. The video actually shows a fire at a commercial port in the UAE. Sharing misleading footage like this distorts reality and incites fear based on falsehoods. The case is a reminder that not all viral media is what it appears to be, and viewers should verify the source and context of content before accepting or reposting it. In this instance, the claim is false and misleading.
- Claim: Fresh attack in Ukraine! Russian military strikes again!
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Cybersecurity threats have been globally prevalent for quite some time now. All nations, organisations and individuals stand at risk from new and emerging cybersecurity threats, which put finances, privacy, data, identities and sometimes human lives at stake. IBM's latest Data Breach Report revealed that a staggering 83% of organisations experienced more than one data breach during 2022. As per Verizon's 2022 Data Breach Investigations Report, the total number of global ransomware attacks surged by 13%, an increase as large as the previous five years combined. These statistics clearly show that the threat landscape will only grow as we advance further into the digital age.
Who is Okta?
Okta is a secure identity cloud that links all your apps, logins and devices into a unified digital fabric. Founded in 2009 and based in San Francisco, USA, Okta has been one of the leading identity service providers in the States, with early success built on the quality of the services and products it introduced to the market. Although Okta is not as well known as the big techs, it plays a vital role in the cybersecurity systems of large organisations. More than 18,000 customers rely on the identity management company's products to give them a single login for the several platforms a business uses. For instance, Zoom leverages Okta to provide "seamless" access to its Google Workspace, ServiceNow, VMware, and Workday systems with only one login, showing how Okta eases human effort across platforms. In the digital age, such organisations are instrumental in leading the pathway to innovation and entrepreneurship.
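To make the single-login idea concrete, here is a minimal sketch of how a backend service might validate an access token issued through an identity provider's OpenID Connect endpoints. It assumes the PyJWT library; the Okta domain, audience, and URLs below are hypothetical placeholders, not a description of any real deployment.

```python
# Minimal sketch: server-side validation of an identity-provider-issued JWT.
# Assumes PyJWT >= 2.x (pip install "pyjwt[crypto]"); the domain and
# audience values are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

ISSUER = "https://example.okta.com/oauth2/default"  # placeholder issuer
JWKS_URL = f"{ISSUER}/v1/keys"                      # placeholder JWKS endpoint
AUDIENCE = "api://default"                          # placeholder audience

def validate_access_token(token: str) -> dict:
    # Fetch the public signing key matching the token's "kid" header.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Verify signature, expiry, issuer and audience in one call;
    # raises jwt.exceptions.InvalidTokenError on any failure.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```

Because every downstream application trusts the same issuer, one login yields tokens that many services can verify independently; this is the design choice that makes the identity provider such a high-value target.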
The Okta Breach
On Friday, 20 October 2023, Okta reported a hack of its support system, causing considerable turmoil within the organisation. The fallout has been visible in the market in the form of massive losses on the stock exchange: since the attack, the company's market value has dropped by more than $2 billion. The incident is the most recent in a long line of events connected to Okta or its products, including a wave of casino intrusions earlier this year that caused days-long disruptions to hotels in Las Vegas; casino giants Caesars and MGM were both affected. Both of those attacks targeted the companies' Okta installations and used sophisticated social engineering directed at IT help desks.
What can be done to prevent this?
Cybersecurity attacks on organisations have become very common since the pandemic and are rampant across the globe. The major big techs have succeeded in setting up SOPs, safeguards and precautionary measures to protect their companies and their digital assets and interests. However, micro, small and medium business owners remain the most vulnerable to such high-intensity attacks. The governments of various nations have established Computer Emergency Response Teams to monitor and investigate massive-scale cyberattacks on both organisations and individuals. The issue of cybersecurity can be better addressed by inculcating the following aspects into our daily digital routines:
- Team Upskilling: Organisations must prioritise creating upskilling avenues for employees pertaining to cybersecurity and emerging threats. These campaigns should run periodically, focusing on both the individual and the organisational impact of any threat.
- Reporting Mechanism for Employees and Customers: Business owners and organisations need to deploy robust, sustainable and efficient reporting mechanisms for employees as well as customers. Such a mechanism is fundamental in pinpointing potential grey areas and threats in the cybersecurity setup. A dedicated reporting mechanism is now mandated by many governments around the world, as it demonstrates transparency and natural justice in terms of legal remedies.
- Preventive, Precautionary and Recovery Policies: Organisations need to create and deploy respective preventive, precautionary and recovery policies in regard to different forms of cyber attacks and threats. This will be helpful in a better understanding of threats and faster response in cases of emergencies and attacks. These policies should be updated regularly, keeping in mind the emerging technologies. Efficient deployment of the policies can be done by conducting mock drills and threat assessment activities.
- Global Dialogue Forums: It is pertinent for organisations and the industry to create a community of cybersecurity enthusiasts from diverse backgrounds to address the growing issues of cyberspace. This can be done by creating global dialogue forums, which will act as beacons for sharing best practices, advisories, threat assessment reports and warnings of potential threats and attacks, thus establishing better inter-agency and inter-organisation communication and coordination.
- Data Anonymisation and Encryption: Organisations should have data management and processing policies in place for transparency and should always store data in an encrypted and anonymised manner, creating a blanket of safety in case of any data breach (a brief illustration follows this list).
- Critical Infrastructure: Industry leaders should push the limits of innovation by setting up state-of-the-art critical cyber infrastructure, creating employment, innovation and an entrepreneurial spirit among the youth, and with it a whole new generation of cyber-ready professionals and dedicated netizens. Critical infrastructure is essential to a safe, secure and resilient digital ecosystem.
- Cybersecurity Audits & Sandboxing: All organisations should establish periodic cybersecurity audits, by both internal and external entities, to find any issues or grey areas in their security systems. This will create a more robust and adaptive cybersecurity mechanism for the organisation and its employees. All companies developing or testing technology should conduct proper sandboxing exercises for any new software to identify its shortcomings and flaws before release.
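As a brief illustration of the encryption-and-anonymisation point above, the sketch below encrypts a sensitive field and replaces a direct identifier with a salted hash before storage. It is a minimal example assuming the `cryptography` package; the field names, salt handling, and in-memory key are simplified assumptions, not a production design.

```python
# Minimal sketch: store a record with the sensitive field encrypted and
# the direct identifier pseudonymised via a salted one-way hash.
# Assumes the cryptography package (pip install cryptography).
import hashlib
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()   # in practice, load from a key management service
SALT = b"static-demo-salt"    # in practice, use a secret, rotated salt
fernet = Fernet(KEY)

def pseudonymise(identifier: str) -> str:
    # Salted SHA-256 so the raw identifier never reaches storage.
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

def protect_record(email: str, notes: str) -> dict:
    return {
        "user_id": pseudonymise(email),           # irreversible pseudonym
        "notes": fernet.encrypt(notes.encode()),  # reversible only with KEY
    }

record = protect_record("alice@example.com", "support ticket details")
print(record["user_id"][:12], fernet.decrypt(record["notes"]).decode())
```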
Conclusion
In view of the rising cybersecurity attacks on organisations, especially small and medium companies, a lot has been done, and a lot more needs to be done, to establish safety and security for companies, employees and customers. The impact of the Okta breach clearly shows how cyber attacks can cause massive repercussions for any organisation in the form of monetary loss, loss of business, damage to reputation and much else. Organisations should treat such instances as learning opportunities and prepare to combat similar threats, ultimately working towards preventing them and eradicating the influence of bad actors from our digital ecosystem altogether.
References:
- https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach
- https://www.okta.com/intro-to-okta/
- https://www.cyberpeace.org/resources/blogs/mgm-resorts-shuts-down-it-systems-after-cyberattack
Introduction
A Senate bill introduced on 19 March 2024 in the United States would require online platforms to obtain consumer consent before using their data to train Artificial Intelligence (AI) models. Under the Artificial Intelligence Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) Bill, a company's failure to obtain this consent would be considered a deceptive or unfair practice and would result in enforcement action from the Federal Trade Commission (FTC). The legislation aims to strengthen consumer protection and give Americans the power to determine how online platforms use their data.
The proposed bill also seeks to create standards for disclosures, including requiring platforms to give consumers instructions on how to affirm or rescind their consent. The option to grant or revoke consent should be available at any time through an accessible and easily navigable mechanism, and the option to withhold or reverse consent must be at least as prominent as the option to accept, taking the same number of steps or fewer.
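To make these consent mechanics concrete, here is a minimal sketch of how a platform might record opt-in and revocation events so that withdrawing consent is exactly as simple as granting it. The data model and names are illustrative assumptions; the bill prescribes outcomes, not this implementation.

```python
# Minimal sketch: a consent ledger where granting and revoking consent
# are symmetric one-step operations, mirroring the bill's requirement
# that revocation be no harder than acceptance. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    events: list = field(default_factory=list)  # append-only audit trail

    def _record(self, user_id: str, granted: bool) -> None:
        self.events.append((user_id, granted, datetime.now(timezone.utc)))

    def grant(self, user_id: str) -> None:    # one step to opt in
        self._record(user_id, True)

    def revoke(self, user_id: str) -> None:   # one step to opt out
        self._record(user_id, False)

    def may_train_on(self, user_id: str) -> bool:
        # Only the user's most recent consent event counts.
        for uid, granted, _ in reversed(self.events):
            if uid == user_id:
                return granted
        return False  # no recorded consent means no training

ledger = ConsentLedger()
ledger.grant("user-42")
ledger.revoke("user-42")
print(ledger.may_train_on("user-42"))  # False
```

The append-only event list doubles as the audit trail a regulator could inspect, which is one way a platform might demonstrate the transparency the bill calls for.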
The AI Consent bill directs the FTC to implement regulations to improve transparency by requiring companies to disclose when the data of individuals will be used to train AI and receive consumer opt-in to this use. The bill also commissions an FTC report on the technical feasibility of de-identifying data, given the rapid advancements in AI technologies, evaluating potential measures companies could take to effectively de-identify user data.
The definition of ‘Artificial Intelligence System’ under the proposed bill
ARTIFICIAL INTELLIGENCE SYSTEM: The term "artificial intelligence system" means a machine-based system that—
1. Is capable of influencing the environment by producing an output, including predictions, recommendations or decisions, for a given set of objectives; and
2. Uses machine or human-based data and inputs to
(i) Perceive real or virtual environments;
(ii) Abstract these perceptions into models through analysis in an automated manner (such as by using machine learning) or manually; and
(iii) Use model inference to formulate options for outcomes.
Importance of the proposed AI Consent Bill USA
1. Consumer Data Protection: The AI CONSENT Bill primarily upholds the privacy rights of the individual. By requiring consumer consent before data is used for AI training, the bill aims to empower individuals with full autonomy over the use of their personal information. Its scope aligns with the broader objective of data protection laws globally, stressing the criticality of privacy rights and autonomy.
2. Prohibition Measures: The proposed bill intends to prohibit covered entities from exploiting consumers' data for training purposes without their consent. This prohibition extends to the sale of data, its transfer to third parties and its usage. Such measures aim to prevent data misuse and the exploitation of personal information, ensuring companies cannot leverage consumer information for AI development without a transparent consent process.
3. Transparent Consent Procedures: The bill calls for clear and conspicuous disclosures to be provided by the companies for the intended use of consumer data for AI training. The entities must provide a comprehensive explanation of data processing and its implications for consumers. The transparency fostered by the proposed bill allows consumers to make sound decisions about their data and its management, hence nurturing a sense of accountability and trust in data-driven practices.
4. Regulatory Compliance: The bill's guidelines set strict requirements for procuring an individual's consent. Entities must follow a prescribed mechanism for consent solicitation, making the process streamlined and accessible for consumers. Moreover, the acquisition of consent must be independent, i.e. not bundled with terms of service or other contractual obligations. These provisions underscore the importance of active and informed consent in data processing activities, reinforcing the principles of data protection and privacy.
5. Enforcement and Oversight: To enforce compliance, the bill establishes robust mechanisms for oversight and enforcement. Violations of the prescribed regulations are treated as unfair or deceptive acts, empowering regulatory bodies like the FTC to ensure adherence to data privacy standards. By holding covered entities accountable, the bill fosters a culture of accountability and responsibility in data handling practices, thereby enhancing consumer trust and confidence in the digital ecosystem.
Importance of Data Anonymization
Data anonymisation is the process of concealing or removing personal or private information from a dataset to safeguard the privacy of the individuals associated with it. It is a form of information sanitisation in which techniques encrypt or delete personally identifying information to protect the privacy of data subjects. This reduces the danger of unintentional exposure during cross-border data transfers and allows for easier assessment and analytics after anonymisation. When personal information is compromised, the organisation suffers not just a security breach but also a breach of trust from the client or consumer. Such attacks can result in a wide range of privacy infractions, including breach of contract, discrimination, and identity theft.
The AI CONSENT Bill asks the FTC to study data de-identification methods. Data anonymisation is critical to improving privacy protection since it reduces the danger of re-identification and unauthorised access to personal information. By investigating and potentially mandating anonymisation procedures, regulatory bodies can increase privacy safeguards and reduce the risks connected with data processing operations.
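As an illustration of the kind of de-identification technique the FTC would study, the sketch below suppresses direct identifiers and generalises quasi-identifiers (an exact age becomes an age band, a ZIP code becomes a prefix). This is a simplified, assumed example with made-up field names; real de-identification requires a formal re-identification risk assessment.

```python
# Minimal sketch: de-identify a record by dropping direct identifiers
# and generalising quasi-identifiers. Field names are assumptions.
def de_identify(record: dict) -> dict:
    decade = (record["age"] // 10) * 10
    return {
        # Direct identifiers (e.g. "name") are suppressed entirely.
        "age_band": f"{decade}-{decade + 9}",    # generalise exact age
        "zip_prefix": record["zip"][:3] + "**",  # coarsen location
        "purchase": record["purchase"],          # non-identifying payload kept
    }

raw = {"name": "Alice", "age": 34, "zip": "90210", "purchase": "book"}
print(de_identify(raw))  # {'age_band': '30-39', 'zip_prefix': '902**', 'purchase': 'book'}
```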
The AI CONSENT Bill explicitly emphasises de-identification methods. India's DPDP Act 2023, while not specifically addressing data de-identification, emphasises data minimisation principles, which suggests a potential future focus on data anonymisation processes and techniques in India as well.
Conclusion
The proposed AI CONSENT Bill in the US represents a significant step towards enhancing consumer privacy rights and data protection in the context of AI development. Through its stringent prohibitions, transparent consent procedures, regulatory compliance measures, and robust enforcement mechanisms, the bill strives to balance fostering innovation in AI technologies with safeguarding the privacy and autonomy of individuals.
References:
- https://fedscoop.com/consumer-data-consent-training-ai-models-senate-bill/
- https://www.dataguidance.com/news/usa-bill-ai-consent-act-introduced-house
- https://datenrecht.ch/en/usa-ai-consent-act-vorgeschlagen/
- https://www.lujan.senate.gov/newsroom/press-releases/lujan-welch-introduce-billto-require-online-platforms-receive-consumers-consent-before-using-their-personal-data-to-train-ai-models/
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images had appeared on the dark web. The UK's National Crime Agency makes around 800 arrests a month relating to online threats to children and estimates that 840,000 adults pose a potential threat. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery as part of the Crime and Policing Bill, due before Parliament in the coming weeks, aligning with global AI regulations such as the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed UK law criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, as amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report