#FactCheck: Viral video of fuel tank blast at UAE's Al Hamriyah Port portrayed as Russia-Ukraine conflict
Executive Summary:
A viral video showing flames and thick smoke from large fuel tanks has been shared widely on social media. Many claimed it showed a recent Russian missile attack on a fuel depot in Ukraine. However, our research found that the video is not related to the Russia-Ukraine conflict. It actually shows a fire that happened at Al Hamriyah Port in Sharjah, United Arab Emirates, on May 31, 2025. The confusion was likely caused by a lack of context and misleading captions.

Claim:
The circulating claim suggests that Russia deliberately bombed Ukraine's fuel reserves and that the viral video is evidence of the bombing. The posts allege the fuel depot was destroyed purposefully during military operations, implying an escalation of violence. This narrative is intended to stir emotions and reinforce fears related to the war.

Fact Check:
A reverse image search of key frames from the viral video shows that the footage is actually from Al Hamriyah Port, UAE, and has nothing to do with the Russia-Ukraine conflict. Further research found that the same visuals were published by regional news outlets in the UAE, including Gulf News and Khaleej Times, which reported on a massive fire at Al Hamriyah Port on 31 May 2025.
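For readers who want to try this verification step themselves, below is a minimal sketch of how key frames can be pulled from a downloaded clip using the OpenCV library, so each frame can be run through a reverse image search engine. The file name "viral_clip.mp4" is a placeholder, not the actual file analysed here.

```python
import cv2  # OpenCV, a common open-source computer vision library

# Minimal sketch, assuming the clip has been saved locally as "viral_clip.mp4"
# (a placeholder name). We save roughly one frame per second; each saved frame
# can then be uploaded to a reverse image search engine.
video = cv2.VideoCapture("viral_clip.mp4")
fps = int(video.get(cv2.CAP_PROP_FPS)) or 25  # fall back to 25 if FPS is unknown
frame_idx = saved = 0

while True:
    ok, frame = video.read()
    if not ok:  # end of video (or read error)
        break
    if frame_idx % fps == 0:  # roughly one frame per second
        cv2.imwrite(f"keyframe_{saved:03d}.png", frame)
        saved += 1
    frame_idx += 1

video.release()
print(f"Extracted {saved} key frames")
```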
As per the news reports, a fire broke out at a fuel storage facility in Al Hamriyah Port, UAE. Fortunately, no casualties were reported, and fire management services responded promptly, successfully bringing the situation under control.


Conclusion:
The belief that the viral video is evidence of a Russian strike in Ukraine is incorrect. The video actually shows a fire at a commercial port in the UAE. Sharing misleading footage of this kind distorts reality and incites fear based on falsehoods. It is a reminder that not all viral media is what it appears to be, and viewers should take the time to verify the source and context of content before accepting or reposting it. In this instance, the claim is false and misleading.
- Claim: Fresh attack in Ukraine! Russian military strikes again!
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Starting in mid-December 2024, a series of attacks has targeted Chrome browser extensions. Cyberhaven, a California-based data protection company, fell victim to one of them. Though identified in the U.S., the geographical extent and full impact of the campaign are yet to be determined. Assessing these cases can help us prepare for similar incidents in the future.
The Attack
Browser extensions are small software applications that add functionality or features to a web browser. They are written in HTML, CSS, or JavaScript and, like other software, can be coded to deliver malware. Also known as plug-ins, they have access to their own set of Application Programming Interfaces (APIs). They can also be used to remove unwanted page elements, such as pop-up advertisements and auto-play videos. Examples of browser extensions include ad blockers (for blocking ads and filtering content) and StayFocusd (which limits the time a user spends on a particular website).
In the aforementioned attack, the publisher of the browser extension at Cyberhaven received a phishing email from an attacker posing as Google Chrome Web Store Developer Support. It claimed that the extension did not comply with browser policies and urged the recipient to click the “Go to Policy” action item, which led to a page granting permissions to a malicious OAuth application called “Privacy Policy Extension” (Open Authorisation is a widely adopted standard used to authorise secure access via temporary tokens). Once the permission was granted, the attacker was able to inject malicious code into the target’s Chrome browser extension and steal user access tokens and session cookies. Further investigation revealed that logins for certain AI and social media platforms were targeted.
CyberPeace Recommendations
As attacks of such range continue to occur, it is encouraged that companies and developers take active measures that would make their browser extensions less susceptible to such attacks. Google also has a few guidelines on how developers can safeguard their extensions from their end. These include:
- Minimal Permissions for Extensions - Extensions should request only the permissions, APIs, and website access they actually depend on, as limiting extension privileges reduces the surface area an attacker can exploit.
- Prioritising Protection of Developer Accounts - A security breach here could compromise all users' data, as it would allow attackers to tamper with extensions through malicious code. Enabling 2FA (two-factor authentication), for instance by setting up a security key, is endorsed.
- HTTPS over HTTP - HTTPS should be preferred over HTTP, as it requires a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificate from an independent certificate authority (CA). This creates an encrypted connection between the server and the web browser, as the sketch below illustrates.
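As a concrete illustration, the following minimal sketch, using only Python's standard library, shows the kind of certificate verification a TLS client performs before any data is exchanged; “example.com” is a placeholder hostname.

```python
import socket
import ssl

# Minimal sketch: establish a TLS connection and verify the server's certificate.
# The handshake succeeds only if the certificate chains to a trusted CA and
# matches the hostname; otherwise ssl.SSLCertVerificationError is raised.
hostname = "example.com"  # placeholder, not a real target
context = ssl.create_default_context()  # loads the system's trusted CA certificates

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.3'
        print("Server certificate subject:", tls.getpeercert()["subject"])
```

Plain HTTP skips this verification and encryption entirely, which is why extension-related resources should be served only over HTTPS.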
Lastly, as was done in the case of the Cyberhaven attack, organisations are encouraged to be transparent when such incidents take place, so that they can be dealt with more effectively.
References
- https://indianexpress.com/article/technology/tech-news-technology/hackers-hijack-companies-chrome-extensions-cyberhaven-9748454/
- https://indianexpress.com/article/technology/tech-news-technology/google-chrome-extensions-hack-safety-tips-9751656/
- https://www.techtarget.com/whatis/definition/browser-extension
- https://www.forbes.com/sites/daveywinder/2024/12/31/google-chrome-2fa-bypass-attack-confirmed-what-you-need-to-know/
- https://www.cloudflare.com/learning/ssl/why-use-https/

Introduction
The mysteries of the universe have been a subject of human curiosity for thousands of years. Astrophysicists work constantly to unravel them, and with advancing technology this increasingly seems achievable. Recently, with the help of Artificial Intelligence (AI), scientists have probed the depths of the cosmos. AI has revealed the equation that properly “weighs” galaxy clusters. This groundbreaking discovery not only sheds light on the formation and behavior of these clusters but also marks a turning point in the exploration of the cosmos. Scientists and AI have together uncovered an astounding 430,000 galaxies strewn throughout the universe. The haul includes 30,000 ring galaxies, considered the most unusual of all galaxy forms. The discoveries are the first results of the "GALAXY CRUISE" citizen science initiative, contributed by 10,000 volunteers who sifted through data from the Subaru Telescope. After training the AI on 20,000 human-classified galaxies, scientists let it loose on 700,000 galaxies from the Subaru data.
Brief Analysis
A group of astronomers from the National Astronomical Observatory of Japan (NAOJ) has successfully applied AI to ultra-wide field-of-view images captured by the Subaru Telescope. The researchers achieved a high accuracy rate in finding and classifying spiral galaxies, and the technique is being used alongside citizen science for further discoveries.
Astronomers are increasingly using AI to analyse and clean raw astronomical images for scientific research. This involves feeding photos of galaxies into neural network algorithms, which can identify patterns in real data more quickly, and with fewer errors, than manual classification. These networks have numerous interconnected nodes and can recognise patterns; such algorithms are now 98% accurate in categorising galaxies.
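To give a flavour of what such a model looks like in code, here is a generic sketch in PyTorch, not the NAOJ team's actual pipeline: a small convolutional network trained for one step on stand-in data. The 64x64 image size, the two morphology classes, and the random tensors are all assumptions made for the demonstration.

```python
import torch
import torch.nn as nn

# A generic sketch: a small convolutional network that sorts 64x64 RGB galaxy
# images into two invented morphology classes (0 = spiral, 1 = non-spiral).
# Real pipelines train on large sets of human-labelled survey images.
class GalaxyClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # 64x64 input -> 16x16 feature maps

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GalaxyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a labelled batch of survey images.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```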
Another application of AI is exploring the nature of the universe, particularly dark matter and dark energy, which together make up over 95% of the universe's energy content. The quantity and evolution of these components have significant implications for everything from the arrangement of galaxies to the expansion of the universe.
AI is well suited to analysing the massive amounts of data involved, as training data for dark matter and dark energy studies comes from complex computer simulations. A neural network is fed these simulated results to learn how the parameters of the universe change, after which cosmologists can point the network at actual observational data.
These methods are becoming increasingly important as astronomical observatories generate enormous amounts of data. The Vera C. Rubin Observatory, for instance, is expected to produce over 60 petabytes of raw data, from which high-resolution images of the sky will be created, and AI-assisted computing is being utilised for this task.
Data annotation techniques for training neural networks range from simple tagging to image classification, which labels an image so the network understands it as a whole. More advanced methods, such as semantic segmentation, group the pixels of an image into clusters and give each cluster a label.
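The difference between these annotation styles shows up in how the labels themselves are stored. Below is a small illustrative sketch; the file name and class labels are invented. Image-level classification attaches one label to the whole image, while semantic segmentation assigns a class to every pixel via a mask.

```python
import numpy as np

# Illustrative sketch with invented file names and class labels.

# Image-level classification: one label describes the whole image.
classification_label = {"image": "galaxy_0001.png", "label": "spiral"}

# Semantic segmentation: every pixel carries a class, stored as a mask
# with the same height and width as the image.
CLASSES = {0: "background", 1: "galaxy", 2: "star"}
mask = np.zeros((64, 64), dtype=np.uint8)  # start with all pixels as background
mask[20:40, 20:40] = 1                     # a region of pixels labelled "galaxy"
mask[5:8, 50:53] = 2                       # a small region labelled "star"

# A training pipeline pairs each image with either the single label
# (classification) or the full per-pixel mask (segmentation).
print(classification_label["label"], "classes present in mask:", np.unique(mask))
```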
In this way, AI is becoming a crucial tool for space exploration, enabling the processing and analysis of vast amounts of data and advancing our understanding of the universe. However, clear policy guidelines and the ethical use of technology should be prioritised while harnessing the full potential of this contemporary technology.
Policy Recommendation
- Real-Time Data Sharing and Collaboration - Effective policies and frameworks should be established to promote real-time data sharing among astronomers, AI developers and research institutes. Open access to astronomical data should be encouraged to facilitate better innovation and bolster the application of AI in space exploration.
- Ethical AI Use - Proper guidelines and a well-structured ethical framework can facilitate judicious AI use in space exploration. The framework can play a critical role in addressing AI issues pertaining to data privacy, AI Algorithm bias and transparent decision-making processes involving AI-based tech.
- Investing in Research and Development (R&D) in the AI sector - Governments and corporate giants should prioritise the opportunity to capitalise on AI R&D in the field of space tech and exploration, for example through funding initiatives focused on developing AI algorithms for processing astronomical data, optimising telescope operations and detecting celestial bodies.
- Citizen Science and Public Engagement - Promoting citizen science initiatives allows better leverage of AI tools by involving the public in astronomical research. A prominent example is the SETI@home program (Search for Extraterrestrial Intelligence). Better outreach can educate and engage citizens in AI-enabled discovery programs such as identifying exoplanets, classifying galaxies and searching for life beyond Earth by detecting anomalies in radio waves.
- Education and Training - Training programs should be implemented to educate astronomers in AI techniques and the intricacies of data science. There is a need to foster collaboration between AI experts, data scientists and astronomers to harness the full potential of AI in space exploration.
- Bolster Computing Infrastructure - Authorities should ensure that proper computing infrastructure is in place to facilitate the application of AI in astronomy. This calls for greater investment in high-performance computing systems to process large volumes of data and run the AI models used to analyse astronomical data.
Conclusion
AI has seen expansive growth in the field of space exploration. As seen above, its multifaceted use cases include discovering new galaxies and classifying celestial objects by analysing the changing parameters of outer space. Nevertheless, to fully harness its potential, robust policy and regulatory initiatives are required to bolster real-time data sharing, not just within the scientific community but also between nations. Key policy considerations include investment in research, promotion of citizen science initiatives, and education and funding for astronomers. A further critical aspect is improving computing infrastructure, which is crucial for processing the vast amounts of data generated by astronomical observatories.
References
- https://mindy-support.com/news-post/astronomers-are-using-ai-to-make-discoveries/
- https://www.space.com/citizen-scientists-artificial-intelligence-galaxy-discovery
- https://www.sciencedaily.com/releases/2024/03/240325114118.htm
- https://phys.org/news/2023-03-artificial-intelligence-secret-equation-galaxy.html
- https://www.space.com/astronomy-research-ai-future

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to astonishing advances in artificial intelligence, the internet as we know it is rapidly changing into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, paid primarily in human wellbeing. It turns out that releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is already the reality for the roughly 75% of children and teenagers who have chatted with chatbot-generated fictitious characters. AI (artificial intelligence) chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical conduct become more important. An investigation of the internal policies of a major international tech firm last year exposed alarming gaps: AI chatbots were allowed to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety amid the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (entities that determine the purposes and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural monitoring and child-targeted advertising. According to judicial interpretations, a child's well-being includes not just medical care but also moral, ethical, and emotional growth.
While the DPDP Act is a big step in the right direction, important lacunae remain in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which largely concentrates on consent and harm prevention in data protection. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse content, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour. Child protection experts agree that illicit or sexual conversations with children in cyberspace are unacceptable, and permitting even "flirtatious" conversation risks normalising unsafe boundaries.
- International Standards and Best Practices - The concept of "safety by design" is highly valued in child online safety guidelines from around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Bill. Mandating that platforms and developers proactively remove risks, rather than respond reactively to harms, is the bare minimum standard; any AI guidelines that leave loopholes for child-directed roleplay fall short of it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The guidelines also allowed AI to create false narratives as long as they carried disclaimers. For example, chatbots could write articles promoting false health claims or smears against public officials, provided the content was labelled as "untrue." While disclaimers might give thin legal cover, they still add to the proliferation of misleading information. Indeed, misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even when requested. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation has the potential to normalise damaging stereotypes. Researchers warn that such a practice turns platforms from passive hosts of offensive speech into active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training data, policy decisions, and system engineering. This fact demands a greater level of accountability. Although companies can update guidelines following public criticism, the existence of such allowances in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear rules for content risks such as child roleplay or hateful narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory accounts, not merely tag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
The contentious guidelines are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require prompt resolution. Corporate self-regulation is only one part of the way forward; other elements include multi-stakeholder participation, stronger global frameworks, and ethical standards. In the end, trust in AI systems will rest not on corporate interests but on their ability to preserve the truth, protect the vulnerable, and reflect universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/