#FactCheck: Old clip of Greenland tsunami passed off as tsunami in Japan
Executive Summary:
A viral video depicting a powerful tsunami wave destroying coastal infrastructure is being falsely associated with the recent tsunami warning in Japan following an earthquake in Russia. Fact-checking through reverse image search reveals that the footage is from a 2017 tsunami in Greenland, triggered by a massive landslide in the Karrat Fjord.

Claim:
A viral video circulating on social media shows a massive tsunami wave crashing into the coastline, destroying boats and surrounding infrastructure. The footage is being falsely linked to the recent tsunami warning issued in Japan following an earthquake in Russia. However, initial verification suggests that the video is unrelated to the current event and may be from a previous incident.

Fact Check:
The video, which shows water forcefully inundating a coastal area, is neither recent nor related to the current tsunami event in Japan. A reverse image search conducted using keyframes extracted from the viral footage confirms that it is being misrepresented: the video actually originates from a tsunami that struck Greenland in 2017. The original footage is available on YouTube and has no connection to the recent earthquake-induced tsunami warning in Japan.
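The keyframe matching behind a reverse image search can be illustrated with a perceptual "difference hash" (dHash): near-identical frames produce near-identical hashes even after re-encoding, while unrelated footage does not. The toy 9x8 pixel grids below are purely illustrative; real search engines use far richer image features.

```python
# A minimal sketch of keyframe matching via difference hashing (dHash).
# All frames here are invented toy data for illustration only.

def dhash(pixels: list[list[int]]) -> int:
    """Hash a 9x8 grayscale grid: one bit per left/right brightness comparison."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# Toy "frames": an original keyframe, a lightly re-encoded copy, and an unrelated scene.
frame_2017 = [[min(255, 10 * (r + c)) for c in range(9)] for r in range(8)]
viral_frame = [row[:] for row in frame_2017]
viral_frame[0][0] = 50            # compression artifact flips one comparison
unrelated = [[255 - 10 * c for c in range(9)] for r in range(8)]

print(hamming(dhash(frame_2017), dhash(viral_frame)))  # small: same footage
print(hamming(dhash(frame_2017), dhash(unrelated)))    # large: different scene
```

The same principle lets fact-checkers match a viral clip's keyframes against archived footage even when the video has been cropped or recompressed.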

The American Geophysical Union (AGU) confirmed in a blog post on June 19, 2017, that the deadly Greenland tsunami of June 17, 2017, was caused by a massive landslide. The landslide dumped millions of cubic meters of rock into the Karrat Fjord, creating a wave more than 90 meters high that devastated the village of Nuugaatsiaq. The Guardian also published a news report on the event.

Conclusion:
Videos purporting to depict the effects of a recent tsunami in Japan are deceptive and repurposed from unrelated incidents. Users of social media are urged to confirm the legitimacy of such content before sharing it, particularly during natural disasters when false information can exacerbate public anxiety and confusion.
- Claim: A viral video shows the recent tsunami in Japan following an earthquake in Russia
- Claimed On: Social Media
- Fact Check: False and Misleading

AI and other technologies are advancing rapidly, accelerating the spread of information and, with it, misinformation. LLMs have their advantages, but they also come with drawbacks, such as confidently stated yet inaccurate responses caused by limitations in their training data. Evidence-driven retrieval systems aim to address this issue by incorporating verified factual information during response generation, reducing hallucination and producing more accurate responses.
What is Retrieval-Augmented Response Generation?
Evidence-driven Retrieval-Augmented Generation (RAG) is an AI framework that improves the accuracy and reliability of large language models (LLMs) by grounding them in external knowledge bases. RAG systems combine the generative power of LLMs with a dynamic information retrieval mechanism. Whereas standard AI models rely solely on pre-trained knowledge and pattern recognition to generate text, RAG pulls in credible, up-to-date information from external sources during response generation, combining large-scale data with reliable sources to combat misinformation. It follows the pattern of:
- Query Identification: When misinformation is detected or a query is raised.
- Evidence Retrieval: The AI searches databases for relevant, credible evidence to support or refute the claim.
- Response Generation: Using the evidence, the system generates a fact-based response that addresses the claim.
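The three steps above can be sketched as a minimal pipeline. The in-memory "knowledge base", the keyword-overlap retriever, and the templated response below are all illustrative stand-ins; a production RAG system would use a vector store for retrieval and an LLM for generation.

```python
# A toy sketch of the Query -> Evidence Retrieval -> Response Generation
# pattern. Corpus contents and function names are invented for illustration.

KNOWLEDGE_BASE = [
    "The 2017 Greenland tsunami was caused by a landslide in the Karrat Fjord.",
    "RAG systems ground LLM output in retrieved external evidence.",
    "Platforms must publish compliance reports detailing actions against misinformation.",
]

def retrieve_evidence(query: str, corpus: list[str], top_k: int = 1) -> list[str]:
    """Evidence Retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate_response(query: str, corpus: list[str]) -> str:
    """Response Generation: ground the answer in the retrieved evidence."""
    evidence = retrieve_evidence(query, corpus)[0]
    return f"Claim/query: {query}\nSupporting evidence: {evidence}"

print(generate_response("What caused the Greenland tsunami?", KNOWLEDGE_BASE))
```

The key design point is that the generation step never answers from the model's parameters alone; every response carries the retrieved evidence, which is what makes the output verifiable.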
How is Evidence-Driven RAG the key to Fighting Misinformation?
- RAG systems can integrate the latest data, providing information on recent scientific discoveries.
- The retrieval mechanism allows RAG systems to pull specific, relevant information for each query, tailoring the response to a particular user’s needs.
- RAG systems can provide sources for their information, enhancing accountability and allowing users to verify claims.
- For queries requiring specific or specialised knowledge, RAG systems can excel where traditional models might struggle.
- By accessing a diverse range of up-to-date sources, RAG systems may offer more balanced viewpoints, unlike traditional LLMs.
Policy Implications and the Role of Regulation
With its potential to enhance content accuracy, RAG also intersects with important regulatory considerations. India has one of the largest internet user bases globally, where the challenges of managing misinformation are particularly pronounced.
- Indian regulators, such as MeitY, play a key role in guiding technology regulation. Similar to the EU's Digital Services Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate platforms to publish compliance reports detailing actions against misinformation. Integrating RAG systems can help ensure accurate, legally accountable content moderation.
- Collaboration among companies, policymakers, and academia is crucial for RAG adaptation, addressing local languages and cultural nuances while safeguarding free expression.
- Ethical considerations are vital to prevent social unrest, requiring transparency in RAG operations, including evidence retrieval and content classification. This balance can create a safer online environment while curbing misinformation.
Challenges and Limitations of RAG
While RAG holds significant promise, it has its challenges and limitations.
- Ensuring that RAG systems retrieve evidence only from trusted and credible sources is a key challenge.
- For RAG to be effective, users must trust the system. Sceptics of content moderation may show resistance to accepting the system’s responses.
- Generating a response too quickly may compromise the quality of the evidence, while taking too long can allow misinformation to spread unchecked.
Conclusion
Evidence-driven retrieval systems, such as Retrieval-Augmented Generation, represent a pivotal advancement in the ongoing battle against misinformation. By integrating real-time data and credible sources into AI-generated responses, RAG enhances the reliability and transparency of online content moderation. It addresses the limitations of traditional AI models and aligns with regulatory frameworks aimed at maintaining digital accountability, as seen in India and globally. However, the successful deployment of RAG requires overcoming challenges related to source credibility, user trust, and response efficiency. Collaboration between technology providers, policymakers, and academic experts can help navigate these challenges and create a safer, more accurate online environment. As digital landscapes evolve, RAG systems offer a promising path forward, ensuring that technological progress is matched by a commitment to truth and informed discourse.
References
- https://experts.illinois.edu/en/publications/evidence-driven-retrieval-augmented-response-generation-for-onlin
- https://research.ibm.com/blog/retrieval-augmented-generation-RAG
- https://medium.com/@mpuig/rag-systems-vs-traditional-language-models-a-new-era-of-ai-powered-information-retrieval-887ec31c15a0
- https://www.researchgate.net/publication/383701402_Web_Retrieval_Agents_for_Evidence-Based_Misinformation_Detection
Introduction
The fast-paced development of technology and the wider use of social media platforms have led to the rapid dissemination of misinformation, which spreads through these platforms with high diffusion, fast propagation speed, wide influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making it harder to control its spread within vast, interconnected networks. Algorithms judge content primarily on one metric: user engagement. Engagement is the signal algorithms use to decide what to serve you, which is why platforms and search engines surface the items you are most likely to enjoy. This process was originally designed to cut through clutter and deliver the best information; however, because of the viral nature of information and user interactions, it sometimes results in the unwitting, widespread amplification of misinformation.
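The dynamic described above can be sketched with a toy ranking function. The weights and posts below are invented purely for illustration; real feed-ranking systems use far more signals, but the core failure mode is the same: a feed scored purely on interactions can place a false-but-outrageous post above a sober correction.

```python
# A toy sketch of engagement-driven ranking. Weights and example posts
# are illustrative assumptions, not any platform's actual formula.

def engagement_score(post: dict) -> float:
    """Weight comments and shares heavily -- the signals virality rewards."""
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

feed = [
    {"title": "Calm fact-check of viral tsunami clip",
     "likes": 120, "comments": 10, "shares": 8},
    {"title": "SHOCKING wave destroys city (actually an old clip)",
     "likes": 300, "comments": 90, "shares": 150},
]

ranked = sorted(feed, key=engagement_score, reverse=True)
print([p["title"] for p in ranked])  # the emotive post ranks first
```

Because no term in the score measures accuracy, the misleading post wins the ranking; this is the feedback loop the following section analyses.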
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally charged content tends to trigger strong reactions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, leading to the promotion of emotionally charged misinformation. Additionally, the algorithms prioritize content with viral potential, which can let false or misleading content spread faster than corrections or factual content.
Additionally, popular content is amplified by platforms, which spreads it faster by presenting it to more users. Fact-checking efforts are hampered because, by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish between real people and organized networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with the user's previous behaviour and preferences. Sometimes this process leads to "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed into a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences, making platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, such as by inspecting messages or URLs for false information, can be computationally challenging and inefficient. The extensive amount of content shared daily means that misinformation can propagate far quicker than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important to tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders all work together to create a challenging environment where misinformation thrives. Hence, highlighting the importance of countering misinformation through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can be used to establish mechanisms for users to appeal against algorithmic decisions and to ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps that can be taken to regulate misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. 'Intermediaries' are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defenses can empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems where algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)

Executive Summary:
A viral video showing flames and thick smoke from large fuel tanks has been shared widely on social media. Many claimed it showed a recent Russian missile attack on a fuel depot in Ukraine. However, our research found that the video is not related to the Russia-Ukraine conflict. It actually shows a fire that happened at Al Hamriyah Port in Sharjah, United Arab Emirates, on May 31, 2025. The confusion was likely caused by a lack of context and misleading captions.

Claim:
The circulating claim suggests that Russia deliberately bombed Ukraine's fuel reserves and that the viral video shows evidence of the bombing. The posts claim the fuel depot was destroyed purposefully during military operations, implying an escalation in violence. This narrative is intended to provoke an emotional reaction and reinforce fears related to the war.

Fact Check:
After running a reverse image search on keyframes from the viral video, we found that the footage is actually from Al Hamriyah Port, UAE, not from the Russia-Ukraine conflict. Further research showed the same visuals were also published by regional news outlets in the UAE, including Gulf News and Khaleej Times, which reported on a massive fire at Al Hamriyah Port on 31 May 2025.
As per the news report, a fire broke out at a fuel storage facility in Al Hamriyah Port, UAE. Fortunately, no casualties were reported. Fire Management Services responded promptly and successfully brought the situation under control.


Conclusion:
The belief that the viral video is evidence of a Russian strike in Ukraine is misleading and incorrect. The video actually shows a fire at a commercial port in the UAE. Sharing misleading footage like this distorts reality and incites fear based on falsehoods. It is a reminder that not all viral media is what it appears to be, and every viewer should take the time to check and verify a video's source and context before accepting or reposting it. In this instance, the original claim is untrue and misleading.
- Claim: Fresh attack in Ukraine! Russian military strikes again!
- Claimed On: Social Media
- Fact Check: False and Misleading