#FactCheck - Old Japanese Earthquake Footage Falsely Linked to Tibet
Executive Summary:
A viral post on X (formerly Twitter) gained significant attention by creating a false narrative that it showed recent damage caused by the earthquake in Tibet. Our findings confirm that the clip was not filmed in Tibet; it comes from an earthquake that struck Japan in the past. This report traces the origin of the claim and presents the analysis and verified evidence that clarify the misinformation surrounding the video.

Claim:
The viral video shows collapsed infrastructure and significant destruction, with captions claiming it is evidence of a recent earthquake in Tibet. Similar claims can be found here and here.

Fact Check:
The widely circulated clip, initially claimed to depict the aftermath of the most recent earthquake in Tibet, has been rigorously analysed and proven to be misattributed. A reverse image search using keyframes from the video revealed that the footage originated from an earlier earthquake in Japan. According to an article published by a Japanese news website, the incident occurred in February 2024. News agencies authenticated the video at the time, as it accurately depicted the destruction reported during that event.
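The keyframe-matching step can be illustrated with a toy difference hash (dHash), the kind of perceptual fingerprint reverse-image-search tools use to match near-identical frames. This is a minimal pure-Python sketch on synthetic grayscale frames, not the actual tooling used in the investigation:

```python
# Illustrative difference-hash (dHash) for comparing video keyframes.
# Frames are modelled as 2D lists of grayscale pixel values; a real
# workflow would first decode frames with a video library.

def dhash(frame, hash_size=8):
    """Compute a difference hash: compare each pixel to its right neighbour."""
    rows, cols = len(frame), len(frame[0])
    # Downscale by simple sampling to (hash_size + 1) columns x hash_size rows.
    scaled = [
        [frame[r * rows // hash_size][c * cols // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = 0
    for row in scaled:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(h1 ^ h2).count("1")
```

Two copies of the same frame hash to a Hamming distance of 0, so a frame from the "Tibet" clip matching a frame from the 2024 Japan footage would be flagged as the same source.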

Moreover, the same video had already been uploaded to a YouTube channel, confirming that it is not recent. The architecture, the signboards written in Japanese script, and the vehicles visible in the footage further establish that it was filmed in Japan, not Tibet. The clip shows past news coverage from Japan, demonstrating that it was shared out of context to spread false information.

The video was uploaded on February 2nd, 2024.
Snap from viral video

Snap from YouTube video

Conclusion:
The viral video claiming to show the recent earthquake in Tibet is therefore misattributed: it is old footage of a previous earthquake in Japan. This underscores the need to verify information before sharing it, so that accurate information circulates and false claims are contained.
- Claim: A viral video claims to show recent earthquake destruction in Tibet.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. That a handful of accounts can impact and influence so many is attributed to a number of factors: large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread as the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary drivers of widespread misinformation across many topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected 10 months of data, amounting to 2,397,388 tweets flagged as having low credibility, sent by 448,103 users on Twitter (now X), and reviewed who was sending them. They found that it does not take many influencers to sway the beliefs and opinions of large numbers, an effect they attribute to superspreaders: approximately a third of the low-credibility tweets had been posted from just 10 accounts, and just 1,000 accounts were responsible for approximately 70% of such tweets.[2]
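The concentration the study reports can be illustrated with a toy computation. The dataset below is invented purely for illustration (10 hypothetical heavy posters among 20,000 occasional ones), not the study's data:

```python
def top_share(tweets_per_account, k):
    """Fraction of all tweets posted by the k most active accounts."""
    counts = sorted(tweets_per_account.values(), reverse=True)
    return sum(counts[:k]) / sum(counts)

# Hypothetical skewed user base: a few accounts post heavily, most post once.
data = {f"super{i}": 1000 for i in range(10)}
data.update({f"user{i}": 1 for i in range(20000)})

print(round(top_share(data, 10), 2))  # → 0.33: top 10 accounts produce a third
```

Even in this toy example, 0.05% of the accounts produce a third of the volume, mirroring the kind of skew the Indiana University team measured.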
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be exploited very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (now X). Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "Disinformation Dozen," a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large amount of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly engage in spreading misinformation by forwarding content that is not always shared by the original source but is instead propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not have to be the ones creating or deliberately popularising the misinformation, but their broad reach exposes many more people to it. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, erode investor confidence, and even trigger regulatory responses to address the spread of false information that harms market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investment advice and directing followers to particular channels, leading investors into risky financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to bolster their credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use trending topics or hashtags to inject misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can also contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation within closed online groups. Reported incidents have found bots to be among the main sharers of content from low-credibility sources.
Bots are frequently employed as part of coordinated misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace misinformation back to its source. Understanding how bots work and how they influence information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
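One simple heuristic researchers use to spot scheduled bots is the regularity of their posting intervals: accounts that post on a fixed timer show almost no variation between posts, while human activity is bursty. The sketch below is an illustrative toy on invented timestamps, not a production bot detector:

```python
from statistics import mean, pstdev

def interval_regularity(timestamps):
    """Coefficient of variation of inter-post intervals (seconds).
    Near-zero values suggest machine-like, fixed-schedule posting;
    human posting tends to be bursty, giving a high value."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

bot_like = [0, 60, 120, 180, 240, 300]     # posts exactly every 60 seconds
human_like = [0, 40, 45, 300, 2000, 2100]  # irregular bursts

print(interval_regularity(bot_like))  # → 0.0 (perfectly regular)
```

Real detection systems combine many such signals (account age, content similarity, network structure); interval regularity alone is easy for sophisticated bots to evade.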
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check claims in order to debunk misinformation. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative for weeding out suspicious accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated or fake. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable users to critically analyse information, verify sources, report suspect content, and implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
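The repeat-spreader flagging recommended above can be sketched as a simple counting pass: flag accounts that repeatedly link to domains on a source-credibility list. The domain list and post data below are hypothetical, and a real platform system would pair any such signal with human review before taking action:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical low-credibility domain list; real systems use maintained
# source-credibility databases.
LOW_CREDIBILITY = {"fakenews.example", "hoaxdaily.example"}

def flag_repeat_spreaders(posts, min_hits=3):
    """Return accounts that shared low-credibility links at least min_hits times.
    `posts` is an iterable of (account, url) pairs."""
    hits = Counter()
    for account, url in posts:
        domain = urlparse(url).netloc.lower()
        if domain in LOW_CREDIBILITY:
            hits[account] += 1
    return {account for account, n in hits.items() if n >= min_hits}
```

Thresholding on repeated hits, rather than flagging single shares, reflects the distinction drawn earlier between deliberate superspreaders and users who occasionally forward misinformation unknowingly.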
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html

Executive Summary:
An old video from 2023 showing the arrest of a Bangladeshi migrant for the murder of a Polish woman has been circulating widely on social media with the claim that he is an Indian national. This viral video has been fact-checked and debunked.
Claim:
The video circulating on social media alleges that an Indian migrant was arrested in Greece for assaulting a young Christian girl, and it has been shared with narratives maligning Indian migrants. The post was first shared on Facebook by an account known as “Voices of hope” and is referenced in this report.

Facts:
The CyberPeace Research Team used Google reverse image search to trace the original source of the claim. The search led to the original news report, published by Greek City Times in June 2023.


The person arrested in the video clip is a Bangladeshi migrant, not of Indian origin. The CyberPeace Research Team assessed available police reports and other verifiable sources to confirm that the arrested person is Bangladeshi.
The video dates to 2023 and relates to a case involving the murder of a Polish woman; it has nothing to do with Indian migrants.
Neither government authorities nor authoritative news outlets have identified any Indian citizen in connection with the case.

Conclusion:
The viral video falsely implicating an Indian migrant in a Polish woman’s murder is misleading. The accused is a Bangladeshi migrant, and the incident has been misrepresented to spread misinformation. This highlights the importance of verifying such claims to prevent the spread of xenophobia and false narratives.
- Claim: Video shows an Indian immigrant being arrested in Greece for allegedly assaulting a young Christian girl.
- Claimed On: X (Formerly Known As Twitter) and Facebook.
- Fact Check: Misleading.

Overview:
A recent addition to the cybercrime landscape is SharpRhino, a RAT (Remote Access Trojan) actively used by the Hunters International ransomware group. SharpRhino is highly developed and targets IT specialists, penetrating networks largely because victims believe the tool is legitimate. Disguised as a genuine software installer, SharpRhino began operating in mid-June 2024, but Quorum Cyber discovered it only in early August 2024 while investigating a ransomware incident.
About Hunters International Group:
Hunters International has emerged as one of the most notorious ransomware groups, having compromised over 134 targets worldwide in the first seven months of 2024. The group is believed to be a rebranding of the previously active Hive ransomware group, with considerable similarities in the code. Its particular focus on IT employees demonstrates how tactically it moves when gaining access to organisations' networks.
Modus Operandi:
1. Typosquatting Technique
SharpRhino is mainly distributed through a domain that imitates the genuine Angry IP Scanner, a popular network discovery tool. The malware installer, labelled ipscan-3.9.1-setup.exe, is a 32-bit Nullsoft installer that embeds a password-protected 7z archive.
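Typosquatted domains like the ones used here typically sit within one or two character edits of the legitimate name, which is one way defenders screen for them. A minimal sketch using Levenshtein edit distance (assuming angryip.org as the legitimate Angry IP Scanner domain):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_typosquatted(domain, legit, max_dist=2):
    """A domain within a couple of edits of a well-known one is suspicious."""
    return 0 < edit_distance(domain, legit) <= max_dist

print(looks_typosquatted("angryipo.org", "angryip.org"))  # → True (1 edit away)
```

Edit distance alone produces false positives on short names, so real tooling combines it with domain age, registration data, and allowlists.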
2. Installation Process
- Execution of Installer: When the victim downloads and executes the installer, it modifies the Windows registry to attain persistence. It does this by generating a registry entry that launches a harmful file, Microsoft.AnyKey.exe, a fake derived from a legitimate Microsoft Visual Studio tool.
- Creation of Batch File: The installer also drops a batch file, LogUpdate.bat, which runs PowerShell scripts on the device. These scripts compile C# code in memory, keeping the malware's operation covert.
- Directory Creation: The installer establishes two directories that support C2 communication: C:\ProgramData\Microsoft\WindowsUpdater24 and C:\ProgramData\Microsoft\LogUpdateWindows.
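The registry-persistence step above can be audited from the defender's side by inspecting autorun entries. The sketch below models entries as (name, command) pairs and flags the patterns this report describes; on a real Windows host the entries would be read from the registry Run keys, and the sample paths are illustrative:

```python
import re

# Patterns drawn from the behaviour described in this report.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\\ProgramData\\", re.IGNORECASE),       # autoruns launched from ProgramData
    re.compile(r"Microsoft\.\s*AnyKey", re.IGNORECASE),  # the fake Visual Studio tool name
]

def audit_autoruns(entries):
    """Return autorun (name, command) entries matching a suspicious pattern."""
    return [
        (name, cmd) for name, cmd in entries
        if any(p.search(cmd) for p in SUSPICIOUS_PATTERNS)
    ]
```

Legitimate software also occasionally runs from ProgramData, so hits from a check like this are leads for investigation rather than proof of compromise.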
3. Execution and Functionality:
- Command Execution: The malware can execute PowerShell commands on the infected system; these actions may involve privilege escalation and other extended activity such as lateral movement.
- C2 Communication: SharpRhino interacts with command-and-control servers hosted on domains behind platforms such as Cloudflare. This communication is necessary for receiving commands from the attackers and for returning any data of interest to them.
- Data Exfiltration and Ransomware Deployment: Once SharpRhino has gained control, it can exfiltrate data and then encrypt files, appending a .locked extension. The process typically concludes with a ransom note instructing victims on how to purchase the decryption key.
4. Propagation Techniques:
Additionally, SharpRhino can spread by self-copying: the malware may copy itself to other computers using the victim's network account, masquerading as content from trustworthy senders such as emails or network-shared files. The infected machine may then propagate the malware to further systems, for example through files shared with colleagues across the company.
Indicators of Compromise (IOCs):
- LogUpdate.bat
- Wiaphoh7um.t
- ipscan-3.9.1-setup.exe
- kautix2aeX.t
- WindowsUpdate.bat
Command and Control Servers:
- cdn-server-1.xiren77418.workers.dev
- cdn-server-2.wesoc40288.workers.dev
- Angryipo.org
- Angryipsca.com
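The filename IOCs above lend themselves to a simple filesystem sweep. The sketch below walks a directory tree and reports matches; a real hunt would also check file hashes and search DNS logs for the C2 domains listed:

```python
import os

# Filename IOCs from this report (lowercased for case-insensitive matching).
IOC_FILENAMES = {
    "logupdate.bat",
    "wiaphoh7um.t",
    "ipscan-3.9.1-setup.exe",
    "kautix2aex.t",
    "windowsupdate.bat",
}

def sweep(root):
    """Walk a directory tree and return paths whose filename matches an IOC."""
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            if filename.lower() in IOC_FILENAMES:
                matches.append(os.path.join(dirpath, filename))
    return matches
```

Note that generic names such as WindowsUpdate.bat can occur legitimately, so matches should be triaged by hash and location rather than deleted automatically.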
Precautionary measures to be taken:
To mitigate the risks posed by SharpRhino and similar malware, organizations should implement the following measures:
- Implement Security Best Practices: Download software only from official sites, and beware of lookalike sites that confuse users by changing a few letters in the domain name.
- Enhance Detection Capabilities: Deploy detection technology capable of identifying the IOCs linked to SharpRhino.
- Educate Employees: Train IT staff and other employees on phishing scams and on verifying the origin of any application before installing it.
- Regular Backups: Regularly back up important files from systems and networks to minimise the impact of ransomware attacks on the business.
Conclusion:
SharpRhino represents an evolution in the strategies used by ransomware operators such as Hunters International. By targeting IT professionals and employing complex delivery and execution schemes, it poses a serious threat to corporate networks. Organisations must understand its inner workings in order to fortify their security measures against this relatively new threat. Through the enforcement of proper security controls and continuous cybersecurity awareness, firms can mitigate the risks associated with SharpRhino and related malware. Be safe, be knowledgeable, and above all, be secure.
Reference:
https://cybersecuritynews.com/sharprhino-ransomware-alert/
https://cybersecsentinel.com/sharprhino-explained-key-facts-and-how-to-protect-your-data/
https://www.dataprivacyandsecurityinsider.com/2024/08/sharprhino-malware-targeting-it-professionals/