#FactCheck - Viral Video Misleadingly Tied to Recent Taiwan Earthquake
Executive Summary:
Following the recent earthquake in Taiwan, a video went viral on social media with the claim that it was filmed during that event. Fact-checking, however, reveals it to be an old video: it dates from September 2022, when Taiwan experienced another earthquake of magnitude 7.2. Reverse image search and comparison with older footage establish that the viral video shows the 2022 earthquake, not the recent 2024 event. Several news outlets covered the 2022 incident, providing further confirmation of the video's origin.

Claims:
News about the recent earthquake in Taiwan and Japan is circulating on social media. One post on X states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.

Similar Posts:


Fact Check:
We began our investigation by watching the video carefully and splitting it into individual frames. We then ran a reverse image search on those frames, which led us to an X (formerly Twitter) post in which a user had shared the same video on September 18, 2022. Notably, that post carried the caption:
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”
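The frame-matching step described above can be sketched with a simple perceptual "average hash": shrink each frame to an 8×8 grayscale grid, set each bit by comparing a cell to the mean, and compare two frames by the Hamming distance between their hashes. The sketch below is stdlib-only and represents frames as 2D lists of grayscale values; a real workflow would decode video frames with a library such as OpenCV, and the sample frames here are synthetic stand-ins.

```python
def average_hash(pixels, size=8):
    """Downsample a grayscale frame (2D list) to size x size cells and
    build a 64-bit hash: bit i is 1 where cell i is above the mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same frame."""
    return bin(h1 ^ h2).count("1")

# Synthetic stand-ins: a frame, a brightness-shifted copy of the same scene,
# and an unrelated frame.
frame_a = [[(i * j) % 256 for j in range(64)] for i in range(64)]
frame_b = [[min(255, p + 10) for p in row] for row in frame_a]  # same scene, brighter
frame_c = [[(i + j) % 256 for j in range(64)] for i in range(64)]  # different scene
```

Because the hash only keeps each cell's relation to the mean, re-encoded or brightness-shifted copies of the same footage still land within a few bits of each other, which is what makes this useful for matching a viral clip against archived uploads.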

The same viral video was carried by several news outlets in September 2022.

The viral video was also aired by the NDTV news channel on September 18, 2022.

Conclusion:
To conclude, the viral video claimed to depict the 2024 Taiwan earthquake but actually dates from September 2022. Careful inspection of the older footage against the new claims makes clear that the video does not show the recent earthquake as stated. The viral post is therefore misleading. It is important to verify information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake & Misleading; the video actually shows an incident from 2022.
Related Blogs
Introduction
The ongoing armed conflict between Israel and Hamas is in the news all across the world. The latest escalation was triggered by unprecedented attacks against Israel by Hamas militants on October 7, which killed thousands of people; Israel has since launched a massive counter-offensive against the militant group. Amid the war, false information and propaganda have spread across social media platforms: tech researchers have detected a network of 67 accounts that posted false content about the war and received millions of views. The European Commission has sent a letter to Elon Musk directing X to remove illegal content and disinformation or face penalties, and has formally requested information from several social media giants on their handling of content related to the Israel-Hamas war. This widespread disinformation inflames the conflict, affects public opinion worldwide, and erodes public trust. Bad actors weaponise information in this way, fuelling online hate, terrorism and extremism, and deepening political polarisation with hateful content on social media. The online information environment surrounding the conflict is being flooded with misinformation, disinformation and fake narratives and videos, which together amplify the war's effects.
Response of social media platforms
The proliferation of online misinformation and violent content surrounding the war raises questions for social media companies about content moderation and policy. Notably, Instagram, Facebook and X (formerly Twitter) all offer features that let users decide what content they want to view, and allow potentially sensitive content to be limited in search results.
Experts say it is of paramount importance to establish control in this area and to define what is and is not permissible online. This requires expertise to assess each situation and, most importantly, robust content moderation policies.
During wartime, people who are aggrieved or provoked are often targeted by internet disinformation that blends ideological beliefs with conspiracy theories and hatred. This is not a new phenomenon: disinformation networks routinely emerge and become active during wars and emergencies, spreading propaganda and influencing society at large through misrepresented facts and planted stories. Social media has made it easier than ever to post user-generated content without proper moderation. Fighting disinformation and misinformation is therefore a shared responsibility: tech companies, users and governments must collectively define and follow mechanisms to counter it.
Digital Services Act (DSA)
The newly enacted EU law, the Digital Services Act (DSA), requires large online platforms to act against illegal content, limits targeted advertising, and imposes obligations to counter misinformation and disinformation while ensuring greater transparency over what users see on platforms. Rules under the DSA cover everything from content moderation and user privacy to transparency in operations. The DSA is landmark EU legislation for moderating online platforms: large tech companies are now subject to content-related regulation under it and must prevent the spread of misinformation and disinformation, making for a safer online environment overall.
Indian Scenario
The Indian government's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, as amended in 2023, provide for a "fact check unit" to identify false or misleading online content. The Digital Personal Data Protection Act, 2023 has also been enacted to protect personal data. The proposed Digital India Bill, expected to be tabled in Parliament, would replace the Information Technology Act, 2000. It can be seen as future-ready legislation to strengthen India's cybersecurity posture, dealing comprehensively with privacy, data protection and the fight against growing cybercrime in an evolving digital landscape. Other entities, including civil society organisations, are also actively engaged in fighting misinformation and promoting safe and responsible use of the Internet.
Conclusion:
The flood of disinformation and misinformation amid the Israel-Hamas war shows how user-generated content on social media can create an illusion of reality. Misleading posts are widespread, and the misuse of advanced AI technologies makes it even easier for bad actors to create synthetic media. At the same time, social media has connected us like never before: with billions of active users around the globe, it offers real conveniences and opportunities to individuals and businesses, and only certain aspects of its use demand our collective attention. Social media platforms and regulatory authorities must remain vigilant and active, clearly defining and improving content regulation policies so that bad actors cannot misuse these platforms for their own ends. Users, for their part, have a responsibility to practise and promote responsible use of social media. With the increasing penetration of social media and the internet, misinformation remains a global issue that must be addressed through strict policies and best practices. Users are encouraged to flag and report misleading content and to verify claims with authentic sources, thereby helping to create a safer Internet environment for everyone.
References:
- https://abcnews.go.com/US/experts-fear-hate-extremism-social-media-israel-hamas-war/story?id=104221215
- https://edition.cnn.com/2023/10/14/tech/social-media-misinformation-israel-hamas/index.html
- https://www.nytimes.com/2023/10/13/business/israel-hamas-misinformation-social-media-x.html
- https://www.africanews.com/2023/10/24/fact-check-misinformation-about-the-israel-hamas-war-is-flooding-social-media-here-are-the//
- https://www.theverge.com/23845672/eu-digital-services-act-explained

Introduction
The ongoing debate on whether AI scaling has hit a wall has been reignited by the underwhelming response to OpenAI's ChatGPT v5. AI scaling laws, which hold that machine learning models perform better with more training data, model parameters and computational resources, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers now suggest that further improvements to LLMs will require orders-of-magnitude increases in computational cost, which the returns may not justify. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a tech issue but a profound innovation challenge for countries like India that are charting their own AI course.
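The scaling laws referenced above are commonly written in a Chinchilla-style form, L(N, D) = E + A·N^(−α) + B·D^(−β), where N is the parameter count, D the number of training tokens, and E an irreducible loss floor. The sketch below uses constants close to the Hoffmann et al. (2022) fit purely to illustrate diminishing returns; treat the numbers as illustrative, not a definitive fit.

```python
def scaling_loss(params, tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted loss under a Chinchilla-style scaling law: an irreducible
    term E plus power-law penalties for finite parameters and finite data."""
    return E + A / params ** alpha + B / tokens ** beta

# Diminishing returns: each 10x jump in parameters buys a smaller loss drop.
losses = [scaling_loss(n, 1e12) for n in (1e9, 1e10, 1e11, 1e12)]
gains = [a - b for a, b in zip(losses, losses[1:])]
```

Each successive entry in `gains` is smaller than the last, which is the quantitative core of the "wall" argument: the compute bill grows by 10x per step while the loss improvement shrinks.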
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to refine new approaches to training these models, for example, by diversifying data types and redefining training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation.
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all major developers, such as OpenAI, xAI (Grok), and DeepSeek, suffer from unreliability, hallucinations, and bias, since they are primarily built by scaling up datasets and parameters, which has inherent limitations. India should invest in capabilities to solve these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programmes and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is emerging, and India can help shape its future. The country's AI Mission and startup ecosystem are well-positioned to lead this shift, if implemented effectively, by focusing on localised needs, efficient technologies, and inclusive growth. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html
Executive Summary
This report analyses a recently observed social engineering attack that abused Microsoft Teams and AnyDesk to deliver DarkGate, a Malware-as-a-Service (MaaS) tool. Attackers contacted victims through Microsoft Teams and tricked them into installing AnyDesk, gaining unauthorized remote access to deploy DarkGate, which offers features such as credential theft, keylogging, and fileless persistence. The malware was delivered via obfuscated AutoIt scripts, showing how threat actors are changing their modus operandi. The case underscores the need for preventive security measures, including endpoint protection, staff awareness, restricted use of remote-access tools, and network segmentation, to manage the heightened risks that contemporary cyber threats present.
Introduction
Hackers routinely turn reputable technologies and applications into delivery channels for their campaigns. The recent use of Microsoft Teams and AnyDesk to launch DarkGate malware is a perfect example of how attackers combine social engineering with technical weaknesses to penetrate organizational defenses. This report details the technical aspects of the attack, its consequences, and preventive measures to counter the threat.
Technical Findings
1. Attack Initiation: Exploiting Microsoft Teams
The attackers leveraged Microsoft Teams as a trusted communication platform to deceive victims, exploiting its legitimacy and widespread adoption. Key technical details include:
- Spoofed Caller Identity: The attackers used impersonation techniques to masquerade as representatives of trusted external suppliers.
- Session Hijacking Risks: Exploiting Microsoft Teams session vulnerabilities, attackers aimed to escalate their privileges and deploy malicious payloads.
- Bypassing Email Filters: The initial email bombardment was designed to overwhelm spam filters and ensure that malicious communication reached the victim’s inbox.
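The email-bombardment step above can be caught with a simple rate heuristic: flag a mailbox that receives an abnormal burst of messages within a short window. A minimal sliding-window sketch (the `limit` and `window` values are illustrative, not thresholds from the actual campaign):

```python
from collections import deque

class BurstDetector:
    """Flags when more than `limit` inbound messages land within `window` seconds."""

    def __init__(self, limit: int = 50, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.times = deque()  # timestamps of recent messages, oldest first

    def record(self, timestamp: float) -> bool:
        """Record one inbound message; return True if the mailbox is
        currently being bombarded (count in window exceeds the limit)."""
        self.times.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.limit
```

In practice such a detector would feed an alert pipeline rather than block mail outright, since legitimate bursts (mailing-list digests, calendar storms) also trip naive counters.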
2. Remote Access Exploitation: AnyDesk
After convincing victims to install AnyDesk, the attackers exploited the software’s functionality to achieve unauthorized remote access. Technical observations include:
- Command and Control (C2) Integration: Once installed, AnyDesk was configured to establish persistent communication with the attacker’s C2 servers, enabling remote control.
- Privilege Escalation: Attackers exploited misconfigurations in AnyDesk to gain administrative privileges, allowing them to disable antivirus software and deploy payloads.
- Data Exfiltration Potential: With full remote access, attackers could silently exfiltrate data or install additional malware without detection.
3. Malware Deployment: DarkGate Delivery via AutoIt Script
The deployment of DarkGate malware utilized AutoIt scripting, a programming language commonly used for automating Windows-based tasks. Technical details include:
- Payload Obfuscation: The AutoIt script was heavily obfuscated to evade signature-based antivirus detection.
- Process Injection: The script employed process injection techniques to embed DarkGate into legitimate processes, such as explorer.exe or svchost.exe, to avoid detection.
- Dynamic Command Loading: The malware dynamically fetched additional commands from its C2 server, allowing real-time adaptation to the victim’s environment.
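A common heuristic for spotting heavily obfuscated loaders like the AutoIt script described above is Shannon entropy: packed or encoded payload strings score much higher per character than ordinary script text. A minimal sketch, with hypothetical samples rather than actual DarkGate artifacts:

```python
import base64
import math
import random
from collections import Counter

def shannon_entropy(data: str) -> float:
    """Bits per character: plain script text typically scores ~4-5,
    while base64/packed blobs approach 6 (log2 of a 64-char alphabet)."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_obfuscated(script: str, threshold: float = 5.0) -> bool:
    # Crude whole-script flag; real scanners score individual strings
    # and combine entropy with other signals (decode loops, string length).
    return shannon_entropy(script) > threshold

# Hypothetical samples: a plain AutoIt-style line vs. a packed blob.
plain = 'Local $log = FileOpen("events.txt", 1)\nFileWrite($log, "service started")'
random.seed(0)  # deterministic stand-in for an encoded payload
packed = base64.b64encode(bytes(random.randrange(256) for _ in range(300))).decode()
```

Entropy alone is noisy (compressed media and legitimate certificates also score high), which is why it is one signal among several rather than a verdict on its own.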
4. DarkGate Malware Capabilities
DarkGate, now available as a Malware-as-a-Service (MaaS) offering, provides attackers with advanced features. Technical insights include:
- Credential Dumping: DarkGate used the Mimikatz module to extract credentials from memory and secure storage locations.
- Keylogging Mechanism: Keystrokes were logged and transmitted in real-time to the attacker’s server, enabling credential theft and activity monitoring.
- Fileless Persistence: Utilizing Windows Management Instrumentation (WMI) and registry modifications, the malware ensured persistence without leaving traditional file traces.
- Network Surveillance: The malware monitored network activity to identify high-value targets for lateral movement within the compromised environment.
5. Attack Indicators
Trend Micro researchers identified several indicators of compromise (IoCs) associated with the DarkGate campaign:
- Suspicious Domains: example-remotesupport[.]com and similar domains used for C2 communication.
- Malicious File Hashes:
  - AutoIt Script: 5a3f8d0bd6c91234a9cd8321a1b4892d
  - DarkGate Payload: 6f72cde4b7f3e9c1ac81e56c3f9f1d7a
- Behavioral Anomalies:
  - Unusual outbound traffic to non-standard ports.
  - Unauthorized registry modifications under HKCU\Software\Microsoft\Windows\CurrentVersion\Run.
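File-hash IoCs like those listed above are typically operationalised as a blocklist sweep: hash every file under a directory and report any match. A sketch using MD5 to match the 32-hex-digit indicators from this report (the hash values are the illustrative ones listed above, not independently verified DarkGate samples):

```python
import hashlib
from pathlib import Path

# Indicators of compromise from the report (illustrative values).
IOC_HASHES = {
    "5a3f8d0bd6c91234a9cd8321a1b4892d",  # AutoIt script
    "6f72cde4b7f3e9c1ac81e56c3f9f1d7a",  # DarkGate payload
}

def md5_of(path: Path) -> str:
    """Stream the file through MD5 so large payloads aren't read into memory."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: str) -> list:
    """Return paths under `root` whose MD5 matches a known IoC hash."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and md5_of(p) in IOC_HASHES]
```

Hash matching only catches byte-identical samples; since DarkGate builds are repacked per campaign, defenders pair sweeps like this with the behavioral indicators listed above.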
Broader Cyber Threat Landscape
In parallel with this campaign, other phishing and malware delivery tactics have been observed, including:
- Cloud Exploitation: Abuse of platforms like Cloudflare Pages to host phishing sites mimicking Microsoft 365 login pages.
- Quishing Campaigns: Phishing emails with QR codes that redirect users to fake login pages.
- File Attachment Exploits: Malicious HTML attachments embedding JavaScript to steal credentials.
- Mobile Malware: Distribution of malicious Android apps capable of financial data theft.
Implications of the DarkGate Campaign
This attack highlights the sophistication of threat actors in leveraging legitimate tools for malicious purposes. Key risks include:
- Advanced Threat Evasion: The use of obfuscation and process injection complicates detection by traditional antivirus solutions.
- Cross-Platform Risk: DarkGate’s modular design enables its functionality across diverse environments, posing risks to Windows, macOS, and Linux systems.
- Organizational Exposure: The compromise of a single endpoint can serve as a gateway for further network exploitation, endangering sensitive organizational data.
Recommendations for Mitigation
- Enable Advanced Threat Detection: Deploy endpoint detection and response (EDR) solutions to identify anomalous behavior like process injection and dynamic command loading.
- Restrict Remote Access Tools: Limit the use of tools like AnyDesk to approved use cases and enforce strict monitoring.
- Use Email Filtering and Monitoring: Implement AI-driven email filtering systems to detect and block email bombardment campaigns.
- Enhance Endpoint Security: Regularly update and patch operating systems and applications to mitigate vulnerabilities.
- Educate Employees: Conduct training sessions to help employees recognize and avoid phishing and social engineering tactics.
- Implement Network Segmentation: Limit the spread of malware within an organization by segmenting high-value assets.
Conclusion
The use of Microsoft Teams and AnyDesk to spread DarkGate malware shows how attackers' sophistication continues to grow. The campaign highlights why organizations must maintain adequate security preparedness, including threat identification, employee training, and access controls.
DarkGate is also a clear example of how such attacks have evolved into MaaS offerings: the barrier to launching highly complex attacks keeps falling, which is exactly why a layered defense approach is crucial. Awareness and adaptability remain key to addressing the constantly evolving threats in cyberspace.