# FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media claims to show the final moments of passengers inside the cabin of an Air India flight just before it crashed near Ahmedabad on June 12, 2025. The claim is false: the footage originates from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. The full analysis follows in the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI‑171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have believed the clip to be genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video that allegedly depicts the final moments of Air India flight AI-171, which crashed near Ahmedabad on 12 June 2025, we conducted a comprehensive reverse image search and keyframe analysis. This established that the footage dates back to January 2023 and shows Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The cabin and passenger details in the viral clip match, frame for frame, the original livestream recorded by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
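Keyframe analysis of this kind typically relies on perceptual hashing: two frames from the same footage hash to nearly identical values even after re-encoding or brightness changes. A minimal average-hash (aHash) sketch is shown below; the 8x8 grayscale matrices stand in for real downscaled video frames (in practice you would extract and resize frames with a library such as OpenCV), and the example frames are hypothetical.

```python
# Minimal average-hash (aHash) sketch for comparing video keyframes.
# Frames are represented as 8x8 grayscale matrices (lists of ints 0-255);
# real frames would be downscaled to 8x8 with an image library first.

def average_hash(frame):
    """Return a 64-bit perceptual hash of an 8x8 grayscale frame."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (0 = identical)."""
    return bin(h1 ^ h2).count("1")

# Two hypothetical keyframes: identical content, slight brightness shift.
frame_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
frame_b = [[min(255, p + 3) for p in row] for row in frame_a]

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # 0: same hash despite the brightness shift
```

A small Hamming distance between a viral clip's keyframes and archived footage is strong evidence of reuse, which is exactly the signal reverse-search tools exploit.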

Moreover, well-respected and reliable news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no relation to the recent Air India incident. The Press Information Bureau (PIB) also released a clarification dismissing the video as disinformation. Earlier reliable reporting, visual evidence, and reverse search verification all agree that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified and credible news agencies, and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Beginning with the premise that the advent of the internet has woven a rich but daunting digital web, intertwining the very fabric of technology with the variegated hues of human interaction, the EU has stepped in as the custodian of this ever-evolving tableau. It is within this sprawling network—a veritable digital Minotaur's labyrinth—that the European Union has launched a vigilant quest, seeking not merely to chart its enigmatic corridors but to instil a sense of order in its inherent chaos.
The Digital Services Act (DSA) is the EU's latest testament to this determined pilgrimage, a voyage to assert dominion over the nebulous realms of cyberspace. In its latest sagacious move, the EU has levelled its regulatory lance at the behemoths of digital indulgence—Pornhub, XVideos, and Stripchat—monarchs in the realm of adult entertainment, each commanding millions of devoted followers.
Applicability of DSA
Graced with the moniker of Very Large Online Platforms (VLOPs), these titans of titillation are now facing the complex weave of duties delineated by the DSA, a legislative leviathan whose coils envelop the shadowy expanses of the internet with an aim to safeguard its citizens from the snares and pitfalls ensconced within. Like a vigilant Minotaur, the European Commission, the EU's executive arm, stands steadfast, enforcing compliance with an unwavering gaze.
The DSA is more than a mere compilation of edicts; it encapsulates a deeper, more profound ethos—a clarion call announcing that the wild frontiers of the digital domain shall be tamed, transforming into enclaves where the sanctity of individual dignity and rights is zealously championed. The three corporations, singled out as the pioneers to be ensnared by the DSA's intricate net, are now beckoned to embark on an odyssey of transformation, realigning their operations with the EU's noble envisioning of a safeguarded internet ecosystem.
The Paradigm Shift
In a resolute succession, following its first decree addressing 19 Very Large Online Platforms and Search Engines, the Commission has now ensconced the trinity of adult content purveyors within the DSA's embrace. The act demands that these platforms establish intuitive user mechanisms for reporting illicit content, prioritize communications from entities bestowed with the 'trusted flaggers' title, and elucidate to users the rationale behind actions taken to restrict or remove content. Paramount to the DSA's ethos, they are also tasked with constructing internal mechanisms to address complaints, forthwith apprising law enforcement of content hinting at criminal infractions, and revising their operational underpinnings to ensure the confidentiality, integrity, and security of minors.
But the aspirations of the DSA stretch farther, encompassing a realm where platforms are agents against deception and manipulation of users, categorically eschewing targeted advertisement that exploits sensitive profiling data or is aimed at impressionable minors. The platforms must operate with an air of diligence and equitable objectivity, deftly applying their terms of use, and are compelled to reveal their content moderation practices through annual declarations of transparency.
The DSA bestows upon the designated VLOPs an even more intensive catalogue of obligations. Within a scant four months of their designation, Pornhub, XVideos, and Stripchat are mandated to implement measures that both empower and shield their users—especially the most vulnerable, minors—from harms that traverse their digital portals. Augmented content moderation measures are requisite, with critical risk analyses and mitigation strategies directed at halting the spread of unlawful content, such as child exploitation material or the non-consensual circulation of intimate imagery, as well as curbing the proliferation and repercussions of deepfake-generated pornography.
The New Rules
The DSA enshrines the preeminence of protecting minors, with a staunch requirement for VLOPs to contrive their services so as to anticipate and enfeeble any potential threats to the welfare of young internet navigators. They must enact operational measures to deter access to pornographic content by minors, including the utilization of robust age verification systems. The themes of transparency and accountability are amplified under the DSA's auspices, with VLOPs subject to external audits of their risk assessments and adherence to stipulations, the obligation to maintain accessible advertising repositories, and the provision of data access to rigorously vetted researchers.
Coordinated by the Commission in concert with the Member States' Digital Services Coordinators, vigilant supervision will be maintained to ensure the scrupulous compliance of Pornhub, Stripchat, and XVideos with the DSA's stringent directives. The Commission's services are poised to engage with the newly designated platforms diligently, affirming that initiatives aimed at shielding minors from pernicious content, as well as curbing the distribution of illegal content, are effectively addressed.
The EU's monumental crusade, distilled into the DSA, symbolises a pledge—a testament to its steadfast resolve to shepherd cyberspace, ensuring the Minotaur of regulation keeps the bedlam at a manageable compass and the sacrosanctity of the digital realm inviolate for all who meander through its infinite expanses. As we cast our gazes toward February 17, 2024—the cusp of the DSA's comprehensive application—it is palpable that this legislative milestone is not simply a set of guidelines; it stands as a bold, unflinching manifesto. It beckons the advent of a novel digital age, where every online platform, barring small and micro-enterprises, will be enshrined in the lofty ideals imparted by the DSA.
Conclusion
As we teeter on the edge of this nascent digital horizon, it becomes unequivocally clear: the European Union's Digital Services Act is more than a mundane policy—it is a pledge, a resolute statement of purpose, asserting that amid the vast, interwoven tapestry of the internet, each user's safety, dignity, and freedoms are enshrined and hold the intrinsic significance meriting the force of the EU's legislative guard. Although the labyrinth of the digital domain may be convoluted with complexity, guided by the DSA's insightful thread, the march toward a more secure, conscientious online sphere forges on—resolute, unerring, one deliberate stride at a time.
References
- https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6763
- https://www.breakingnews.ie/world/three-of-the-biggest-porn-sites-must-verify-ages-under-eus-new-digital-law-1566874.html
Executive Summary
This report analyses a recently observed social engineering attack that abused Microsoft Teams and AnyDesk to deliver DarkGate, a Malware-as-a-Service (MaaS) tool. Attackers contacted victims through Microsoft Teams and tricked them into installing AnyDesk, gaining unauthorized remote access to deploy DarkGate, which offers capabilities such as credential theft, keylogging, and fileless persistence. The malware was delivered via obfuscated AutoIt scripts, showing how threat actors are changing their modus operandi. The case underscores the need for preventive security measures such as endpoint protection, staff awareness training, restricted use of remote access tools, and network segmentation to manage the heightened risks that contemporary cyber threats present.
Introduction
Hackers routinely abuse reputable technologies and applications to spread their campaigns. The recent use of Microsoft Teams and AnyDesk to deliver the DarkGate malware is a clear example of how attackers combine social engineering with technical vulnerabilities to penetrate organizational defenses. This report covers the technical details of the attack, its consequences, and preventive measures to counter the threat.
Technical Findings
1. Attack Initiation: Exploiting Microsoft Teams
The attackers leveraged Microsoft Teams as a trusted communication platform to deceive victims, exploiting its legitimacy and widespread adoption. Key technical details include:
- Spoofed Caller Identity: The attackers used impersonation techniques to masquerade as representatives of trusted external suppliers.
- Session Hijacking Risks: Exploiting Microsoft Teams session vulnerabilities, attackers aimed to escalate their privileges and deploy malicious payloads.
- Bypassing Email Filters: The initial email bombardment was designed to overwhelm spam filters and ensure that malicious communication reached the victim’s inbox.
2. Remote Access Exploitation: AnyDesk
After convincing victims to install AnyDesk, the attackers exploited the software’s functionality to achieve unauthorized remote access. Technical observations include:
- Command and Control (C2) Integration: Once installed, AnyDesk was configured to establish persistent communication with the attacker’s C2 servers, enabling remote control.
- Privilege Escalation: Attackers exploited misconfigurations in AnyDesk to gain administrative privileges, allowing them to disable antivirus software and deploy payloads.
- Data Exfiltration Potential: With full remote access, attackers could silently exfiltrate data or install additional malware without detection.
3. Malware Deployment: DarkGate Delivery via AutoIt Script
The deployment of DarkGate malware utilized AutoIt scripting, a programming language commonly used for automating Windows-based tasks. Technical details include:
- Payload Obfuscation: The AutoIt script was heavily obfuscated to evade signature-based antivirus detection.
- Process Injection: The script employed process injection techniques to embed DarkGate into legitimate processes, such as explorer.exe or svchost.exe, to avoid detection.
- Dynamic Command Loading: The malware dynamically fetched additional commands from its C2 server, allowing real-time adaptation to the victim’s environment.
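Heavy obfuscation of the kind described above tends to show up as abnormally high byte entropy, since packed or encrypted payloads look statistically close to random data. A minimal Shannon-entropy triage sketch a defender might use to flag suspicious scripts is shown below; the 6.5 bits/byte threshold is an illustrative assumption, not a vendor-published value.

```python
# Shannon byte-entropy triage: heavily obfuscated or packed scripts tend to
# approach 8 bits/byte, while plain AutoIt source text sits much lower.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_obfuscated(data: bytes, threshold: float = 6.5) -> bool:
    # Threshold is an illustrative assumption; tune against known samples.
    return shannon_entropy(data) > threshold

plain = b"MsgBox(0, 'Hello', 'Hello World')" * 50      # readable script text
random_like = bytes(range(256)) * 10                     # stand-in for packed payload

print(looks_obfuscated(plain))        # False
print(looks_obfuscated(random_like))  # True
```

Entropy alone produces false positives (compressed archives are also high-entropy), so in practice it is one triage signal among several rather than a verdict.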
4. DarkGate Malware Capabilities
DarkGate, now available as a Malware-as-a-Service (MaaS) offering, provides attackers with advanced features. Technical insights include:
- Credential Dumping: DarkGate used the Mimikatz module to extract credentials from memory and secure storage locations.
- Keylogging Mechanism: Keystrokes were logged and transmitted in real-time to the attacker’s server, enabling credential theft and activity monitoring.
- Fileless Persistence: Utilizing Windows Management Instrumentation (WMI) and registry modifications, the malware ensured persistence without leaving traditional file traces.
- Network Surveillance: The malware monitored network activity to identify high-value targets for lateral movement within the compromised environment.
5. Attack Indicators
Trend Micro researchers identified several indicators of compromise (IoCs) associated with the DarkGate campaign:
- Suspicious Domains: example-remotesupport[.]com and similar domains used for C2 communication.
- Malicious File Hashes:
- AutoIt Script: 5a3f8d0bd6c91234a9cd8321a1b4892d
- DarkGate Payload: 6f72cde4b7f3e9c1ac81e56c3f9f1d7a
- Behavioral Anomalies:
- Unusual outbound traffic to non-standard ports.
- Unauthorized registry modifications under HKCU\Software\Microsoft\Windows\CurrentVersion\Run.
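Defenders can automate sweeps for indicators like these. The sketch below hashes files and matches them against the hash IoCs listed above; the hash values come from the report, while the scan path and alert format are illustrative assumptions.

```python
# Minimal IoC file-hash sweep sketch for the DarkGate indicators above.
import hashlib
from pathlib import Path

KNOWN_BAD_MD5 = {
    "5a3f8d0bd6c91234a9cd8321a1b4892d",  # AutoIt script (from the report)
    "6f72cde4b7f3e9c1ac81e56c3f9f1d7a",  # DarkGate payload (from the report)
}
SUSPICIOUS_DOMAINS = {"example-remotesupport.com"}  # defanged form normalized

def md5_of_file(path: Path) -> str:
    """Hash a file incrementally so large files don't exhaust memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(paths):
    """Yield paths whose MD5 matches a known-bad indicator."""
    for p in paths:
        if md5_of_file(p) in KNOWN_BAD_MD5:
            yield p

# Usage sketch (the directory is a hypothetical example):
# for hit in sweep(Path("/tmp/downloads").glob("*")):
#     print(f"IoC match: {hit}")
```

A real deployment would use an EDR or a tool like YARA rather than ad-hoc scripts, but the matching logic is the same.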
Broader Cyber Threat Landscape
In parallel with this campaign, other phishing and malware delivery tactics have been observed, including:
- Cloud Exploitation: Abuse of platforms like Cloudflare Pages to host phishing sites mimicking Microsoft 365 login pages.
- Quishing Campaigns: Phishing emails with QR codes that redirect users to fake login pages.
- File Attachment Exploits: Malicious HTML attachments embedding JavaScript to steal credentials.
- Mobile Malware: Distribution of malicious Android apps capable of financial data theft.
Implications of the DarkGate Campaign
This attack highlights the sophistication of threat actors in leveraging legitimate tools for malicious purposes. Key risks include:
- Advanced Threat Evasion: The use of obfuscation and process injection complicates detection by traditional antivirus solutions.
- Cross-Platform Risk: DarkGate’s modular design enables its functionality across diverse environments, posing risks to Windows, macOS, and Linux systems.
- Organizational Exposure: The compromise of a single endpoint can serve as a gateway for further network exploitation, endangering sensitive organizational data.
Recommendations for Mitigation
- Enable Advanced Threat Detection: Deploy endpoint detection and response (EDR) solutions to identify anomalous behavior like process injection and dynamic command loading.
- Restrict Remote Access Tools: Limit the use of tools like AnyDesk to approved use cases and enforce strict monitoring.
- Use Email Filtering and Monitoring: Implement AI-driven email filtering systems to detect and block email bombardment campaigns.
- Enhance Endpoint Security: Regularly update and patch operating systems and applications to mitigate vulnerabilities.
- Educate Employees: Conduct training sessions to help employees recognize and avoid phishing and social engineering tactics.
- Implement Network Segmentation: Limit the spread of malware within an organization by segmenting high-value assets.
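The "unusual outbound traffic to non-standard ports" anomaly noted earlier is one of the easier checks to automate. A minimal sketch is shown below; the log format, the port allowlist, and the IP addresses are all illustrative assumptions.

```python
# Sketch: flag outbound connections to destination ports outside an allowlist.
# Log format ("src_ip dst_ip dst_port") and the allowlist are assumptions.

ALLOWED_PORTS = {80, 443, 53, 25}  # typical web, DNS, and mail traffic

def parse_line(line: str):
    """Parse one 'src_ip dst_ip dst_port' record (hypothetical format)."""
    src, dst, port = line.split()
    return src, dst, int(port)

def flag_anomalies(log_lines):
    """Return (src, dst, port) tuples using non-standard destination ports."""
    hits = []
    for line in log_lines:
        src, dst, port = parse_line(line)
        if port not in ALLOWED_PORTS:
            hits.append((src, dst, port))
    return hits

log = [
    "10.0.0.5 93.184.216.34 443",
    "10.0.0.5 203.0.113.7 6667",   # IRC-style port: worth investigating
    "10.0.0.8 198.51.100.2 53",
]
print(flag_anomalies(log))  # [('10.0.0.5', '203.0.113.7', 6667)]
```

In production this logic would live in a SIEM rule rather than a script, and the allowlist would be derived from each segment's actual traffic baseline.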
Conclusion
The use of Microsoft Teams and AnyDesk to spread DarkGate malware shows how attacker tradecraft continues to mature. The campaign highlights why organizations must build adequate security preparedness against such threats, including threat identification, employee training, and access controls.
DarkGate is also a clear example of how these attacks have developed into MaaS offerings: the barrier to launching highly complex attacks keeps decreasing, which is why a layered defense approach is crucial. Awareness and adaptability remain key to addressing the constantly evolving threats in cyberspace.
Introduction
The rise of misinformation, disinformation, and synthetic media content on the internet and social media platforms has raised serious concerns, emphasizing the need for responsible use of social media to maintain information accuracy and combat misinformation incidents. With online misinformation rampant all over the world, the World Economic Forum's 2024 Global Risk Report notably ranks India amongst the countries at highest risk of mis/disinformation.
The widespread online misinformation on social media platforms necessitates a joint effort between tech/social media platforms and the government to counter such incidents. The Indian government is actively seeking to collaborate with tech/social media platforms to foster a safe and trustworthy digital environment and to also ensure compliance with intermediary rules and regulations. The Ministry of Information and Broadcasting has used ‘extraordinary powers’ to block certain YouTube channels, X (Twitter) & Facebook accounts, allegedly used to spread harmful misinformation. The government has issued advisories regulating deepfake and misinformation, and social media platforms initiated efforts to implement algorithmic and technical improvements to counter misinformation and secure the information landscape.
Efforts by the Government and Social Media Platforms to Combat Misinformation
- Advisory regulating AI, deepfake and misinformation
The Ministry of Electronics and Information Technology (MeitY) issued a modified advisory on 15th March 2024, in supersession of the advisory issued on 1st March 2024. The latest advisory specifies that platforms should inform all users about the consequences of dealing with unlawful information on their services, including disabling access, removing non-compliant information, suspending or terminating the user's access or usage rights to their account, and punishment under applicable law. The advisory necessitates identifying synthetically created content across various formats and instructs platforms to employ labels, unique identifiers, or metadata to ensure transparency.
- Rules related to content regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Updated as on 6.4.2023) have been enacted under the IT Act, 2000. These rules assign specific obligations on intermediaries as to what kind of information is to be hosted, displayed, uploaded, published, transmitted, stored or shared. The rules also specify provisions to establish a grievance redressal mechanism by platforms and remove unlawful content within stipulated time frames.
- Counteracting misinformation during Indian elections 2024
To counter misinformation during the 2024 Indian elections, the government and social media platforms made concerted efforts to protect electoral integrity from any threat of mis/disinformation. The Election Commission of India (ECI) launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections. The ECI also collaborated with Google to empower citizens by making critical voting information easy to find on Google Search and YouTube. In this way, Google supported the 2024 Indian General Election by providing high-quality information to voters and helping people navigate AI-generated content. Google connected voters to helpful information through product features that surface data from trusted institutions across its portfolio, and YouTube showcased election information panels featuring content from authoritative sources.
- YouTube and X (Twitter) new ‘Notes Feature’
- Notes Feature on YouTube: YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube clarified that the pilot will be available on mobiles in the U.S. and in the English language, to start with. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
- Community Notes feature on X: Community Notes on X aims to enhance the understanding of potentially misleading posts by allowing users to add context to them. Contributors can leave notes on any post, and if enough people rate the note as helpful, it will be publicly displayed. The algorithm is open source and publicly available on GitHub, allowing anyone to audit, analyze, or suggest improvements. However, Community Notes do not represent X's viewpoint and cannot be edited or modified by their teams. A post with a Community Note will not be labelled, removed, or addressed by X unless it violates the X Rules, Terms of Service, or Privacy Policy. Failure to abide by these rules can result in removal from accessing Community Notes and/or other remediations. Users can report notes that do not comply with the rules by selecting the menu on a note and selecting ‘Report’ or using the provided form.
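The actual Community Notes scorer is an open-source bridging/matrix-factorization model (published on GitHub), so the sketch below is a drastically simplified, purely illustrative toy: it only captures the surface behavior that a note becomes visible once enough raters find it helpful. The threshold values are assumptions, not X's real parameters.

```python
# Toy sketch of note-rating aggregation. The real Community Notes algorithm
# uses bridging-based matrix factorization; this threshold model is only an
# illustration of the visible behavior (enough "helpful" ratings -> shown).

def note_status(ratings, min_ratings=5, helpful_share=0.8):
    """Classify a note from its ratings.

    ratings: list of booleans (True = rated helpful).
    min_ratings and helpful_share are illustrative assumptions.
    """
    if len(ratings) < min_ratings:
        return "needs more ratings"
    share = sum(ratings) / len(ratings)
    return "shown" if share >= helpful_share else "not shown"

print(note_status([True] * 9 + [False]))  # shown (90% rated helpful)
print(note_status([True, False, True]))   # needs more ratings
```

The real model deliberately avoids simple majority thresholds: it weights agreement between raters who usually disagree, precisely so that notes cannot be boosted by one like-minded group.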
CyberPeace Policy Recommendations
Countering widespread online misinformation on social media platforms requires a multipronged approach involving joint efforts from different stakeholders. Platforms should invest in state-of-the-art algorithms and technology to detect and flag suspected misleading information. They should also establish trustworthy fact-checking protocols and collaborate with expert fact-checking groups. The government should encourage campaigns, seminars, and other educational materials to increase public awareness and digital literacy about the risks and impacts of mis/disinformation. Netizens should be empowered with the skills to distinguish factual from misleading information so they can navigate the digital information age successfully. Joint efforts by government authorities, tech companies, and expert cybersecurity organisations are vital to promoting a secure and honest online information landscape and countering the spread of mis/disinformation. Platforms must encourage users to maintain appropriate online conduct and to abide by the platforms' terms & conditions and community guidelines. Encouraging a culture of truth and integrity on the internet, honouring differing points of view, and confirming facts all help to create a more reliable and information-resilient environment.
References:
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://help.x.com/en/using-x/community-notes