#FactCheck - Viral Video Claiming to Show Kashmir Avalanche Is AI-Generated
Executive Summary
A video is being shared on social media claiming to show an avalanche in Kashmir. The caption of the post alleges that the incident occurred on February 6. Several users sharing the video are also urging people to avoid unnecessary travel to hilly regions. CyberPeace’s research found that the viral video is not genuine footage of a Kashmir avalanche; it is AI-generated.
Claim
The video is circulating widely on social media platforms, particularly Instagram, with users claiming it shows an avalanche in Kashmir on February 6. The archived version of the post can be accessed here. Similar posts were also found online. (Links and archived links provided)

Fact Check
To verify the claim, we searched relevant keywords on Google. During this process, we found a video posted on the official Instagram account of the BBC. The BBC post reported that an avalanche occurred near a resort in Sonamarg, Kashmir, on January 27. However, the BBC post does not contain the viral video that is being shared on social media, indicating that the circulating clip is unrelated to the real incident.

A close examination of the viral video revealed several inconsistencies. For instance, during the alleged avalanche, people present at the site are not seen panicking, running for cover, or moving toward safer locations. Additionally, the movement and flow of the falling snow appear unnatural. Such visual anomalies are commonly observed in videos generated using artificial intelligence. As part of the research, the video was analyzed using the AI detection tool Hive Moderation, which indicated a 99.9% probability that the video was AI-generated.
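For readers curious how such a check can be scripted, below is a minimal illustrative sketch rather than the actual Hive Moderation integration: the endpoint URL, authentication scheme, and response format are all hypothetical placeholders.

```python
import requests

# Hypothetical endpoint and key; the real Hive Moderation API
# may use different URLs, request fields, and authentication.
API_URL = "https://api.example-detector.com/v1/video/analyze"
API_KEY = "YOUR_API_KEY"

def ai_generated_probability(video_path: str) -> float:
    """Upload a video to a (hypothetical) AI-content detector and
    return the reported probability that it is AI-generated."""
    with open(video_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=120,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated": {"probability": 0.999}}
    return response.json()["ai_generated"]["probability"]

if __name__ == "__main__":
    prob = ai_generated_probability("viral_avalanche_clip.mp4")
    print(f"Probability AI-generated: {prob:.1%}")
```

In practice, fact-checkers combine such tool output with manual inspection of visual anomalies, since automated detectors can produce false positives.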

Conclusion
Based on the evidence gathered during our research, it is clear that the video being shared as footage of a Kashmir avalanche is not genuine. The clip is AI-generated and misleading. The viral claim is therefore false.

Introduction
In the contemporary information environment, misinformation has emerged as a subtle yet powerful force capable of shaping public perception, influencing behavior, and undermining institutional credibility. Unlike overt falsehoods, misinformation often gains traction because it appears authentic, familiar, and authoritative. The rapid circulation of content through digital platforms has intensified this challenge, allowing altered or misleading material to reach wide audiences before verification mechanisms can respond. When misinformation mimics official communication, its impact becomes especially concerning, as citizens tend to place implicit trust in documents that carry the appearance of state authority. This growing vulnerability of public information systems was illustrated by the calendar incident in Himachal Pradesh in January 2026.
The incident shows how a small falsehood can create large social and governance problems. A person whose identity remains unknown posted a modified version of the Government Calendar 2026, changing official dates and causing public confusion and reputational damage to the Printing and Stationery Department. The incident may not appear serious at first sight, but it points to a deeper systemic issue: misinformation poses a growing danger to public information ecosystems, especially when official documents are misrepresented and disseminated through digital platforms.
Misinformation as a Governance Challenge
Government calendars and official documents are necessary for public awareness and administrative coordination, and their manipulation undermines the credibility of institutions and the trustworthiness of governance. In Himachal Pradesh, the modified dates might have led to confusion over public holidays, disruption of school and administrative planning, and misinformation among the public. Such misinformation directly interferes with the social contract between citizens and the State, in which accurate information is the foundation of trust, compliance, and participation.
Impact on Citizens: Confusion, Distrust, and Digital Fatigue
For the general public, the dissemination of fake government information creates confusion and, at the same time, erodes trust in government communication channels. When people repeatedly encounter altered or misleading information presented as credible, they eventually find it hard to distinguish truth from falsehood.
This results in:
- Decision paralysis, where the public, unsure of what to believe, postpones or refrains from action
- Erosion of trust, not only in one department but in government communication as a whole
- Digital fatigue, where people disengage from public information altogether because they assume any content may be unreliable
Misinformation in a digital society is not confined to a single platform. It spreads quickly through direct messaging apps, community groups, and social networks, creating widespread confusion before official clarifications can reach the same audience.
Institutional Harm and Reputational Damage
Intentional tampering with official documents is not only unethical but also a criminal act with serious governance implications. The Printing and Stationery Department noted that such practices tarnish the public image of government bodies, which rest on accuracy, neutrality, and trust.
When false material circulates as official content:
- Departments are forced into reactive communication.
- Funds and manpower that could support routine administrative work are diverted to damage control.
The registration of a First Information Report (FIR) in this matter indicates a gradual shift in how law enforcement agencies perceive misinformation: not as a playful act but as a technology-assisted crime with serious consequences.
The Role of Verifiable Information and Trusted Sources
Such occurrences stress the need for trustworthy information and verified sources to sit at the centre of the digital era. Authorities have a responsibility to guide and enable citizens to rely on official websites, verified social media accounts, government portals, and press releases for authentication.
Platform Responsibility and Digital Literacy
The spread of misinformation poses a significant challenge for social media platforms, which frequently amplify highly engaging content. Platforms can limit the damage by labelling unverified material, restricting its sharing, and working with authorities on fact-checking support. Equally crucial is public awareness of how digital platforms work, since even unintentional dissemination of fake “official” material can carry legal and social repercussions. The Himachal Pradesh government’s advisory is a welcome step, but sustained public education remains essential.
Legal Accountability as a Deterrent
The active participation of the Cyber Crime Cells clearly indicates that digital misinformation, especially involving government documents, will face serious consequences. Legal accountability acts as a deterrent and reiterates that the right to speak one's mind does not include the right to lie or to undermine public institutions. Nonetheless, for enforcement to be effective, it must be accompanied by preventive measures such as clear communication, strong governance, and public trust-building. Consistent enforcement against digital misinformation can foster greater accountability within society, and digital literacy programmes should be conducted periodically for netizens and institutions.
Conclusion
The fake calendar incident in Himachal Pradesh served as a signal for the authorities to adopt accurate communication strategies. Countering misinformation requires the shared participation of governments, digital platforms, citizens, and civil society. The ultimate goal is to preserve public trust and the integrity of information in democratic processes.

Introduction
Generative Artificial Intelligence, or GenAI, is changing the employee workday: its use is no longer limited to writing emails or debugging code, but now extends to analysing contracts, generating reports, and much more. AI tools have become commonplace in everyday work, but the speed at which companies have adopted them has created a new kind of risk. Unlike threats that come from an outside attacker, Shadow AI arises inside an organisation, when a legitimate employee uses unapproved AI tools to work more efficiently and productively. In many cases, the employee is unaware of the security, data privacy, and compliance risks involved in using such tools for their job duties.
What Is Shadow AI?
Shadow AI refers to individuals using AI tools at work, such as chatbots, web applications, or other software, that are not provided or approved by the company, without the employer’s knowledge or permission. Examples of shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete tasks at the office
- Uploading business-related documents to online AI tools for analysis or summarisation
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons that are not approved by IT or security personnel
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved AI tools, that information moves outside the company’s controls. This can include employees’ and third parties’ personal information, private company information (such as source code or contracts), and internal strategies. Once data is entered into such tools, the company loses the ability to monitor how it is stored, processed, or retained. A data leak can therefore occur without any malicious cyberattack: the biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws such as the GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other relevant sectoral laws require businesses to process data lawfully, minimise the data they use, and remain accountable for their actions. Shadow AI often results in unlawful use of personal data: there is no legal basis for the processing, cross-border data transfers go unauthorised, and no contractual protections are in place with the AI service providers. Regulators do not accept employee convenience as an excuse for non-compliance, and the organisation remains ultimately responsible for any violations.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports revealed that engineers had used ChatGPT to debug internal source code and to summarise meeting notes. As a result, confidential semiconductor-related source code was inadvertently uploaded to a public AI platform. While no known intrusion into the company’s systems resulted from this incident, Samsung faced a significant challenge: once sensitive information is entered into a public AI tool, it resides on external servers outside the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued a series of internal communications prohibiting the sharing of corporate data with public AI tools, and accelerated its discussions on adopting secure, enterprise-level AI solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has demonstrated that prohibition does not work; attempts to block access to cloud storage, instant messaging, and collaboration tools showed the same pattern. When employers block AI, employees turn to personal devices and accounts, which deprives employers of real-time visibility into how these technologies are used and creates friction with security and compliance teams trying to enforce tool policies. Prohibiting AI will not stop its adoption; it only removes the employer's ability to make that adoption safe and responsible. The challenge for effective organisations is therefore to move beyond denial and develop governance-first AI strategies that control data usage, protection, and security, rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the IT department; it is a failure of governance, compliance, and law. When employees use unapproved AI tools, the organisation may process personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurpose data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceed necessity in breach of data minimisation (Article 5(1)(c)). Such tools can also involve international data transfers without authorisation, in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the failure to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, Shadow AI is therefore neither an excusable accident nor a defensible practice.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers should supply business-approved AI tools that help workers stay productive while maintaining strong protections: segregated data storage, guarantees that corporate data is not used to train models, defined retention periods, and clear deletion rules. When employees have verified, secure AI options that fit their work processes, their reliance on Shadow AI drops significantly.
2. Enforce Zero-Trust Data Access
AI governance should follow zero-trust principles: grant data access on a least-privilege basis, continuously verify user identity and context, and apply context-aware controls that monitor and track all user activity. This becomes especially important as agent-like AI systems grow more autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
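As a rough illustration of what a least-privilege, context-aware access decision can look like in code, here is a minimal sketch; the roles, resources, and context signals are hypothetical and would map onto an organisation's own identity provider and policy engine.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str             # e.g. resolved from the identity provider
    resource: str         # e.g. "contracts", "source_code"
    device_managed: bool  # context signal: corporate-managed device?
    network_trusted: bool # context signal: trusted network segment?

# Hypothetical least-privilege policy: each role may touch only
# the minimal set of resources it needs.
ROLE_PERMISSIONS = {
    "legal": {"contracts"},
    "engineer": {"source_code"},
    "analyst": {"reports"},
}

def is_allowed(req: AccessRequest) -> bool:
    """Zero-trust style decision: check role entitlement AND
    per-request context on every call; never trust by default."""
    entitled = req.resource in ROLE_PERMISSIONS.get(req.role, set())
    context_ok = req.device_managed and req.network_trusted
    return entitled and context_ok

# Example: an engineer on an unmanaged device is denied, even
# though the role would normally permit source-code access.
req = AccessRequest("alice", "engineer", "source_code",
                    device_managed=False, network_trusted=True)
print(is_allowed(req))  # False
```

The design point is that entitlement alone is never sufficient: every request is re-evaluated against live context, which is what distinguishes zero trust from a one-time permission grant.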
3. Apply DLP and Audit Logging
Robust data loss prevention (DLP) measures should inspect sensitive data before it leaves the organisation, and a comprehensive audit log should record which user or machine accessed the data, when, and how. In combination with other controls, these measures create accountability, support regulatory compliance, and help detect and respond to incidents.
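To make this concrete, here is a minimal sketch of a DLP-style pre-flight check with audit logging for text bound for an external AI tool; the patterns and log format are illustrative assumptions, not a production DLP engine.

```python
import logging
import re
from datetime import datetime, timezone

# Illustrative patterns only; real DLP engines use far richer
# detectors (classifiers, fingerprinting, exact-data matching).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "id_number_like": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
}

audit_log = logging.getLogger("ai_egress_audit")
logging.basicConfig(level=logging.INFO)

def check_outbound_prompt(user: str, destination: str, text: str) -> bool:
    """Scan text before it is sent to an external AI tool.
    Returns True if the prompt may be sent, False if blocked.
    Every decision is audit-logged: who, where, when, and why."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
    timestamp = datetime.now(timezone.utc).isoformat()
    if hits:
        audit_log.warning("BLOCKED %s -> %s at %s (matched: %s)",
                          user, destination, timestamp, ", ".join(hits))
        return False
    audit_log.info("ALLOWED %s -> %s at %s", user, destination, timestamp)
    return True

# Example: an employee pastes a customer email into a chatbot prompt.
ok = check_outbound_prompt("bob", "public-chatbot.example.com",
                           "Summarise this: contact jane.doe@acme.com")
print(ok)  # False: blocked and audit-logged
```

A real deployment would typically sit at a network gateway or endpoint agent and forward its logs to a central monitoring system rather than printing decisions locally.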
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
Shadow AI exposes an organisation to data leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, with the trust, accountability, and effective oversight needed to protect data and let AI reach its full transformative potential. AI use should be encouraged, but it must be responsible, ethical, and secure.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007
Introduction
In the vast expanse of the digital cosmos, where the tendrils of the internet weave an intricate tapestry of connectivity, the channels through which information cascades have become a labyrinth of enigma and complexity. As we traverse this boundless virtual landscape, the line demarcating fact from fiction blurs, leaving the essence of truth adrift in a deluge of data. Amidst this ceaseless flow, platforms such as YouTube, Meta, and Twitter emerge as bulwarks in a pivotal struggle against the insidious spectres of fake news and disinformation—a struggle as fervent and consequential as any historical skirmish over the dominion of truth and influence.
Let us delve into a few case studies that illustrate the multifaceted nature of this digital warfare, where the stakes are nothing less than the integrity of public discourse and the sanctity of societal harmony.
Case 1: A Chief Minister's Stand Against Digital Deception
In the northeastern reaches of India, Assam's Chief Minister, Himanta Biswa Sarma, confronted disinformation head-on. With the spectre of elections looming like a storm on the horizon, he took to the microblogging site X to unveil a nefarious scheme—a doctored video intended to distort his speech and sow seeds of communal discord. 'See for yourself, as elections approach, how vested groups distort a speech with the criminal intention of spreading disinformation and communal disharmony. The long arms of the law will catch up with these elements,' declared Sarma, his words a clarion call for vigilance.
The counterfeit video, crafted to smear the Chief Minister's reputation, elicited a swift and decisive response from Assam's Director General of Police, G.P. Singh. 'Noted Sir. CID Assam would register a criminal case and investigate the people behind this,' assured Singh, signalling the readiness of the law to pursue the purveyors of falsehood.
Case 2: Waves of Deceit: Unverified Claims of Cancellations in the Maldives Tourism Controversy
The narrative shifts to the idyllic archipelago of the Maldives, where the azure waters belie a tumultuous undercurrent of diplomatic discord with India. Following disparaging remarks by Maldivian officials directed at Indian Prime Minister Narendra Modi, the social media sphere became rife with claims of Indian tourists en masse cancelling their sojourns to the island nation. Screenshots purporting to show cancelled bookings flooded platforms like X, with one user claiming to have annulled a reservation at the Palms Retreat, Fulhadhoo, to the tune of at least Rs 5 lakh, citing the officials' 'racist remarks.'
Initial reports from a few media outlets lent credence to this narrative of widespread cancellations. However, upon closer scrutiny, the veracity of these claims crumbled like a sandcastle at high tide. Concrete evidence to substantiate the alleged boycott was conspicuously absent, and neither travel agencies nor airlines corroborated the supposed trend.
The controversy was inflamed when PM Modi's visit to Lakshadweep, and subsequent social media posts praising the archipelago, spurred Indian users to champion Lakshadweep as an alternative to the Maldives. The vitriolic response from Maldivian ministers, who labelled Modi with derogatory remarks, ignited a firestorm on X, with hashtags like #BoycottMaldives and #MaldivesBoycott trending fervently.
Yet, the truth behind the cacophony of cancellation numbers remains shrouded in ambiguity, with no official acknowledgement from either government and a conspicuous absence of data from the tourism industry.
Case 3: Misinformation Highway: Lies, Thumbnails, and Digital Dalliances in Bollywood Rumours
Our gaze now turns to YouTube, where fabricated thumbnails and rumour-laden taglines on uploaded videos, emblazoned with tantalising text, beckon viewers with the promise of scandalous revelations. 'Pregnant? Divorced?' they shout, luring millions into their web with the allure of salacious 'news.' Yet these are but mirages: baseless rumours masquerading as fact, or worse, complete fabrications.
The platform teems with counterfeit narratives and rumours, targeting the luminaries of Bollywood. Factors such as easy content uploading without strict scrutiny, a burgeoning digital footprint, and India's insatiable appetite for celebrity culture have created a fertile ground for the proliferation of such content. It is a testament to the power of the digital age, where anyone with a connection can craft a narrative and cast it into the ether, regardless of its foundation in reality.
Against this relentless onslaught of misinformation, we must arm ourselves with discernment and scepticism. The digital realm, for all its wonders, is also a battleground where the currency is truth, and the price of negligence is the erosion of our collective understanding. As we navigate this ever-evolving landscape, let us hold fast to the principles of verification and evidence, for they are the compass by which we can chart a course through the maelstrom of misinformation that seeks to engulf us.
Conclusion
In this era of digital enlightenment, it is incumbent upon us to discern the chaff from the wheat, to elevate the discourse beyond the mire of falsehoods. Let us endeavour to foster a digital polity that values truth, champions authenticity, and resolutely stands against the tide of disinformation that threatens to undermine the very fabric of our society.
References:
- https://www.indiatodayne.in/assam/video/assam-cm-exposes-fake-video-scheme-dgp-promises-swift-action-743097-2024-01-08
- https://www.thequint.com/news/webqoof/boycott-maldives-misinformation-on-trip-booking-cancellations
- https://www.thequint.com/news/webqoof/bollywood-fake-news-on-youtube-uses-divorce-pregnancy-and-arrests-for-misinformation