#FactCheck - Viral Video Misrepresented as Reaction to Pakistan’s Defeat in T20 World Cup
Executive Summary
A video is being widely shared on social media with the claim that Baloch people celebrated by dancing after Pakistan’s crushing defeat to India in the T20 World Cup. However, research by CyberPeace found the claim to be misleading. The video is actually from a Lohri celebration held on January 23 at Government College University in Lahore, and is unrelated to any cricket match. India defeated Pakistan by 61 runs in the T20 World Cup 2026 match held in Colombo last Sunday. India scored 175 runs for the loss of seven wickets in 20 overs, while Pakistan were bowled out for 114 runs in 18 overs.
Claim
The 30-second video was shared on X with the caption, “Baloch people celebrate India’s victory.” The footage shows a group of men dressed in traditional attire dancing around a fire, while a large crowd gathers around and applauds.

Fact Check
To verify the authenticity of the viral claim, key frames from the video were extracted and subjected to reverse image search. The search led to an Instagram post uploaded on January 26, 2026, by an account associated with Government College University Lahore. The caption described the performance as a Balochistan cultural dance held at the university’s amphitheatre.
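The key-frame step described above can be sketched in code. The snippet below is a minimal illustration (not CyberPeace’s actual tooling) of how distinct frames might be selected from a video before submitting them to a reverse image search engine: a frame is kept only when it differs noticeably from the last kept frame, so near-duplicate frames are skipped. The threshold value and flat-list frame representation are illustrative assumptions.

```python
def select_key_frames(frames, threshold=30.0):
    """Keep the indices of frames whose mean absolute pixel difference
    from the last kept frame exceeds `threshold`.

    frames: list of greyscale frames, each a flat list of pixel values (0-255).
    """
    key_frames = []
    last = None
    for idx, frame in enumerate(frames):
        if last is None:
            key_frames.append(idx)  # always keep the first frame
            last = frame
            continue
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:  # scene changed enough to be a new key frame
            key_frames.append(idx)
            last = frame
    return key_frames
```

The selected frames would then be uploaded to reverse image search services, which is how the January 2026 Instagram posts from Government College University Lahore were located.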

Further research also uncovered another video of the same event, recorded from a different angle and uploaded on January 24, 2026, on Instagram. The caption again confirmed that the event took place at Government College University Lahore.

Conclusion
The evidence confirms that the viral video does not show Baloch people celebrating Pakistan’s defeat in the T20 World Cup. Instead, it depicts a cultural dance performance during a Lohri celebration at Government College University Lahore, and has been shared with a misleading claim.
Related Blogs
Introduction
Conversations surrounding the scourge of misinformation online typically focus on the risks to social order, political stability, economic safety and personal security. An oft-overlooked aspect of this phenomenon is the fact that it also takes a very real emotional and mental toll on people. Even as we grapple with the big picture questions about financial fraud or political rumors or inaccurate medical information online, we must also appreciate the fact that being exposed to misinformation and becoming aware of one’s own vulnerability are both significant sources of mental stress in today’s digital ecosystem.
Inaccurate information causes confusion and worry, which has negative consequences for mental health. Misinformation may also impair people's sense of well-being by undermining their trust in institutions, authority figures, and their own judgment. The constant bombardment of misinformation can lead to information overload, wherein people are unable to discriminate between legitimate sources and misleading content, resulting in mental exhaustion and a sense of being overwhelmed by the sheer volume of information available. Vulnerable groups such as children, the elderly, and those with pre-existing health conditions are more sensitive or susceptible to the negative effects of misinformation.
How Does Misinformation Endanger Mental Health?
Misinformation on social media platforms is a matter of public health because it has the potential to confuse people, lead to poor decision-making and result in cognitive dissonance, anxiety and unwanted behavioural changes.
Unconstrained misinformation can also lead to social disorder and the prevalence of negative emotions amongst larger numbers, ultimately causing a huge impact on society. Therefore, understanding the spread and diffusion characteristics of misinformation on Internet platforms is crucial.
The spread of misinformation can elicit different public emotions, and those emotions shift as the misinformation propagates. Factors such as user engagement, the number of comments, and the duration of discussion all influence these emotional changes. Active users tend to comment more, engage longer in discussions, and display more negative emotions when triggered by misinformation. Understanding how such emotions evolve is important because social media magnifies emotional impact and allows emotions to spread rapidly across networks. For example, the emotional charge of misinformation intensifies around sensitive topics such as political elections, viral trends, health-related information, communal and local news, and natural disasters. Unchecked misinformation online affects not only the public's psychology, mental health and behaviour, but also the stability of social order and the maintenance of social security.
Prebunking and Debunking To Build Mental Guards Against Misinformation
As the spread of misinformation and disinformation rises, so do the techniques aimed at tackling it. Prebunking, or attitudinal inoculation, is a technique for training individuals to recognise and resist deceptive communications before they can take root. It is a psychological method for mitigating the effects of misinformation, strengthening resilience and creating cognitive defences against future misinformation. Debunking provides individuals with accurate information to counter false claims and myths, correcting misconceptions and preventing the spread of misinformation. By presenting evidence-based refutations, debunking helps individuals distinguish fact from fiction.
What do health experts say about online misinformation?
“In the 21st century, mental health is crucial due to the overwhelming amount of information available online. COVID-19 pandemic-related misinformation was a prime example of this, spreading online and leading to increased anxiety, panic buying, fear of leaving home, and mistrust in health measures. To protect our mental health, it is essential to cultivate a discerning mindset, question sources, and verify information before consumption. Fostering a supportive community that encourages open dialogue and fact-checking can help navigate the digital information landscape with confidence and emotional support. Prioritising self-care routines, mindfulness practices, and seeking professional guidance are also crucial for safeguarding mental health in the digital information era.”
In conversation with CyberPeace, says Dubai-based psychologist Aishwarya Menon (BA in Psychology and Criminology, University of Western Ontario, London; MA in Mental Health and Addictions, Humber College, University of Guelph, Toronto).
CyberPeace Policy Recommendations:
1) Countering misinformation is everyone's shared responsibility. To mitigate the negative effects of infodemics online, we must look at developing strong legal policies, creating and promoting awareness campaigns, relying on authenticated content on mass media, and increasing people's digital literacy.
2) Expert organisations actively verifying the information through various strategies including prebunking and debunking efforts are among those best placed to refute misinformation and direct users to evidence-based information sources. It is recommended that countermeasures for users on platforms be increased with evidence-based data or accurate information.
3) The role of social media platforms is crucial in the misinformation crisis, hence it is recommended that social media platforms actively counter the production of misinformation on their platforms. Local, national, and international efforts, along with additional research, are required to implement robust counter-misinformation strategies.
4) Netizens are advised to follow official sources to check the reliability of any news or information. They must recognise red flags such as questionable facts, poorly written text, surprising or upsetting news, fake social media accounts, and fake websites designed to look like legitimate ones. Netizens are also encouraged to develop the cognitive skills to discern fact from fiction, and to approach information with a healthy dose of skepticism and curiosity.
Final Words:
Amid the escalating and disturbing rise of misinformation incidents on various subjects, safeguarding our minds requires cognitive skills, media literacy, verifying information against trusted sources, and prioritising mental health through self-care practices and staying connected with supportive, authenticated networks. Promoting prebunking and debunking initiatives is also necessary. In these ways, netizens can protect themselves against the negative effects of misinformation and cultivate a resilient mindset in the digital information age.
References:
- https://www.hindawi.com/journals/scn/2021/7999760/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/
Introduction
The rise of misinformation, disinformation, and synthetic media content on the internet and social media platforms has raised serious concerns, emphasizing the need for responsible use of social media to maintain information accuracy and combat misinformation incidents. With online misinformation rampant all over the world, the World Economic Forum's 2024 Global Risks Report notably ranks India among the countries facing the highest risk of mis/disinformation.
The widespread online misinformation on social media platforms necessitates a joint effort between tech/social media platforms and the government to counter such incidents. The Indian government is actively seeking to collaborate with tech/social media platforms to foster a safe and trustworthy digital environment and to ensure compliance with intermediary rules and regulations. The Ministry of Information and Broadcasting has used ‘extraordinary powers’ to block certain YouTube channels and X (Twitter) and Facebook accounts allegedly used to spread harmful misinformation. The government has issued advisories addressing deepfakes and misinformation, and social media platforms have initiated algorithmic and technical improvements to counter misinformation and secure the information landscape.
Efforts by the Government and Social Media Platforms to Combat Misinformation
- Advisory regulating AI, deepfake and misinformation
The Ministry of Electronics and Information Technology (MeitY) issued a modified advisory on 15th March 2024, in supersession of the advisory issued on 1st March 2024. The latest advisory specifies that platforms should inform all users about the consequences of dealing in unlawful information on platforms, including disabling access, removing non-compliant information, suspending or terminating the user's access or usage rights to their account, and punishment under applicable law. The advisory also requires identifying synthetically created content across various formats, and instructs platforms to employ labels, unique identifiers, or metadata to ensure transparency.
- Rules related to content regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Updated as on 6.4.2023) have been enacted under the IT Act, 2000. These rules assign specific obligations on intermediaries as to what kind of information is to be hosted, displayed, uploaded, published, transmitted, stored or shared. The rules also specify provisions to establish a grievance redressal mechanism by platforms and remove unlawful content within stipulated time frames.
- Counteracting misinformation during Indian elections 2024
To counter misinformation during the Indian elections, the government and social media platforms worked to protect electoral integrity from the threat of mis/disinformation. The Election Commission of India (ECI) launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections in 2024. The ECI also collaborated with Google to empower citizens by making it easy to find critical voting information on Google Search and YouTube. In this way, Google supported the 2024 Indian General Election by providing high-quality information to voters and helping people navigate AI-generated content. Google connected voters to helpful information through product features that surface data from trusted institutions across its portfolio, and YouTube showcased election information panels featuring content from authoritative sources.
- YouTube and X (Twitter) new ‘Notes Feature’
- Notes Feature on YouTube: YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube clarified that the pilot will be available on mobiles in the U.S. and in the English language, to start with. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
- Community Notes feature on X: Community Notes on X aims to enhance the understanding of potentially misleading posts by allowing users to add context to them. Contributors can leave notes on any post, and if enough people rate the note as helpful, it will be publicly displayed. The algorithm is open source and publicly available on GitHub, allowing anyone to audit, analyze, or suggest improvements. However, Community Notes do not represent X's viewpoint and cannot be edited or modified by their teams. A post with a Community Note will not be labelled, removed, or addressed by X unless it violates the X Rules, Terms of Service, or Privacy Policy. Failure to abide by these rules can result in removal from accessing Community Notes and/or other remediations. Users can report notes that do not comply with the rules by selecting the menu on a note and selecting ‘Report’ or using the provided form.
CyberPeace Policy Recommendations
Countering widespread online misinformation on social media platforms requires a multipronged approach involving joint efforts from different stakeholders. Platforms should invest in state-of-the-art algorithms and technology to detect and flag suspected misleading information, establish trustworthy fact-checking protocols, and collaborate with expert fact-checking groups. The government should encourage campaigns, seminars, and other educational materials to increase public awareness and digital literacy about the risks and impacts of mis/disinformation. Netizens should be empowered with the skills to distinguish fact from misleading information and to navigate toward accurate information in the digital age. Joint efforts by government authorities, tech companies, and expert cybersecurity organisations are vital to promoting a secure and honest online information landscape and countering the spread of mis/disinformation. Platforms must encourage netizens to maintain appropriate online conduct and abide by their terms and conditions and community guidelines. Encouraging a culture of truth and integrity on the internet, honouring differing points of view, and verifying facts all help to create a more reliable and information-resilient environment.
References:
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://help.x.com/en/using-x/community-notes

Introduction
In the sprawling online world, trusted relationships are frequently taken advantage of by cybercriminals seeking to penetrate guarded systems. The Watering Hole Attack is one advanced method, which focuses on a user’s ecosystem by compromising the genuine sites they often use. This attack method is different from phishing or direct attacks as it quietly exploits the everyday browsing of the target to serve malicious content. The quiet and exact nature of watering hole attacks makes them prevalent amongst Advanced Persistent Threat (APT) groups, especially in conjunction with state-sponsored cyber-espionage operations.
What Qualifies as a Watering Hole Attack?
A Watering Hole Attack compromises a trusted website, one used by a particular organisation or community, such as a specific industry sector. The name is an analogy to predators in the wild waiting by the water’s edge for prey to come and drink. Attackers prey on their targets by injecting malicious code, such as an exploit kit or malware loader, into websites that are popular with their victims, who are then infected when they unknowingly visit those sites. This opens a gateway for attackers to infiltrate corporate systems, harvest credentials, and pivot across internal networks.
How Watering Hole Attacks Unfold
The attack lifecycle usually progresses as follows:
- Reconnaissance - Attackers gather intelligence on the websites frequented by the target audience, including specialized communities, partner websites, or local news sites.
- Website Exploitation - Exploiting outdated CMS software and insecure plugins, attackers gain access to the target website and insert malicious code such as JavaScript or iframe redirections.
- Delivery and Exploitation - The visitor’s browser executes the malicious code injected into the page. The code might include a redirection payload which sends the user to an exploit kit that checks the user’s browser, plugins, operating system, and other components for vulnerabilities.
- Infection and Persistence - The infected system is implanted with malware such as RATs, keyloggers, or backdoors, enabling lateral movement and long-term persistence within the organisation for espionage.
- Command and Control (C2) - For further instructions, additional payload delivery, and stolen data retrieval, infected devices connect to servers managed by the attackers.
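The "Website Exploitation" stage above typically leaves a visible trace: a script or iframe tag whose source points at a domain the site does not normally load from. As a rough, defender-side illustration (the scanner below and its allow-list approach are assumptions for the sketch, not a production tool), a page's HTML can be checked for such injected sources with the standard library alone:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class InjectedSourceScanner(HTMLParser):
    """Collect script/iframe sources that point outside an allow-list of domains."""

    def __init__(self, allowed_domains):
        super().__init__()
        self.allowed = set(allowed_domains)
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return  # inline script; out of scope for this sketch
        domain = urlparse(src).netloc
        if domain and domain not in self.allowed:
            self.suspicious.append(src)

# Hypothetical page: one legitimate CDN script, one injected iframe.
html = ('<script src="https://cdn.example.org/app.js"></script>'
        '<iframe src="https://evil.example.net/kit.html"></iframe>')
scanner = InjectedSourceScanner(["cdn.example.org"])
scanner.feed(html)
print(scanner.suspicious)  # only the iframe pointing at the unknown domain
```

In practice, attackers obfuscate injected code heavily, so real integrity monitoring compares the served page against a known-good baseline rather than relying on an allow-list alone.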
Key Features of Watering Hole Attacks
- Indirect Approach: Instead of going after the main target, attackers focus on sites that the main target trusts.
- Supply-Chain-Like Impact: An infected industry portal can affect many companies at the same time.
- Low Profile: It is difficult to identify since the traffic comes from real websites.
- Advanced Customization: Exploit kits are known to specialize in making custom payloads for specific browsers or OS versions to increase the chance of success.
Why Are These Attacks Dangerous?
Watering hole attacks shift the battlefield in cyber warfare to new ground. They slip past firewalls, email shields, and other perimeter defences because their traffic flows to and from real, trusted websites. When the attacks work as intended, the following consequences can be expected:
- Stealing Credentials: Including privileged accounts and VPN credentials.
- Espionage: Theft of intellectual property, defense blueprints, or government confidential information.
- Supply Chain Attacks: Resulting in a series of infections among related companies.
- Zero-Day Exploits: Including automated attacks using zero-day exploits for full damage.
Incidents of Primary Concern
The implications of watering hole attacks have been felt in the real world for quite some time. An example from 2019 reveals this, where a known VoIP firm’s site was compromised and used to spread data-stealing malware to its users. Likewise, in 2014, the Operation Snowman campaign—which seems to have a state-backed origin—attempted to infect users of a U.S. veterans’ portal in order to gain access to visitors from government, defense, and related fields. Rounding out the list, in 2021, cybercriminals attacked regional publications focusing on energy, using the publications to spread malware to company officials and engineers working on critical infrastructure, as well as to steal data from their systems. These attacks show the widespread and dangerous impact of watering hole attacks in the world of cybersecurity.
Detection Issues
Due to the following reasons, traditional approaches to security fail to detect watering hole attacks:
- Use of Authentic Websites: Attacks involving trusted and popular domains evade detection via blacklisting.
- Encrypted Traffic: Delivering payloads over HTTPS conceals malicious scripts from being inspected at the network level.
- Fileless Methods: Using in-memory execution is a modern campaign technique, and detection based on signatures is futile.
Mitigation Strategies
To effectively neutralize the threat of watering hole attacks, an organization should implement a defense-in-depth strategy that incorporates the following elements:
- Patch Management and Hardening -
- Conduct routine updates on operating systems, web browsers, and extensions to eliminate exploit opportunities.
- Either remove or reduce the use of high-risk elements such as Flash and Java, if feasible.
- Network Segmentation - Minimize lateral movement by isolating critical systems from the general user network.
- Behavioral Analytics - Implement Endpoint Detection and Response (EDR) tools to oversee unusual behaviors on processes—for example, script execution or dubious outgoing connections.
- DNS Filtering and Web Isolation - Implement DNS-layer security to deny access to known malicious domains and use browser isolation for dangerous sites.
- Threat Intelligence Integration - Track watering hole threats and campaigns for indicators of compromise (IoCs) on advisories and threat feeds.
- Multi-Layer Email and Web Security - Use web gateways integrated with dynamic content scanning, heuristic analysis, and sandboxing.
- Zero Trust Architecture - Apply least-privilege access, and require device attestation and continuous authentication for access to sensitive resources.
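The DNS filtering item above can be made concrete with a small sketch. The function below (an illustrative assumption, not any vendor's implementation) checks whether a queried domain, or any of its parent domains, appears on a blocklist of known watering hole infrastructure, which is the core decision a DNS-layer filter makes before answering a query:

```python
def is_blocked(domain, blocklist):
    """Return True if the domain or any parent domain is on the blocklist.

    Blocking "evil.example.net" should also block "tracker.evil.example.net",
    so we test each suffix of the domain, from most to least specific.
    """
    parts = domain.lower().rstrip(".").split(".")
    # Stop before the bare TLD, which would never be blocklisted wholesale.
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in blocklist:
            return True
    return False
```

A real resolver would combine this lookup with threat-intelligence feeds that are refreshed continuously, since watering hole domains are often legitimate sites that are only temporarily compromised.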
Incident Response Best Practices
- Forensic Analysis: Check affected endpoints for any mechanisms set up for persistence and communication with C2 servers.
- Log Review: Look through proxy, DNS, and firewall logs to detect suspicious traffic.
- Threat Hunting: Search your environment for known Indicators of Compromise (IoCs) related to recent watering hole attacks.
- User Awareness Training: Help employees understand the dangers related to visiting external industry websites and promote safe browsing practices.
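The log review and threat hunting steps above boil down to matching outbound destinations against known indicators of compromise. As a minimal sketch (the log format and helper name are assumptions for illustration), a batch of proxy log lines can be swept for connections to IoC domains:

```python
import re

def hunt_iocs(log_lines, ioc_domains):
    """Return (line_number, line) pairs whose destination host matches an IoC domain.

    log_lines: iterable of proxy/firewall log entries containing a URL.
    ioc_domains: set of lowercase domains from a threat-intelligence feed.
    """
    hits = []
    pattern = re.compile(r"https?://([^/\s]+)")  # capture the host part of a URL
    for num, line in enumerate(log_lines, start=1):
        m = pattern.search(line)
        if m and m.group(1).lower() in ioc_domains:
            hits.append((num, line))
    return hits
```

Any hit would then feed the forensic analysis step: the source endpoint is examined for persistence mechanisms and C2 communication.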
The Immediate Need for Action
The adoption of cloud computing and remote working models has significantly increased the attack surface for watering hole attacks. High-trust sectors such as healthcare are increasingly targeted by nation-state groups and cybercrime gangs using this technique. Not taking action may lead to data leaks, legal fines, and supply chain break-ins, all of which damage the trustworthiness and operational capacity of the enterprise.
Conclusion
Watering hole attacks demonstrate how intrusion techniques have evolved from broad phishing campaigns to highly targeted, trust-based compromises. Protecting against these advanced attacks requires a zero-trust mindset, adaptive defences, and continuous monitoring as part of a multilayered security posture. By integrating advanced response measures, proactive threat intelligence, and detection technologies, organisations can turn this silent threat from a lurking predator into a manageable risk.
References
- https://www.fortinet.com/resources/cyberglossary/watering-hole-attack
- https://en.wikipedia.org/wiki/Watering_hole_attack
- https://www.proofpoint.com/us/threat-reference/watering-hole
- https://www.techtarget.com/searchsecurity/definition/watering-hole-attack