From Clicks to Consequences: How the Dharmendra Death Hoax Exposes the Dangers of Misinformation
Rahul Sahi,
Intern - Policy & Advocacy, CyberPeace
PUBLISHED ON
Nov 15, 2025
Introduction
The misinformation crisis has evolved from an abstract risk into a clear and measurable danger to individuals, families, institutions and the wider information ecosystem. The recent death hoax involving the veteran actor Dharmendra is a stark illustration of how a falsehood can take hold, spread and cause damage before correction mechanisms have a chance to operate. In the first week of November 2025, a wave of posts from social media accounts and even some online news outlets claimed that Dharmendra had died at the age of 89. The news travelled like wildfire, causing confusion, grief and emotional distress among fans across the world before the family issued a clear and conclusive denial. This case is not a one-off event. It is part of a recurring cycle of misinformation that moves from one unverified claim to the next, driven by emotional resonance, platform virality and the pressure for rapid online engagement.
How One Wrong Post Can Create Worry and Fear
This kind of false news spreads fast on social media because people share emotional posts without checking the source, and automated accounts often repeat the same claim, which makes it look true. Such hoaxes create fear, sadness and stress for fans, and they place sudden pressure on the family, who must deal with public worry at a time when they need calm and privacy. The message shared by Hema Malini, the actor's wife, shows how hurtful and careless misinformation can be, and it reminds everyone that even one false post can create real emotional damage for many people.
Why This Hoax Spread So Quickly
Sensationalism Drives Engagement: Rumours about the death of a public figure, particularly someone so widely loved, trigger an immediate emotional response. Online audiences tend to take such news at face value and share it, most of the time without checking its authenticity, which in turn fuels viral spread.
Rapid Amplification on Social Media: Social media networks are built for swift sharing. Long before official sources could confirm or dismiss the claim, posts, reels and messages had already ripped through the networks.
Digital Users Not Verifying Sources: A large part of the audience relies on screenshots, forwards and unverified posts to keep up with the news. This creates fertile ground for hoaxes to spread.
Weak Verification Protocols: Despite efforts to educate the public about misinformation, many news outlets still prioritise speed of reporting over accuracy, especially for attention-grabbing topics such as the health or death of celebrities.
Algorithmic Amplification Risks: Engagement is largely driven by algorithms that surface posts which evoke strong emotions. False or sensational claims are therefore pushed in front of users just as prominently as corrective updates, leaving the public misled. In the absence of algorithmic safeguards, misinformation continues to spread and gain strength.
Best Practices For Users:
Make sure to verify before sharing, especially if the topic is about health or death.
Get updates by following official accounts rather than relying on viral forwards.
Be aware of the emotional manipulation tactics used in misleading information.
Conclusion
The rumour surrounding Dharmendra's death is yet another example that misinformation, even when promptly corrected, can still inflict distress, erode trust and damage reputations. It also underscores the urgent need for stronger information governance, responsible digital journalism and platform intervention mechanisms. This incident, from clicks to consequences, points to a basic truth: misinformation in the digital age spreads faster than facts, and the responsibility for stopping it rests with every stakeholder, from platforms and the media to users themselves.
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.
Image: Recent cyberattack on Eindhoven University
Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threats Faced by Educational Institutions:
Based on insights from CyberPeace's report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this blog provides a concise overview of the cybersecurity threats and risks faced by educational institutions, along with essential details to address these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruptions for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like customer details. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database (a minimal illustrative sketch follows this list).
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
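To make the SQL injection risk above more concrete (see the note in that item), here is a minimal sketch in Python using the standard-library sqlite3 module and a hypothetical student-records table; it contrasts a vulnerable string-built query with a parameterized one. It is an illustration of the technique only, not a depiction of any real institutional system.

```python
import sqlite3

# Hypothetical student-records table, used purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (roll_no TEXT, name TEXT, grade TEXT)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [("101", "Asha", "A"), ("102", "Ravi", "B")])

def lookup_vulnerable(roll_no: str):
    # BAD: user input is concatenated directly into the SQL string.
    # Input such as "' OR '1'='1" returns every row instead of one.
    query = f"SELECT name, grade FROM students WHERE roll_no = '{roll_no}'"
    return conn.execute(query).fetchall()

def lookup_safe(roll_no: str):
    # GOOD: a parameterized query treats the input as data, not as SQL.
    return conn.execute(
        "SELECT name, grade FROM students WHERE roll_no = ?", (roll_no,)
    ).fetchall()

malicious_input = "' OR '1'='1"
print(lookup_vulnerable(malicious_input))  # leaks all records
print(lookup_safe(malicious_input))        # returns no rows
```

Parameterized queries (or an ORM that generates them), combined with input validation and least-privilege database accounts, are the standard defence against this class of attack.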
Insights from Project eKawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered an extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
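As a rough illustration of how such brute-force statistics can be compiled, the short Python sketch below counts the most frequently attempted usernames and source IPs from failed-login records. The sshd-style log lines here are hypothetical placeholders; the actual e-Kawach sensors use their own telemetry formats.

```python
import re
from collections import Counter

# Hypothetical, simplified sshd-style log lines (placeholders for real telemetry).
log_lines = [
    "Failed password for root from 203.0.113.5 port 52144 ssh2",
    "Failed password for invalid user admin from 198.51.100.7 port 40022 ssh2",
    "Failed password for root from 203.0.113.5 port 52188 ssh2",
    "Failed password for invalid user test from 192.0.2.9 port 33210 ssh2",
]

pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

username_counts = Counter()
source_counts = Counter()
for line in log_lines:
    match = pattern.search(line)
    if match:
        username, source_ip = match.groups()
        username_counts[username] += 1
        source_counts[source_ip] += 1

print("Most targeted usernames:", username_counts.most_common(3))
print("Most active source IPs:", source_counts.most_common(3))
```

The same counting approach, applied at sensor scale, is what surfaces default accounts such as "root" and "admin" as the most attacked, and it is why disabling password login for such accounts, or enforcing key-based authentication and lockout policies, pays off quickly.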
Insights from Threat Landscape Analysis
Research done by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame, based on data available at the time, which may evolve with new information. Institutions were assigned risk ratings that descend from A to F, with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches compared to those rated A or B. Immediate action is recommended to address the identified risks.
Risk Findings:
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher than dark web leakages and phishing/domain squatting. The findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances of dark web exposure, and 6,711 instances of phishing and domain squatting.
Key Indicators:
Multiple instances of data breaches containing credentials (email/passwords) in plain text.
Botnet activity indicating network hosts compromised by malware.
Credentials from third-party government and non-governmental websites linked to official institutional emails.
Details of software applications and drivers installed on compromised hosts.
Sensitive cookie data exfiltrated from various browsers.
IP addresses of compromised systems.
Login credentials for different Android applications.
Below is a sample detail from one of the top educational institutions, providing insight into the high rate of data breaches, botnet activity, dark web activity, and phishing & domain squatting.
Risk Detection:
It indicates the number of data breaches, network hygiene issues, dark web activities, botnet activities, cloud security issues, phishing & domain squatting, media monitoring flags, and miscellaneous risks. For the sampled domain, data breaches and botnet activities show the highest counts.
Risk Changes:
Risk by Categories:
Risk is categorized as high, medium or low; for this sample, data breaches and botnet activities fall in the high-risk category.
Challenges Faced by Educational Institutions
Educational institutions face a range of cyberattack risks. The challenges that lead to cyberattack incidents in educational institutions are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal and shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times taking between two to nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses such as hardware replacement and cybersecurity upgrades, with recovery typically taking 2 to 9 months.
🔒Data Security Breaches: Cyberattacks on schools exposed sensitive personal data, including grades, social security numbers, and bullying reports, causing emotional, physical, and financial harm. These breaches can be intentional or accidental: a US study found staff responsible for most accidental breaches (21 out of 25 cases), while intentional breaches, primarily by students (27 out of 52 cases), frequently involved tampering with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches.
Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
Adopt multi-factor authentication (MFA): Enforce multi-factor authentication (MFA) for accessing sensitive systems or information to strengthen security. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code (a minimal sketch of one-time codes follows this list).
Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
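To illustrate the one-time-code element of the MFA recommendation above (see the note in that item), here is a minimal sketch of RFC 6238 time-based one-time passwords using only Python's standard library. The shared secret is a hypothetical example value; real deployments should rely on a vetted authentication service or a well-maintained library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(totp(secret_b32), submitted_code)

# Hypothetical secret shared with the user's authenticator app at enrolment.
SECRET = "JBSWY3DPEHPK3PXP"
print("Current code:", totp(SECRET))
print("Verified:", verify(SECRET, totp(SECRET)))
```

In practice, the same secret is provisioned to an authenticator app at enrolment, and the server recomputes the expected code (usually allowing one time step of clock drift) whenever a user logs in alongside their password.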
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.
In September 2023, the World Economic Forum reported that AI-generated misinformation and disinformation were rated the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking. It is spurring an explosion of web content mimicking factual articles that instead disseminate false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of Generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information that is created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has been rapid, it has yet to achieve genuine acceptance or fulfil its positive potential because of widespread, and often justified, cynicism about it. Public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulatory requirements.
Regulatory sandboxes have been in use for many industries and the most recent example is their use in sectors like fintech, such as the UK’s Financial Conduct Authority sandbox. These models have been known to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Looking at the success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
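To illustrate the kind of evaluation a sandbox trial might run, the sketch below uses an entirely hypothetical, toy labelled dataset and a deliberately naive keyword-based detector to compute precision and recall for a misinformation-flagging tool before it is allowed wider deployment. It is a sketch of the evaluation workflow only, not a depiction of any real platform's moderation system.

```python
# Toy sandbox-style evaluation of a (deliberately naive) misinformation detector.
# The dataset, keyword markers, and thresholds are hypothetical placeholders.

labelled_posts = [
    ("Veteran actor passes away, sources say, share before it is deleted!", True),
    ("Official statement: the actor is recovering and is in stable condition.", False),
    ("BREAKING: banks to seize all savings accounts tomorrow, forward to everyone!", True),
    ("The election commission published the certified results on its website.", False),
]

SUSPICIOUS_MARKERS = ["share before", "forward to everyone", "breaking:", "sources say"]

def flag(post: str) -> bool:
    """Flag a post as suspected misinformation if it contains any marker phrase."""
    text = post.lower()
    return any(marker in text for marker in SUSPICIOUS_MARKERS)

true_pos = sum(1 for text, label in labelled_posts if flag(text) and label)
false_pos = sum(1 for text, label in labelled_posts if flag(text) and not label)
false_neg = sum(1 for text, label in labelled_posts if not flag(text) and label)

precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0

print(f"precision={precision:.2f} recall={recall:.2f}")
# A regulator could require minimum precision/recall thresholds before the tool
# graduates from the sandbox to wide-scale deployment.
```

In a real sandbox, the detector would be a production-grade model, the labelled data would be curated with the regulator, and the agreed metrics would feed directly into the decision on whether the tool may exit the sandbox.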
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot solutions that can help regulate the misinformation that AI technologies create. Some policy recommendations are as follows:
Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, helping ensure consistency in tackling AI-driven misinformation.
Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
Awareness campaigns can educate the public about the risks of AI-driven misinformation and the role of regulatory sandboxes, which also helps manage public expectations.
Sandbox frameworks should be reviewed and updated regularly to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
In the face of escalating cybercrime in India, criminals are adopting increasingly inventive methods to deceive victims. Imagine opening your phone to a message from a stranger with a friendly introduction: a beginning that appears harmless but marks the start of a financial nightmare. This is the "pig butchering" scam, an increasingly sophisticated form of deception that is becoming ever more widespread. Unlike many other scams, this one plays a long game, spinning a web of trust before it strikes. It is a modern-day financial thriller playing out in the real world, with real victims. The scam involves building trust through fake profiles and manipulating victims emotionally to extort money. The scale of such scams has raised concerns, emphasising the need for awareness and vigilance in the face of evolving cyber threats.
How Does the 'Pig Butchering' Scam Work?
At its core, the scam starts innocuously, often with a stranger reaching out via text, social media, or apps like WhatsApp or WeChat. The scammer, hiding behind a well-crafted and realistic online persona, seeks to forge a connection. This could be under the pretence of friendship or romance, employing fake photos and stories to seem authentic. Gradually, the scammer builds a rapport, engaging in personal and often non-financial conversations. They may portray themselves as a widow, single parent, or even a military member to evoke empathy and trust. Over time, this connection pivots to investment opportunities, with the scammer presenting lucrative tips or suggestions in stocks or cryptocurrencies. Initially, modest investments are encouraged, and falsified returns are shown to lure in larger sums. Often, the scammer claims affiliation with a profitable financial institution or success in cryptocurrency trading. They direct victims to specific, usually fraudulent, trading platforms under their control. The scam reaches its peak when significant investments are made, only for the scammer to manipulate the situation, block access to the trading platform, or vanish, leaving the victim with substantial losses.
Real-Life Examples and Global Reach
These scams are not confined to one region. In India, for instance, scammers use emotional manipulation, often starting with a WhatsApp message from an unknown, attractive individual. They pose as professionals offering part-time jobs, leading victims through tasks that escalate in investment and complexity. These usually culminate in cryptocurrency investments, with victims unable to withdraw their funds, the money often traced to accounts in Dubai.
In the West, several cases highlight the scam's emotional and financial toll: A Michigan woman was lured by an online boyfriend claiming to make money from gold trading. She invested through a fake brokerage, losing money while being emotionally entangled. A Canadian man named Sajid Ikram lost nearly $400,000 in a similar scam, initially misled by a small successful withdrawal. In California, a man lost $440,000, succumbing to pressure to invest more, including retirement savings and borrowed money. A Maryland victim faced continuous demands from scammers, losing almost $1.4 million in hopes of recovering previous losses. A notable case involved US authorities seizing about $9 million in cryptocurrency linked to a global pig butchering scam, showcasing its extensive reach.
Safeguarding Against Such Scams
Vigilance is crucial to prevent falling victim to these scams. Be skeptical of unsolicited contacts and wary of investment advice from strangers. Conduct thorough research before any financial engagement, particularly on unfamiliar platforms. The Indian Cyber Crime Coordination Centre warns of red flags such as sudden large virtual currency transactions, interest in high-return investments mentioned by new online contacts, and atypical customer behaviour.
Victims should report incidents to the relevant Indian and foreign authorities, including the Securities and Exchange Commission in the United States. Financial institutions are advised to report suspicious activities related to these scams. In essence, the pig butchering scam is a cunning blend of emotional manipulation and financial fraud; staying informed and cautious is key to avoiding these sophisticated traps.
Conclusion
Pig butchering scams are one of many emerging breeds of cyber fraud that have become a serious concern for cybersecurity organisations. It is imperative for netizens to stay vigilant and well informed about the dynamics of cyberspace and emerging cybercrimes.