#FactCheck - False Claim of Hindu Sadhvi Marrying Muslim Man Debunked
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap is featured in several other photos on Acharya's Instagram account, where Acharya expressed gratitude for the support of the Muslim community. Thus, the image claimed to show a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.

Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.


Fact Check:
Upon receiving the posts, we performed a reverse image search to find any credible sources. We found a photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.

This photo was digitally altered and posted on social media to mislead. We also found several other photos featuring the man in the skullcap.

We also checked the viral image for signs of AI fabrication using a detection tool named “content@scale” AI Image Detection. This tool assessed the image as 95% AI-manipulated.

For further validation, we ran the image through another detection tool, the “isitai” image detector, which assessed it as containing 38.50% AI-generated content. Taken together, these results indicate that the image has been manipulated and does not support the claim made. Hence, the viral image is fake and misleading.

Conclusion:
The absence of any credible source and the detection of AI manipulation in the image show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question, then, is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is proceeding at a rapid pace: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These figures are frequently cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes) in arguments for regulation. Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the GDPR (General Data Protection Regulation) has been a success, with its global influence on data privacy laws setting off a domino effect in the creation of privacy regulations around the world. This precedent underscores the EU's proactive, citizen-centric approach to regulation.
Overview of the Draft EU AI Rules
This Draft General-Purpose AI Code of Practice details rules under the AI Act for providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing up of the code; the process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU’s General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation rooted in product safety and relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions help translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act places obligations on developers, deployers and users of AI, mandating transparency, risk management and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular GPAI applications include ChatGPT and other systems built on foundational models, such as Copilot from Microsoft, BERT from Google and Llama from Meta AI, all under constant development and upgradation. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden on MSMEs and startups could hinder innovation, and the voluntary nature of the Code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/

Introduction
As the sun rises on the Indian subcontinent, a nation teeters on the precipice of a democratic exercise of colossal magnitude. The Lok Sabha elections, a quinquennial event that mobilises the will of over a billion souls, are not just a testament to the robustness of India's democratic fabric but also a crucible where the veracity of information is put to the sternest of tests. In this context, the World Economic Forum's 'Global Risks Report 2024' emerges as a harbinger of a disconcerting trend: the spectre of misinformation and disinformation that threatens to distort the electoral landscape.
The report, a carefully crafted document that shares the insights of 1,490 experts from academia, government, business and civil society, paints a tableau of the global risks that loom large over the next decade. These risks, spawned by the churning cauldron of rapid technological change, economic uncertainty, a warming planet, and simmering conflict, are not just abstract threats but tangible realities that could shape the future of nations.
India’s Electoral Malaise
India, as it strides towards the general elections scheduled in the spring of 2024, finds itself in the vortex of this hailstorm. The WEF survey positions India at the zenith of vulnerability to disinformation and misinformation, a dubious distinction that underscores the challenges facing the world's largest democracy. The report depicts misinformation and disinformation as the chimaeras of false information—whether inadvertent or deliberate—that are dispersed through the arteries of media networks, skewing public opinion towards a pervasive distrust in facts and authority. This encompasses a panoply of deceptive content: fabricated, false, manipulated and imposter.
The United States, the European Union, and the United Kingdom, too, are ensnared in this web of misinformation to varying degrees. South Africa, another nation on the cusp of its own electoral journey, is ranked 22nd, a reflection of the global reach of this phenomenon. The findings, derived from a survey conducted over the autumnal weeks of September to October 2023, reveal a world grappling with the shadowy forces of untruth.
Global Scenario
The report prognosticates that as close to three billion individuals across diverse economies—Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom, and the United States—prepare to exercise their electoral rights, the rampant use of misinformation and disinformation, and the tools that propagate them, could erode the legitimacy of the governments they elect. The repercussions could be dire, ranging from violent protests and hate crimes to civil confrontation and terrorism.
Beyond the electoral arena, the fabric of reality itself is at risk of becoming increasingly polarised, seeping into the public discourse on issues as varied as public health and social justice. As the bedrock of truth is undermined, the spectre of domestic propaganda and censorship looms large, potentially empowering governments to wield control over information based on their own interpretation of 'truth.'
The report further warns that disinformation will become increasingly personalised and targeted, honing in on specific groups such as minority communities and disseminating through more opaque messaging platforms like WhatsApp or WeChat. This tailored approach to deception signifies a new frontier in the battle against misinformation.
In a world where societal polarisation and economic downturn are seen as central risks in an interconnected 'risks network,' misinformation and disinformation have ascended rapidly to the top of the threat hierarchy. The report's respondents—two-thirds of them—cite extreme weather, AI-generated misinformation and disinformation, and societal and/or political polarisation as the most pressing global risks, followed closely by the 'cost-of-living crisis,' 'cyberattacks,' and 'economic downturn.'
Current Situation
In this unprecedented year for elections, the spectre of false information looms as one of the major threats to the global populace, according to the experts surveyed for the WEF's 2024 Global Risk Report. The report offers a nuanced analysis of the degrees to which misinformation and disinformation are perceived as problems for a selection of countries over the next two years, based on a ranking of 34 economic, environmental, geopolitical, societal, and technological risks.
India, the land of ancient wisdom and modern innovation, stands at the crossroads where the risk of disinformation and misinformation is ranked highest. Out of all the risks, these twin scourges were most frequently selected as the number one risk for the country by the experts, eclipsing infectious diseases, illicit economic activity, inequality, and labor shortages. The South Asian nation's next general election, set to unfurl between April and May 2024, will be a litmus test for its 1.4 billion people.
The spectre of fake news is not a novel adversary for India. The 2019 election was rife with misinformation, with reports of political parties weaponising platforms like WhatsApp and Facebook to spread incendiary messages, stoking fears that online vitriol could spill over into real-world violence. The COVID-19 pandemic further exacerbated the issue, with misinformation once again proliferating through WhatsApp.
Other countries facing a high risk of the impacts of misinformation and disinformation include El Salvador, Saudi Arabia, Pakistan, Romania, Ireland, Czechia, the United States, Sierra Leone, France, and Finland, all of which consider the threat to be one of the top six most dangerous risks out of 34 in the coming two years. In the United Kingdom, misinformation/disinformation is ranked 11th among perceived threats.
The WEF analysts conclude that the presence of misinformation and disinformation in these electoral processes could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence, and terrorism, and a longer-term erosion of democratic processes.
Released in early January as the 19th edition of the WEF's Global Risks Report, together with its Global Risk Perception Survey, the report thus ranks India first in the world for the risk of misinformation and disinformation at a time when the country faces a general election this year.
Some governments and platforms aiming to protect free speech and civil liberties may fail to act effectively to curb falsified information and harmful content, making the definition of 'truth' increasingly contentious across societies. State and non-state actors alike may leverage false information to widen fractures in societal views, erode public confidence in political institutions, and threaten national cohesion and coherence.
Trust in specific leaders will confer trust in information, and the authority of these actors—from conspiracy theorists, including politicians, and extremist groups to influencers and business leaders—could be amplified as they become arbiters of truth.
False information could not only be used as a source of societal disruption but also of control by domestic actors in pursuit of political agendas. The erosion of political checks and balances and the growth in tools that spread and control information could amplify the efficacy of domestic disinformation over the next two years.
Global internet freedom is already in decline, and access to more comprehensive sets of information has dropped in numerous countries. The implication: Falls in press freedoms in recent years and a related lack of strong investigative media are significant vulnerabilities set to grow.
Advisory
Here are specific best practices for citizens to help prevent the spread of misinformation during electoral processes:
- Verify Information: Double-check the accuracy of information before sharing it. Use reliable sources and fact-checking websites to verify claims.
- Cross-Check Multiple Sources: Consult multiple reputable news sources to ensure that the information is consistent across different platforms.
- Be Wary of Social Media: Social media platforms are susceptible to misinformation. Be cautious about sharing or believing information solely based on social media posts.
- Check Dates and Context: Ensure that information is current and consider the context in which it is presented. Misinformation often thrives when details are taken out of context.
- Promote Media Literacy: Educate yourself and others on media literacy to discern reliable sources from unreliable ones. Be skeptical of sensational headlines and clickbait.
- Report False Information: Report instances of misinformation to the platform hosting the content and encourage others to do the same. Utilise fact-checking organisations or tools to report and debunk false information.
- Think Critically: Foster critical thinking skills within your community. Encourage people to question information and think critically before accepting or sharing it.
- Share Official Information: Share official statements and information from reputable sources, such as government election commissions, to ensure accuracy.
- Avoid Echo Chambers: Engage with diverse sources of information to avoid being in an 'echo chamber' where misinformation can thrive.
- Be Responsible in Sharing: Before sharing information, consider the potential impact it may have. Refrain from sharing unverified or sensational content that can contribute to misinformation.
- Promote Open Dialogue: Promote open discussions within your community about the significance of factual information and the dangers of misinformation.
- Stay Calm and Informed: During critical periods, such as election days, stay calm and rely on official sources for updates. Avoid spreading unverified information that can contribute to panic or confusion.
- Support Media Literacy Programmes: Advocate for media literacy programmes in schools to equip individuals with the skills needed to navigate the information landscape responsibly.
Conclusion
Preventing misinformation requires a collective effort from individuals, communities, and platforms. By adopting these best practices, citizens can play a vital role in reducing the impact of misinformation during electoral processes.
References:
- https://thewire.in/media/survey-finds-false-information-risk-highest-in-india
- https://thesouthfirst.com/pti/india-faces-highest-risk-of-disinformation-in-general-elections-world-economic-forum/

Overview:
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.

Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threat Faced by Education Institutions:
Based on insights from CyberPeace's report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this concise blog provides a comprehensive overview of the cybersecurity threats and risks faced by educational institutions, along with essential details to address these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
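One common technical countermeasure against phishing is flagging lookalike domains before a user ever reaches the counterfeit site. The sketch below is purely illustrative and not from the report: the allow-list, threshold, and example domains are assumptions, and it shows only one heuristic, string similarity, behind typosquat detection.

```python
import difflib

# Hypothetical allow-list of known-good domains; a real deployment would
# use the institution's actual domains plus curated threat feeds.
KNOWN_GOOD = ["sbi.co.in", "google.com", "cyberpeace.org"]

def lookalike_score(domain):
    """Highest similarity between `domain` and any known-good domain."""
    return max(difflib.SequenceMatcher(None, domain, good).ratio()
               for good in KNOWN_GOOD)

for candidate in ["g00gle.com", "google.com", "example.net"]:
    score = lookalike_score(candidate)
    # Near-but-not-exact matches are the suspicious ones; the 0.75
    # threshold is an illustrative assumption, not a standard value.
    verdict = "suspicious" if 0.75 <= score < 1.0 else "ok"
    print(f"{candidate}: {score:.2f} -> {verdict}")
```

Production phishing defences combine many more signals (domain age, TLS certificates, homoglyph normalisation), but the ratio test above captures the core idea of catching near-miss spellings such as "g00gle.com".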
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruption for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like customer details. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database.
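The root cause of SQLI, and its standard fix, can be seen in a few lines. The snippet below is an illustrative sketch (the `students` table and the in-memory database are hypothetical) contrasting string-concatenated SQL with a parameterised query:

```python
import sqlite3

# Hypothetical schema for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, grade TEXT)")
conn.execute("INSERT INTO students VALUES ('alice', 'A')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query's
# logic, so it returns every row instead of none.
vulnerable = conn.execute(
    "SELECT * FROM students WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a placeholder treats the payload as a literal string, which
# matches no student name.
safe = conn.execute(
    "SELECT * FROM students WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # prints: 1 0
```

The same parameterisation principle applies regardless of database engine or language: user input must reach the database as data, never as query text.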
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
Insights from Project eKawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered an extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
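The kind of tallying behind these figures can be sketched simply: given the credential pairs captured by a honeypot, count the most frequently attempted usernames and passwords. The sample below is invented for illustration; it is not the project's actual data or log format.

```python
from collections import Counter

# Invented sample of (username, password) attempts; a real sensor would
# parse these pairs out of its authentication logs.
attempts = [
    ("root", "123456"), ("root", "password"), ("admin", "admin123"),
    ("root", "123456"), ("test", "1234"), ("admin", "123456"),
]

# Tally usernames and passwords independently and rank them.
top_users = Counter(u for u, _ in attempts).most_common(3)
top_passwords = Counter(p for _, p in attempts).most_common(3)

print(top_users)      # [('root', 3), ('admin', 2), ('test', 1)]
print(top_passwords)  # '123456' leads with 3 attempts
```

At the scale of the study (217,886 attacks), the same two counters, fed from streaming log parsers, would produce the username and password rankings reported above.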

Insights from Threat Landscape Analysis
Research by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame; it is based on data available at the time, which may evolve as new information emerges. Institutions were assigned risk ratings on a scale from A (best) to F (worst), with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches than those rated A or B. Immediate action is recommended to address the identified risks.


Risk Findings :
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher than dark web leakages and phishing/domain squatting: the findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances related to the dark web, and 6,711 instances of phishing & domain squatting.

Key Indicators:
- Multiple instances of data breaches containing credentials (email/passwords) in plain text.
- Botnet activity indicating network hosts compromised by malware.
- Credentials from third-party government and non-governmental websites linked to official institutional emails.
- Details of software applications and drivers installed on compromised hosts.
- Sensitive cookie data exfiltrated from various browsers.
- IP addresses of compromised systems.
- Login credentials for different Android applications.

Below is a sample from one of the top educational institutions, providing insight into its high rates of data breaches, botnet activity, dark web activity, and phishing & domain squatting.
Risk Detection:
Risk detection covers the number of data breaches, network hygiene, dark web activity, botnet activity, cloud security, phishing & domain squatting, media monitoring, and miscellaneous risks. In the example below, the highest counts for the sampled domain are data breaches and botnet activity.

Risk Changes:

Risk by Categories:

Risk is categorized as high, medium, or low; for this institution, data breaches and botnet activity fall in the high category.

Challenges Faced by Educational Institutions
Educational institutions face a range of cyberattack risks. The challenges that lead to cyberattack incidents in educational institutions are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal or shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times taking between two to nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses like hardware replacement and cybersecurity upgrades, with recovery taking an average of 2 to 9 months.
🔒Data Security Breaches: Cyberattacks exposed sensitive data, including grades, social security numbers, and bullying reports, causing emotional, physical, and financial harm. A US study found that accidental breaches were most often caused by staff (21 of 25 cases), while intentional breaches were primarily carried out by students (27 of 52 cases), frequently to tamper with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
- Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
- Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches.
- Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
- Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
- Adopt multi-factor authentication (MFA): Enforce multi-factor authentication (MFA) for access to sensitive systems or information. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code.
- Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
- Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
- Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
- Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
- Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
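To make the MFA recommendation above concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that authenticator apps implement. It is illustrative only: production systems should rely on a vetted library, and the secret shown is a placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // 30)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# The server and the user's authenticator app derive the same code from a
# shared secret, so a stolen password alone is not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; output changes every 30 s
```

Because the code depends on both the shared secret and the current 30-second window, an attacker who phishes only the password still cannot authenticate, which is precisely the protection the advisory calls for.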
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.