Colorado’s Special Session and the Future of Algorithmic Accountability: Lessons for India
Introduction
Earlier this month, lawmakers in Colorado, a U.S. state, were summoned to a special legislative session to rewrite their newly passed Artificial Intelligence (AI) law before it even takes effect. Although the discussion taking place in Denver may seem distant, evolving regulations like this one directly address issues that India will soon encounter as we forge our own course for AI governance.
The Colorado Artificial Intelligence Act
Colorado became the first U.S. state to pass a comprehensive AI accountability law, set to come into force in 2026. It aims to protect people from bias, discrimination, and harm caused by predictive algorithms, since AI tools have been known to reproduce societal biases: sidelining women in hiring processes, penalising loan applicants from poor neighbourhoods, or wrongly denying citizens their welfare benefits. But the law met resistance from tech companies, which threatened to pull out of the state, claiming it is too broad in its current form and would stifle innovation. This brings critical questions about AI regulation to the forefront:
- Who should be responsible when AI causes harm? Developers, deployers, or both?
- How should citizens seek justice?
- How can tech companies be incentivised to develop safe technologies?
Colorado’s governor has called a special session to update the law before it takes effect.
What This Means for India
India is on its path towards framing a dedicated AI-specific law or set of directions, and discussions are underway through the IndiaAI Mission, the proposed Digital India Act, a committee set up by the Delhi High Court on deepfakes, and other measures. The dilemmas Colorado is wrestling with are relevant here as well.
- AI uptake is growing in public service delivery in India. Facial recognition systems are expanding in policing, despite accuracy and privacy concerns. Fintech apps using AI-driven credit scoring raise questions of fairness and transparency.
- Accountability is unclear. If an Indian AI-powered health app gives faulty advice, who should be liable: the global developer, the Indian startup deploying it, or the regulator that failed to set safeguards?
- India has more than 1,500 AI startups (NASSCOM), which, like Colorado’s firms, fear that onerous compliance could choke growth. But weak guardrails could undermine public trust in AI altogether.
Lessons for India
India’s Ministry of Electronics and Information Technology (MeitY) favours a light-touch approach to AI regulation and is exploring ways to frame future-proof guidelines. Lessons from other global frameworks can also guide its way.
- Colorado’s case shows us the necessity of incorporating feedback loops in the policy-making process. India should utilise regulatory sandboxes and open, transparent consultation processes before locking in rigid rules.
- It will also need to explore proportionate obligations, lighter for low-risk applications and stricter for high-risk use cases such as policing, healthcare, or welfare delivery.
- Europe’s AI Act is heavy on compliance, the U.S. federal government leans toward deregulation, and Colorado is somewhere in between. India has the chance to create a middle path, grounded in our democratic and developmental context.
Conclusion
As AI becomes increasingly embedded in hiring, banking, education, and welfare, opportunities for ordinary Indians are being redefined. States like Tamil Nadu and Telangana have taken early steps to frame AI policies, and lessons will emerge from their initiatives in addressing AI governance. Policy and regulation will always be contested, but contestation is part of the process.
The Colorado debate shows us how participative law-making, with room for debate, revision, and iteration, is not a weakness but a necessity. For India’s emerging AI governance landscape, the challenge will be to embrace this process while ensuring that citizen rights and inclusion are balanced well with industry concerns. CyberPeace advocates for responsible AI regulation that balances innovation and accountability.
References
- https://www.cbsnews.com/colorado/news/colorado-lawmakers-look-repeal-replace-controversial-artificial-intelligence-law/
- https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
- https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
- https://the-captable.com/2024/12/india-ai-regulation-light-touch/
- https://indiaai.gov.in/article/tamilnadu-s-ai-policy-six-step-tamdef-guidance-framework-and-deepmax-scorecard
Overview:
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.

Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threats Faced by Educational Institutions:
Based on insights from CyberPeace's report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this concise blog provides a comprehensive overview of the cybersecurity threats and risks faced by educational institutions, along with essential details to address these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruption for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like customer details. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database.
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
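Of the threats above, SQL injection is the most directly demonstrable in code. The sketch below is illustrative only (the table, data, and payload are hypothetical); it contrasts an injectable string-built query with a parameterized one, using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical student-records table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT)")
conn.execute("INSERT INTO students VALUES (1, 'asha')")

user_input = "nosuchname' OR '1'='1"  # classic injection payload

# UNSAFE: string concatenation lets the payload rewrite the query,
# so the always-true OR clause returns every row.
unsafe = conn.execute(
    "SELECT * FROM students WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the payload as a literal value,
# so it matches no rows.
safe = conn.execute(
    "SELECT * FROM students WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe), len(safe))  # 1 0
```

Parameterized queries keep attacker-controlled input in the data plane rather than the query plane, which is why they are the standard defence against SQLI.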
Insights from Project eKawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered an extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
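Counts like those above are typically produced by aggregating honeypot or SSH authentication logs. A minimal sketch of that aggregation follows; the sshd-style log lines, field layout, and addresses are assumptions for illustration, not data from the study:

```python
import re
from collections import Counter

# Hypothetical sshd-style failed-login records (illustrative only).
log_lines = [
    "Failed password for root from 203.0.113.5 port 52114 ssh2",
    "Failed password for invalid user admin from 198.51.100.7 port 41022 ssh2",
    "Failed password for root from 203.0.113.5 port 52116 ssh2",
    "Failed password for invalid user test from 192.0.2.9 port 33010 ssh2",
]

# Capture the targeted username and the source IP from each line.
pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

users = Counter()
sources = Counter()
for line in log_lines:
    m = pattern.search(line)
    if m:
        users[m.group(1)] += 1
        sources[m.group(2)] += 1

print(users.most_common(3))   # [('root', 2), ('admin', 1), ('test', 1)]
print(sources.most_common(1))  # [('203.0.113.5', 2)]
```

At honeypot scale the same tallying over months of traffic yields rankings like "root with over 200,000 attempts", and repeated sources become candidates for blocklisting.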

Insights from Threat Landscape Analysis
Research done by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame, and is based on data available at the time, which may evolve with new information. Institutions were assigned risk ratings on a descending scale from A to F, with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches than those rated A or B. Immediate action is recommended to address the identified risks.


Risk Findings :
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher than dark web leakages and phishing/domain squatting: the findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances related to the dark web, and 6,711 instances of phishing and domain squatting.

Key Indicators:
- Multiple instances of data breaches containing credentials (email/passwords) in plain text.
- Botnet activity indicating network hosts compromised by malware.
- Credentials from third-party government and non-governmental websites linked to official institutional emails.
- Details of software applications and drivers installed on compromised hosts.
- Sensitive cookie data exfiltrated from various browsers.
- IP addresses of compromised systems.
- Login credentials for different Android applications.

Below is a sample from one of the top educational institutions, providing insight into the high rates of data breaches, botnet activity, dark web activity, and phishing & domain squatting.
Risk Detection:
This shows the number of data breaches, network hygiene issues, dark web activities, botnet activities, cloud security issues, phishing & domain squatting, media monitoring findings, and miscellaneous risks. In the example below, the highest counts for the sampled domain are data breaches and botnet activities.

Risk Changes:

Risk by Categories:

Risk is categorised as high, medium, or low; here, the risk is at a high level for data breaches and botnet activities.

Challenges Faced by Educational Institutions
Educational institutions face significant cyberattack risks. The key challenges leading to cyberattack incidents in educational institutions are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal and shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times taking between two to nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses like hardware replacement and cybersecurity upgrades, with recovery taking an average of 2 to 9 months.
🔒Data Security Breaches: Cyberattacks exposed sensitive data, including grades, social security numbers, and bullying reports, causing emotional, physical, and financial harm. A US study found that breaches can be accidental or intentional: staff were responsible for most accidental breaches (21 out of 25 cases), while intentional breaches, primarily by students (27 out of 52 cases), frequently involved tampering with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
- Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
- Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches.
- Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
- Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
- Adopt multi-factor authentication (MFA): Enforce Multi-Factor Authentication (MFA) for access to sensitive systems or information to strengthen security. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code.
- Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
- Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
- Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
- Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
- Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
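To make the MFA recommendation above concrete, here is a minimal sketch of time-based one-time password (TOTP) verification as standardised in RFC 6238, using only the Python standard library. This is for illustration; a production deployment should rely on a vetted authentication library rather than hand-rolled crypto code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# T = 59 seconds, 8 digits, SHA-1.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

A server stores the shared secret per user and accepts the code for the current 30-second window (often allowing one window of clock drift) as the second factor alongside the password.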
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.
Introduction
India's National Commission for Protection of Child Rights (NCPCR) is set to approach the Ministry of Electronics and Information Technology (MeitY) to recommend mandating a KYC-based system for verifying children's age under the Digital Personal Data Protection (DPDP) Act. The decision to send recommendations to MeitY was taken by NCPCR in a closed-door meeting held on August 13 with social media entities, where NCPCR emphasised proposing a KYC-based age verification mechanism. In this background, Section 9 of the DPDP Act, 2023 defines a child as a person below the age of 18 and mandates that a child's age be verified and parental consent obtained before their personal data is processed.
Requirement of Verifiable Consent Under Section 9 of DPDP Act
Regarding the processing of children's personal data, Section 9 of the DPDP Act, 2023, provides that for children below 18 years of age, consent from parents/legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Additionally, behavioural monitoring or targeted advertising directed at children is prohibited.
Ongoing debate on Method to obtain Verifiable Consent
Section 9 of the DPDP Act gives parents or lawful guardians more control over their children's data and privacy, and it empowers them to make decisions about how to manage their children's online activities/permissions. However, obtaining such verifiable consent from the parent or legal guardian presents a quandary. It was expected that the upcoming 'DPDP rules,' which have yet to be notified by the Central Government, would shed light on the procedure of obtaining such verifiable consent from a parent or lawful guardian.
However, in the meeting held on 18th July 2024 between MeitY and social media companies to discuss the upcoming Digital Personal Data Protection Rules (DPDP Rules), MeitY stated that it may not intend to prescribe a ‘specific mechanism’ for Data Fiduciaries to verify parental consent for minors using digital services. MeitY instead emphasised the obligations placed on Data Fiduciaries under Section 8(4) of the DPDP Act to implement “appropriate technical and organisational measures” to ensure effective observance of the provisions of the Act.
In a recent update, MeitY held a review meeting on DPDP rules, where they focused on a method for determining children's ages. It was reported that the ministry is making a few more revisions before releasing the guidelines for public input.
CyberPeace Policy Outlook
CyberPeace in its policy recommendations paper published last month, (available here) also advised obtaining verifiable parental consent through methods such as Government Issued ID, integration of parental consent at ‘entry points’ like app stores, obtaining consent through consent forms, or drawing attention from foreign laws such as California Privacy Law, COPPA, and developing child-friendly SIMs for enhanced child privacy.
CyberPeace in its policy paper also emphasised that when deciding the method to obtain verifiable consent, the respective platforms need to be aligned with the fact that verifiable age verification must be done without compromising user privacy. Balancing user privacy is a question of both technological capabilities and ethical considerations.
The DPDP Act is a brand-new framework for protecting digital personal data that places certain obligations on Data Fiduciaries and grants certain rights to Data Principals. The upcoming ‘DPDP Rules’, which are expected to be notified soon, will define the detailed procedure for implementing the provisions of the Act; MeitY is refining the rules before releasing them for public consultation. NCPCR's approach is aimed at ensuring child safety in the digital era. We hope that MeitY comes up with a sound mechanism for obtaining verifiable consent from parents/lawful guardians after giving due consideration to the recommendations put forth by various stakeholders, expert organisations, and concerned authorities such as NCPCR.
References
- https://www.moneycontrol.com/technology/dpdp-rules-ncpcr-to-recommend-meity-to-bring-in-kyc-based-age-verification-for-children-article-12801563.html
- https://pune.news/government/ncpcr-pushes-for-kyc-based-age-verification-in-digital-data-protection-a-new-era-for-child-safety-215989/#:~:text=During%20this%20meeting%2C%20NCPCR%20issued,consent%20before%20processing%20their%20data
- https://www.hindustantimes.com/india-news/ncpcr-likely-to-seek-clause-for-parents-consent-under-data-protection-rules-101724180521788.html
- https://www.drishtiias.com/daily-updates/daily-news-analysis/dpdp-act-2023-and-the-isssue-of-parental-consent
Introduction
The rise of misinformation, disinformation, and synthetic media content on the internet and social media platforms has raised serious concerns, emphasizing the need for responsible use of social media to maintain information accuracy and combat misinformation incidents. With online misinformation rampant all over the world, the World Economic Forum's 2024 Global Risks Report notably ranks India among the highest in terms of risk of mis/disinformation.
The widespread online misinformation on social media platforms necessitates a joint effort between tech/social media platforms and the government to counter such incidents. The Indian government is actively seeking to collaborate with tech/social media platforms to foster a safe and trustworthy digital environment and to also ensure compliance with intermediary rules and regulations. The Ministry of Information and Broadcasting has used ‘extraordinary powers’ to block certain YouTube channels, X (Twitter) & Facebook accounts, allegedly used to spread harmful misinformation. The government has issued advisories regulating deepfake and misinformation, and social media platforms initiated efforts to implement algorithmic and technical improvements to counter misinformation and secure the information landscape.
Efforts by the Government and Social Media Platforms to Combat Misinformation
- Advisory regulating AI, deepfake and misinformation
The Ministry of Electronics and Information Technology (MeitY) issued a modified advisory on 15th March 2024, in supersession of the advisory issued on 1st March 2024. The latest advisory specifies that platforms should inform all users about the consequences of dealing with unlawful information on their platforms, including disabling access, removing non-compliant information, suspending or terminating the user's account, and punishment under applicable law. The advisory necessitates identifying synthetically created content across various formats and instructs platforms to employ labels, unique identifiers, or metadata to ensure transparency.
- Rules related to content regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Updated as on 6.4.2023) have been enacted under the IT Act, 2000. These rules assign specific obligations on intermediaries as to what kind of information is to be hosted, displayed, uploaded, published, transmitted, stored or shared. The rules also specify provisions to establish a grievance redressal mechanism by platforms and remove unlawful content within stipulated time frames.
- Counteracting misinformation during Indian elections 2024
To counter misinformation during the Indian elections, the government and social media platforms made concerted efforts to protect electoral integrity from threats of mis/disinformation. The Election Commission of India (ECI) launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the 2024 general elections. The ECI also collaborated with Google to empower citizens by making critical voting information easy to find on Google Search and YouTube. In this way, Google supported the 2024 Indian General Election by providing high-quality information to voters and helping people navigate AI-generated content: it connected voters to helpful information through product features that surface data from trusted institutions across its portfolio, and YouTube showcased election information panels featuring content from authoritative sources.
- YouTube and X (Twitter) new ‘Notes Feature’
- Notes Feature on YouTube: YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube clarified that the pilot will be available on mobiles in the U.S. and in the English language, to start with. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
- Community Notes feature on X: Community Notes on X aims to enhance the understanding of potentially misleading posts by allowing users to add context to them. Contributors can leave notes on any post, and if enough people rate the note as helpful, it will be publicly displayed. The algorithm is open source and publicly available on GitHub, allowing anyone to audit, analyze, or suggest improvements. However, Community Notes do not represent X's viewpoint and cannot be edited or modified by their teams. A post with a Community Note will not be labelled, removed, or addressed by X unless it violates the X Rules, Terms of Service, or Privacy Policy. Failure to abide by these rules can result in removal from accessing Community Notes and/or other remediations. Users can report notes that do not comply with the rules by selecting the menu on a note and selecting ‘Report’ or using the provided form.
CyberPeace Policy Recommendations
Countering widespread online misinformation on social media platforms requires a multipronged approach involving joint efforts from different stakeholders. Platforms should invest in state-of-the-art algorithms and technology to detect and flag suspected misleading information, establish trustworthy fact-checking protocols, and collaborate with expert fact-checking groups. The government should encourage campaigns, seminars, and other educational materials to increase public awareness and digital literacy about the risks and impacts of mis/disinformation. Netizens should be empowered with the skills to distinguish factual from misleading information so they can navigate the digital information age successfully. Joint efforts by government authorities, tech companies, and expert cybersecurity organisations are vital to promoting a secure and honest online information landscape and countering the spread of mis/disinformation. Platforms must encourage users to maintain appropriate online conduct and abide by the platforms' terms & conditions and community guidelines. Encouraging a culture of truth and integrity on the internet, honouring differing points of view, and confirming facts all help to create a more reliable and information-resilient environment.
References:
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://help.x.com/en/using-x/community-notes