# FactCheck: RBI's Alleged Guidelines on Ink Colour for Cheque Writing
Executive Summary:
A viral message is circulating claiming the Reserve Bank of India (RBI) has banned the use of black ink for writing cheques. This information is incorrect. The RBI has not issued any such directive, and cheques written in black ink remain valid and acceptable.

Claim:
The Reserve Bank of India (RBI) has issued new guidelines prohibiting the use of black ink for writing cheques. As per the claimed directive, cheques must now be written exclusively in blue or green ink.

Fact Check:
Upon thorough verification, it has been confirmed that the claim regarding the Reserve Bank of India (RBI) issuing a directive banning the use of black ink for writing cheques is entirely false. No such notification, guideline, or instruction has been released by the RBI in this regard. Cheques written in black ink remain valid, and the public is advised to disregard such unverified messages and rely only on official communications for accurate information.
As stated by the Press Information Bureau (PIB), this claim is false. The Reserve Bank of India has not prescribed specific ink colours to be used for writing cheques. The only mention of ink colour appears in point number 8 of the RBI's guidance, which discusses the care customers should take while writing cheques.


Conclusion:
The claim that the Reserve Bank of India has banned the use of black ink for writing cheques is completely false. No such directive, rule, or guideline has been issued by the RBI. Cheques written in black ink are valid and acceptable. The RBI has not prescribed any specific ink color for writing cheques, and the public is advised to disregard unverified messages. While general precautions for filling out cheques are mentioned in RBI advisories, there is no restriction on the color of the ink. Always refer to official sources for accurate information.
- Claim: The RBI has banned the use of black ink for writing cheques.
- Claimed On: Social Media
- Fact Check: False and Misleading

In the vast, interconnected cosmos of the internet, where knowledge and connectivity are celebrated as the twin suns of enlightenment, there lurk shadows of a more sinister nature. Here, in these darker corners, the innocence of childhood is not only exploited but also scarred, indelibly and forever. The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have surged to alarming levels globally, casting a long, ominous shadow over the digital landscape.
In response to this pressing issue, the National Human Rights Commission (NHRC) has unfurled a comprehensive four-part advisory, a beacon of hope aimed at combating CSAM and safeguarding the rights of children in this digital age. This advisory dated 27/10/23 is not merely a reaction to the rising tide of CSAM, but a testament to the imperative need for constant vigilance in the realm of cyber peace.
The statistics paint a sobering picture. In 2021, more than 1,500 instances of publishing, storing, and transmitting CSAM were reported, shedding a harsh light on the scale of the problem. Even more alarming is the upward trend in cases reported in subsequent years. By 2023, a staggering 450,207 cases of CSAM had already been reported, marking a significant increase from the 204,056 and 163,633 cases reported in 2022 and 2021, respectively.
Key Aspects of the Advisory
The NHRC's advisory commences with a fundamental recommendation: a redefinition of terminology. It suggests replacing the term 'Child Pornography' with 'Child Sexual Abuse Material' (CSAM). This shift in language is not merely semantic; it underscores the gravity of the issue, emphasizing that this is not about pornography but child abuse.
Moreover, the advisory calls for 'sexually explicit' to be defined under Section 67B of the IT Act, 2000. This step is crucial for ensuring the prompt identification and removal of online CSAM: with a clear definition in place, law enforcement can act swiftly to remove such content from the internet.
The digital world knows no borders, and CSAM can easily cross jurisdictional lines. NHRC recognizes this challenge and proposes that laws be harmonized across jurisdictions through bilateral agreements. Moreover, it recommends pushing for the adoption of a UN draft Convention on 'Countering the Use of Information and Communications Technologies for Criminal Purposes' at the General Assembly.
One of the critical aspects of the advisory is the strengthening of law enforcement. NHRC advocates for the creation of Specialized State Police Units in every state and union territory to handle CSAM-related cases. The central government is expected to provide support, including grants, to set up and equip these units.
The NHRC further recommends establishing a Specialized Central Police Unit under the government of India's jurisdiction. This unit will focus on identifying and apprehending CSAM offenders and maintaining a repository of such content. Its role is not limited to law enforcement; it is expected to cooperate with investigative agencies, analyze patterns, and initiate the process for content takedown. This coordinated approach is designed to combat the problem effectively, both on the dark web and open web.
The role of internet intermediaries and social media platforms in controlling CSAM is undeniable. The NHRC advisory emphasizes that intermediaries must deploy technology, such as content moderation algorithms, to proactively detect and remove CSAM from their platforms. This places the onus on the platforms to be proactive in policing their content and ensuring the safety of their users.
New Developments
Platforms using end-to-end encryption services may be required to create additional protocols for monitoring the circulation of CSAM. Failure to do so may invite the withdrawal of the 'safe harbour' clause under Section 79 of the IT Act, 2000. This measure ensures that platforms using encryption technology are not inadvertently providing safe havens for those engaged in illegal activities.
NHRC's advisory extends beyond legal and law enforcement measures; it emphasizes the importance of awareness and sensitization at various levels. Schools, colleges, and institutions are called upon to educate students, parents, and teachers about the modus operandi of online child sexual abusers, the vulnerabilities of children on the internet, and the early signs of online child abuse.
To further enhance awareness, a cyber curriculum is proposed to be integrated into the education system. This curriculum will not only boost digital literacy but also educate students about relevant child care legislation, policies, and the legal consequences of violating them.
NHRC recognizes that survivors of CSAM need more than legal measures and prevention strategies. Survivors are recommended to receive support services and opportunities for rehabilitation through various means. Partnerships with civil society and other stakeholders play a vital role in this aspect. Moreover, psycho-social care centers are proposed to be established in every district to facilitate need-based support services and organization of stigma eradication programs.
NHRC's advisory is a resounding call to action, acknowledging the critical importance of protecting children from the perils of CSAM. By addressing legal gaps, strengthening law enforcement, regulating online platforms, and promoting awareness and support, the NHRC aims to create a safer digital environment for children.
Conclusion
In a world where the internet plays an increasingly central role in our lives, these recommendations are not just proactive but imperative. They underscore the collective responsibility of governments, law enforcement agencies, intermediaries, and society as a whole in safeguarding the rights and well-being of children in the digital age.
NHRC's advisory is a pivotal guide to a more secure and child-friendly digital world. By addressing the rising tide of CSAM and emphasizing the need for constant vigilance, NHRC reaffirms the critical role of organizations, governments, and individuals in ensuring cyber peace and child protection in the digital age. The active contribution of cyber resilience organisations like the Cyber Peace Foundation amplifies this collective action in forging a secure digital space, highlighting the pivotal role played by think tanks in ensuring cyber peace and resilience.
References:
- https://www.hindustantimes.com/india-news/nhrc-issues-advisory-regarding-child-sexual-abuse-material-on-internet-101698473197792.html
- https://ssrana.in/articles/nhrcs-advisory-proliferation-of-child-sexual-abuse-material-csam/
- https://theprint.in/india/specialised-central-police-unit-use-of-technology-to-proactively-detect-csam-nhrc-advisory/1822223/

Introduction
Mr Rajeev Chandrasekhar, MoS, Ministry of Electronics and Information Technology, held a stakeholder consultation on the Digital India Bill on 09 March 2023. This bill will be the successor to the Information Technology Act, 2000, and will provide the set of regulations and laws that govern cyberspace in the times to come. The consultation was held in Bangalore and was the first of many at which the Digital India Bill is to be discussed. These public stakeholder consultations will provide direct public feedback to the ministry and help create a safe and secure ecosystem of Indian cyber laws.
What is the Digital India Act?
Cyberspace has evolved faster than any other industry, and its growth cannot be presumed to be stagnant, as new technologies and gadgets are being invented all across the globe. The ease created by technology has changed how we live and function. However, bad actors often use these fruits of technology to wreak havoc upon the nation’s cyberspace. The use of technology is always governed by usage and safeguard policies and laws, and as technology grows exponentially, it is pertinent that we have laws in congruence with today’s time and technology. This is what the Digital India Act addresses, as it will be the legislation governing Indian cyberspace in the times to come. Such a law was the need of the hour to keep the judiciary, legislature and law enforcement agencies ahead of the curve when it comes to cyber crimes and laws.
What is the Digital India Bill’s primary goal?
The Digital India Bill’s goal is to guarantee an institutional structure for accountability and an internet in India that is accessible and unhindered by user harm or criminal activity. The law will apply to new technologies, algorithmic social media platforms, artificial intelligence, user risks, the diversity of the internet, and the regulation of intermediaries.
Why is the Digital India Bill necessary?
The number of internet users in the country currently exceeds 760 million and is expected to reach 1.2 billion in the coming years. Although the internet is useful and promotes connectivity, it exposes users to a number of harms. It is therefore crucial to enact legislation that sets forth new guidelines for individuals’ rights and responsibilities and addresses the requirements for gathering data.
Major Elements of the Digital India Act
The Digital India Bill, which will eventually become an Act, contains several major elements that will contribute massively towards a safe cyber ecosystem. Some of these elements aim towards the following:
- The legislation attempts to establish an internet regulator.
- Women and Child safety.
- Safe harbour for intermediaries.
- Balancing the individual’s right to secure his or her information against the requirement to utilise personal data for legal purposes is the main obstacle to data protection and regulation; the law tries to deal with this difficulty.
- A limit will be placed on how far a person’s personal information can be accessed for legal reasons.
- Many of the bill’s characteristics are compared with the EU’s General Data Protection Regulation.
The Way Ahead
As we ride the wave of developments in cyberspace regarding emerging technologies and automated gadgets, it becomes pertinent that the state takes due note of such technologies and the courts take cognisance of offences committed by using technology. Law enforcement agencies must also train police personnel who can effectively and efficiently investigate cybercrime cases. The ministry also released a few bills last year, such as the Telecommunication Bill, 2022, the Intermediary Rules and the Digital Personal Data Protection Bill, 2022, to better address the shortcomings and issues in cyberspace and safeguard netizens. The Digital India Act will essentially create a synergy between the current bills and the new ones to come in order to create a wholesome, safe and secure Indian cyber ecosystem.
Conclusion
The Digital India Bill is necessary to address the challenges of cyberspace, like personal data and privacy and policies related to online child and women safety, and to create a modern and comprehensive legal framework that aligns with global standards of cyber laws. The draft of the bill is expected to come out by July. The ministry looks forward to maximising the impact of the bill through such continuous and effective public consultation, to understand and fulfil the expectations and requirements of Indian netizens and empower them on par with the netizens of developed countries.
The spread of misinformation has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push for combating misinformation is rooted in the growing awareness that misinformation leads to sentiment exploitation and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. The spread of misinformation online poses a risk not only to the everyday content consumer and sharer but also to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example of such a law is the EU's Digital Services Act, created to regulate digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages caused by misinformation; defamation suits are a standard recourse against those who spread it. In India, the Prohibition of Fake News on Social Media Bill, 2023, is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms operate on a trust model in which users trust the platform and its content. If users lose trust in a platform because of misinformation, engagement can fall. This might even lead to negative coverage that affects public opinion of the brand and its value and viability in the long run.
- Financial Consequences: Businesses that engage with the platform may end their engagement with platforms accused of misinformation, which can lead to a revenue drop. This can also have major consequences affecting the long-term financial health of the platform, such as a decline in stock prices.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is in question, its users can migrate to other platforms, leading to a loss of market share in favour of those that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: A balance must be struck between freedom of expression and the prevention of misinformation. Stricter content moderation can invite accusations of censorship if users feel their opinions are unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as the content they allow affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, so platforms bear a social responsibility too.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- The implementation of a more robust content moderation system by the platforms using a combination of AI and human oversight for the identification and removal of misinformation in an effective manner.
- Enhancing the transparency in platform policies for content moderation and decision-making would build user trust and reduce the backlash associated with perceived censorship.
- Collaborations with fact checkers in the form of partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engage with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Investing in media literacy initiatives that help users critically evaluate the content available to them.
Final Takeaways
The proliferation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. As a result, there is a critical need to balance the interlinked but seemingly exclusive priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust content moderation systems with built-in transparency, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds