#FactCheck - RBI's Alleged Guidelines on Ink Colour for Cheque Writing
Executive Summary:
A viral message is circulating claiming that the Reserve Bank of India (RBI) has banned the use of black ink for writing cheques. This claim is incorrect: the RBI has not issued any such directive, and cheques written in black ink remain valid and acceptable.

Claim:
The Reserve Bank of India (RBI) has issued new guidelines prohibiting the use of black ink for writing cheques. As per the claimed directive, cheques must now be written exclusively in blue or green ink.

Fact Check:
Thorough verification confirms that the claim that the Reserve Bank of India (RBI) has issued a directive banning the use of black ink for writing cheques is entirely false. The RBI has released no such notification, guideline, or instruction. Cheques written in black ink remain valid, and the public is advised to disregard unverified messages and rely only on official communications for accurate information.
As stated by the Press Information Bureau (PIB), this claim is false: the Reserve Bank of India has not prescribed specific ink colours for writing cheques. The colour of ink is mentioned only in point number 8 of guidance on the care customers should take while writing cheques.


Conclusion:
The claim that the Reserve Bank of India has banned the use of black ink for writing cheques is completely false. No such directive, rule, or guideline has been issued by the RBI. Cheques written in black ink are valid and acceptable. The RBI has not prescribed any specific ink colour for writing cheques, and the public is advised to disregard unverified messages. While general precautions for filling out cheques are mentioned in RBI advisories, there is no restriction on the colour of the ink. Always refer to official sources for accurate information.
- Claim: The new RBI ink guidelines are mandatory from a specified date.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
In 2024, with AI technologies evolving rapidly, an AI-driven phishing attack on a large Indian financial institution illustrated the scale of the threat. This case study documents the attack techniques used, the impact on the institution, the response mounted, and the eventual outcome. It also examines the challenges of building better protection against, and awareness of, automated threats.
Introduction
As AI technology advances, its use in cybercrime against financial institutions worldwide has grown significantly. This report analyses a serious incident from early 2024 in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI's strengths in data analysis and persuasive content generation, leading to a severe compromise of the bank's internal systems.
Background
The targeted institution, one of the largest banks in India, had a strong track record of rigorous cybersecurity policies. However, AI-based attack methods posed new threats that its existing defences could not fully counter. The attackers concentrated on the bank's senior executives, since compromising such individuals offers a route into internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that closely mimicked internal correspondence between employees. Drawing on the executives' Facebook and Twitter content, blog posts, LinkedIn connection histories, and email tone, the AI generated highly specific emails. Many carried official formatting, internal terminology, and the CEO's writing style, making them very realistic.
The phishing emails contained links to a fake internal portal designed to harvest login credentials. Because the emails were so convincing, the targeted individuals believed they were genuine and entered their credentials, giving the attackers access to the bank's network.
Impact
The attack affected the bank across every dimension. Several executives surrendered their passwords to the fake emails, compromising financial databases containing customer account and transaction information. The break-in allowed the criminals to take down a number of the bank's online services, disrupting its operations and its customers for several days.
Customer trust also took a severe blow, as the breach exposed the bank's vulnerability to contemporary cyber threats. Beyond the immediate work of containing the breach, the institution also faced a long-term reputational hit.
Technical Analysis and Findings
1. AI-Driven Generation of the Phishing Emails
- The attack relied on powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as a GPT (Generative Pre-trained Transformer). Because such models are trained on very large text corpora, the attackers could feed them samples of conversations from social networks, emails, and workplace correspondence to produce highly credible messages (a generic illustration of this text-generation capability follows the feature list below).
Key Technical Features:
- Contextual Understanding: The AI took prior interactions into account and wrote follow-up emails that were consistent with earlier discourse.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, reproducing elements such as tone, wording, and the format of the signature line.
- Adaptive Learning: The AI adjusted to mistakes and feedback, tweaking subsequent emails in ways that made detection harder.
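To illustrate the underlying capability (rather than the attackers' actual tooling, which is not public), the minimal sketch below shows how an off-the-shelf transformer model completes a prompt in a context-aware way. It assumes the Hugging Face transformers package; the model choice and prompt are placeholders.

```python
# Illustrative sketch only: a small public model completing a prompt in context.
# The model ("gpt2") and prompt are placeholders, not details from the incident.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Following up on yesterday's review meeting, please"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Larger, fine-tuned models extend the same completion mechanism to mimic a specific person's tone and formatting, which is why such emails can be hard to distinguish from genuine correspondence.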
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this was a spear-phishing attack in which specific individuals were targeted with tailored emails. Machine-learning-driven social engineering significantly increased the chances that particular individuals would respond to particular emails.
Key Technical Features:
- Targeted Data Harvesting: Automated scrapers identified the organisation's employees, and public profiles and messaging platforms were scraped so that messages could be tailored to them.
- Behavioural Analysis: The AI used recent behaviour patterns from social networking sites and other online platforms to forecast how targets were likely to act, such as clicking links or opening attachments.
- Real-Time Adjustments: When responses to the phishing emails were observed, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. The emails were modified in ways that slipped past spam filters while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different aspects of each message, producing multiple versions of the phishing email designed to defeat different filtering algorithms.
- Polymorphic Attacks: The campaign used polymorphic code, with the payloads behind the links changing frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: AI was also used to generate and deploy phantom domains: real websites that appear legitimate but are short-lived and created specifically for the phishing campaign, further complicating detection (a simple registration-age check is sketched below).
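One practical countermeasure against short-lived phantom domains is to check how recently a linked domain was registered. The sketch below assumes the third-party python-whois package; the 30-day threshold and the example domain are illustrative, not values from the incident.

```python
# Defensive sketch: flag links whose domains were registered only recently,
# a common trait of short-lived phishing ("phantom") domains.
from datetime import datetime, timezone
import whois  # pip install python-whois

def domain_age_days(domain: str) -> int | None:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):          # some registrars return several dates
        created = created[0]
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days

def looks_like_phantom_domain(domain: str, min_age_days: int = 30) -> bool:
    age = domain_age_days(domain)
    return age is None or age < min_age_days   # unknown age is treated as suspicious

print(looks_like_phantom_domain("example.com"))   # long-registered, so False
```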
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to defer to authority.
Key Technical Features:
- Social Engineering: The AI identified psychological levers, chiefly urgency and familiarity, that maximised the chance of recipients opening the phishing emails.
- Multi-Layered Deception: The AI employed a two-tiered approach: once a target opened the first email, a second email followed under the pretext of being a legitimate follow-up from a genuine company or individual.
Response
On detecting the breach, the bank's cybersecurity team sprang into action to limit the fallout. The incident was reported to the Indian Computer Emergency Response Team (CERT-In) to help trace the attack's origin and block further intrusion. The bank also moved quickly to strengthen its defences, for instance by tightening email filtering and hardening its authentication procedures.
Recognising the risks, the bank also launched a wide-ranging cybersecurity awareness programme, educating employees about AI-driven phishing and the need to verify a sender's identity before acting on an email.
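As one small example of the kind of email-authentication hardening mentioned above, the sketch below checks whether a sending domain publishes a DMARC policy. It assumes the dnspython package; the domain is illustrative.

```python
# Minimal sketch: look up a domain's DMARC policy (one building block of
# stricter sender authentication). Absence of a policy is a warning sign.
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str | None:
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

policy = dmarc_policy("example.com")
print(policy or "No DMARC record found - treat mail claiming this domain with caution")
```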
Outcome
Although the bank restored its operations without critical long-term damage, the incident raised serious concerns. Reported losses included compensation for affected customers and the cost of strengthening the institution's cybersecurity. More significantly, customers and shareholders began to doubt the organisation's ability to safeguard information in an era of advanced AI-driven cyber threats.
The case underlines how important it is for financial firms to align their security strategy with emerging threats. It is also a warning to other organisations that they are not immune to AI-assisted attacks and should put appropriate countermeasures in place.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of what modern attackers are capable of. As AI technology advances, so do the cyberattacks built on it. Financial institutions and other organisations must adopt AI-aware cybersecurity solutions to protect their systems and data.
The case also highlights the importance of training employees so that attacks do not succeed in the first place. Cybersecurity awareness, secure employee behaviour, and practices that help staff recognise and report likely AI-enabled attacks all reduce the risk an organisation faces.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response tools capable of mitigating AI-enabled cyber threats in real time (a minimal anomaly-detection sketch follows this list).
- Employee Training Programs: All employees should undergo regular cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require stronger identity verification and additional security steps.
- Collaboration with CERT-In: Continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its counterparts to monitor new threats and act on validated recommendations.
- Public Communication Strategies: Effective communication plans are needed to keep customers informed and maintain trust even while an organisation is handling a cyber threat.
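As a minimal illustration of the AI-based defences recommended above, the sketch below trains an unsupervised anomaly detector on simple login features and flags an out-of-pattern login. The features, data, and thresholds are invented for demonstration and are not drawn from the incident.

```python
# Illustrative sketch: unsupervised anomaly detection over simple login features
# (hour of day, new-device flag, geo-velocity score). All data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(11, 2, 500),            # login hour, clustered in business hours
    rng.binomial(1, 0.05, 500),        # new-device flag, mostly 0
    rng.normal(0.2, 0.1, 500),         # geo-velocity score, mostly low
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious_login = np.array([[3.0, 1.0, 0.9]])   # 3 a.m., new device, high velocity
print(detector.predict(suspicious_login))         # -1 means flagged as anomalous
```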
By implementing these measures, financial institutions can better prepare for the new threats that AI-enabled attacks pose to essential financial assets in today's complex IT environments.
Overview:
WazirX, an India-based cryptocurrency exchange, was hacked and lost more than $230 million in cryptocurrency. The incident involved unauthorised transactions from a multisignature (multisig) wallet managed through Liminal's digital asset management platform. The attack has raised fresh questions about the security of cryptocurrency exchanges and the adequacy of existing policies and laws.
Wallet Configuration and Security Measures
The breached wallet used a multisig configuration, meaning more than one signature was needed to authorise a transaction. It had six signatories: five from WazirX and one from Liminal. Every transaction required approval from at least three WazirX signatories, all of whom used Ledger hardware wallets for added security, plus the approval of the Liminal signatory.
To further secure transactions, a whitelisting policy limited the addresses authorised to receive funds. Even so, the attackers exploited a discrepancy between the information displayed on Liminal's interface and the actual contents of the transaction to seize unauthorised control of the wallet and carry out the theft (a simplified sketch of the approval policy follows).
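The approval policy described above can be pictured as a simple check: a withdrawal is released only if the destination is whitelisted and enough distinct signatories, including the custodian, have approved. The sketch below is a simplified illustration; the addresses, signatory names, and threshold are placeholders, and it deliberately ignores the on-chain signing details.

```python
# Simplified illustration of the described policy: whitelisted destination plus
# at least three exchange approvals plus the custodian's approval.
WHITELIST = {"0xKnownColdWallet", "0xKnownTreasuryWallet"}   # placeholder addresses
REQUIRED_EXCHANGE_APPROVALS = 3

def may_release(destination: str,
                exchange_approvals: set[str],
                custodian_approved: bool) -> bool:
    if destination not in WHITELIST:
        return False
    if len(exchange_approvals) < REQUIRED_EXCHANGE_APPROVALS:
        return False
    return custodian_approved

print(may_release("0xKnownColdWallet", {"sig1", "sig2", "sig3"}, True))    # True
print(may_release("0xAttackerWallet", {"sig1", "sig2", "sig3"}, True))     # False
```

The incident shows that such a policy is only as strong as the signers' ability to verify what they are actually approving, which is the gap the attackers exploited.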
Modus Operandi: Attack Mechanics
The cyber attack appears to have been carefully carried out, with preliminary investigations suggesting the following tactics:
- Payload Manipulation: The attackers apparently substituted the transaction payload during signing, allowing them to reroute funds to an unrelated wallet (a payload-verification sketch follows this list).
- Chain Hopping: To make tracking harder, the attackers moved funds across multiple blockchains and split large sums into thousands of transactions involving different cryptocurrencies, greatly complicating tracing efforts.
- Zero Balance Transactions: Wallets that ended up with a zero Ethereum (ETH) balance were also used to further anonymise the flow of funds.
Analysis of the blockchain data suggests the attackers had been preparing for several days before the attack, pointing to a high degree of planning.
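A basic defence against payload manipulation is to hash the transaction details the signer reviewed and compare that digest with the payload actually presented to the signing device. The sketch below illustrates the principle only; the field names and hashing scheme are placeholders rather than Liminal's or WazirX's actual implementation.

```python
# Defensive sketch: refuse to sign if the payload presented for signing differs
# from the payload the signer reviewed. Fields and hashing scheme are illustrative.
import hashlib
import json

def payload_digest(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

reviewed = {"to": "0xKnownColdWallet", "value": "100", "nonce": 7}
presented_for_signing = {"to": "0xAttackerWallet", "value": "100", "nonce": 7}

if payload_digest(reviewed) != payload_digest(presented_for_signing):
    raise RuntimeError("Payload mismatch: refuse to sign")
```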
Actions taken by WazirX:
Following the attack, WazirX implemented a series of immediate actions:
- User Notifications: Users were immediately informed of the breach and the risks it posed to them.
- Law Enforcement Engagement: The incident was reported to the National Cyber Crime Reporting Portal and to the relevant authorities, including the Financial Intelligence Unit (FIU) and the Indian Computer Emergency Response Team (CERT-In).
- Service Suspension: WazirX suspended all trading, deposits, and withdrawals to prevent further losses and allow investigation.
- Global Outreach: The exchange contacted more than 500 cryptocurrency exchanges, requesting that they blacklist the wallet addresses linked to the theft.
- Bounty Program: A bounty program of up to $23 million was announced to encourage people to share information that could help recover the stolen funds.
Further Investigations
WazirX has stated that it has engaged cybersecurity professionals to help identify the perpetrators and recover the losses. The exchange is still analysing forensic data and working with law enforcement to trace the stolen assets. Nevertheless, the prospects of full recovery remain uncertain, largely because of the complexity of the attack and the methods used by the attackers.
Precautionary measures:
The WazirX cyber attack clearly shows the need to improve security and regulation across the cryptocurrency industry. As exchanges become increasingly frequent targets, there is a pressing need for:
- Stricter Security Protocols: Commitment to technical safeguards such as multi-factor authentication (MFA) and continuous monitoring of wallet activity (a minimal MFA sketch follows this list).
- Regulatory Oversight: Formal laws requiring cryptocurrency exchanges to maintain adequate security to protect users and their investments.
- Community Awareness: Ongoing education about emerging threats, particularly the scams and phishing attempts that typically follow such breaches.
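As a small illustration of the MFA integration mentioned above, the sketch below uses time-based one-time passwords (TOTP). It assumes the pyotp package; the account name and issuer are placeholders.

```python
# Minimal TOTP sketch: the server stores a per-user secret, the user's
# authenticator app generates six-digit codes, and the server verifies them.
import pyotp

secret = pyotp.random_base32()                      # provisioned once per user
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app (placeholder names).
print(totp.provisioning_uri(name="user@exchange.example", issuer_name="ExampleExchange"))

code = totp.now()                                   # what the app would display
print("Code accepted:", totp.verify(code))          # True within the time window
```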
Conclusion:
The cyber attack on WazirX exposes weaknesses in the cryptocurrency market and offers valuable lessons for improving security. It highlights critical vulnerabilities in exchanges even when they employ advanced measures such as multisignature wallets and whitelisting policies. The attack's complexity, involving payload manipulation, chain hopping, and zero-balance transactions, underscores the attackers' meticulous planning and the difficulty of tracing stolen assets. The case sends a strong message about the need for robust security measures and constant vigilance in the rapidly growing world of digital assets. It also highlights the importance of community awareness and education on emerging threats such as the scams and phishing attempts that usually follow such breaches. By fostering a culture of vigilance and knowledge, the cryptocurrency community can better defend against future attacks.
References:
- https://wazirx.com/blog/important-update-cyber-attack-incident-and-measures-to-protect-your-assets/
- https://www.linkedin.com/pulse/wazirx-cyberattack-in-depth-analysis-jyqxf

Introduction
The ongoing debate over whether AI scaling has hit a wall has been reignited by the underwhelming response to OpenAI's ChatGPT v5. AI scaling laws, which hold that machine learning models perform better as training data, model parameters, and computational resources increase, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers now suggest that further improvements will require computational costs that are orders of magnitude larger, which the resulting returns may not justify. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a tech issue but a profound innovation challenge for countries like India, charting their own AI course.
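For readers unfamiliar with the shape of these scaling laws, the sketch below evaluates a commonly cited parametric form, loss ≈ E + A/N^α + B/D^β, where N is the parameter count and D the number of training tokens. The constants are illustrative placeholders roughly in the range of published fits, not authoritative values.

```python
# Illustrative sketch of a power-law scaling curve: each order-of-magnitude
# increase in parameters and data buys a smaller reduction in loss.
def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, alpha: float = 0.34,
                   B: float = 410.0, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for scale in (1e9, 1e10, 1e11):
    print(f"N = D = {scale:.0e}: predicted loss ~ {predicted_loss(scale, scale):.2f}")
```

The shrinking improvement per order of magnitude of compute is what drives the "scaling wall" argument.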
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to find new approaches to training these models, for example by diversifying data types and rethinking training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation.
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all major developers, including OpenAI, xAI (Grok), and DeepSeek, suffer from unreliability, hallucinations, and bias, since they are built primarily by scaling datasets and parameters, an approach with inherent limitations. India should invest in the capabilities needed to address these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programs and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is approaching us, and India can help shape its future. The country’s AI Mission and startup ecosystem are well-positioned to lead this shift by focusing on localised needs, efficient technologies, and inclusive growth, if implemented effectively. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html