Launch of Central Suspect Registry to Combat Cyber Crimes
Introduction
The Indian government has introduced initiatives to enhance data sharing between law enforcement and stakeholders to combat cybercrime. Union Home Minister Amit Shah launched the Central Suspect Registry, Cyber Fraud Mitigation Centre, Samanvay Platform and Cyber Commandos programme at the Indian Cyber Crime Coordination Centre (I4C) Foundation Day celebration, held on 10th September 2024 at Vigyan Bhawan, New Delhi. The ‘Central Suspect Registry’ will serve as a central-level database with consolidated data on cybercrime suspects nationwide. The Indian Cyber Crime Coordination Centre will share a list of all repeat offenders on its servers. Shri Shah added that establishing the Suspect Registry at the central level and connecting the states to it will help in the prevention of cybercrime.
Key Highlights of Central Suspect Registry
The Indian Cyber Crime Coordination Centre (I4C) has established the suspect registry in collaboration with banks and financial intermediaries to enhance fraud risk management in the financial ecosystem. The registry will serve as a central-level database with consolidated data on cybercrime suspects. Using data from the National Cybercrime Reporting Portal (NCRP), the registry makes it possible to identify cybercriminals as potential threats.
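To make the idea concrete, the sketch below shows how a bank or financial intermediary might screen a payee against such a registry before releasing a transfer. This is purely hypothetical: the registry contents, identifiers and lookup interface are assumptions, since the actual integration between I4C, the NCRP and financial institutions is not publicly specified.

```python
# Purely hypothetical sketch of a fraud-risk check against a central suspect registry.
# The identifiers and lookup interface are illustrative assumptions.

suspect_registry = {
    # identifiers (e.g. mobile numbers, account IDs) consolidated from NCRP complaints
    "mobile:+911234500000",
    "account:XX0000EXAMPLE",
}

def flag_for_review(payee_mobile: str, payee_account: str) -> bool:
    """Return True if the payee matches a registry entry and the transaction
    should be held for additional fraud-risk checks."""
    keys = {f"mobile:{payee_mobile}", f"account:{payee_account}"}
    return bool(keys & suspect_registry)

if flag_for_review("+911234500000", "SB123456"):
    print("Hold transaction: payee matches a Central Suspect Registry entry")
```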
Central Suspect Registry: Need of the Hour
The Union Home Minister of India, Shri Shah, has emphasized the need for a national Cyber Suspect Registry to combat cybercrime. He argued that having separate registries for each state would not be effective, as cybercriminals have no boundaries. He emphasized the importance of connecting states to this platform, stating it would significantly help prevent future cyber crimes.
CyberPeace Outlook
There has been an alarming uptick in cybercrimes in the country, highlighting the need for proactive approaches to counter emerging threats. The recently launched initiatives under the umbrella of the Indian Cyber Crime Coordination Centre are significant steps by the Centre to improve coordination between law enforcement agencies, strengthen user awareness, and provide the technical capabilities needed to target cybercriminals, with the overall aim of combating the growing rate of cybercrime in the country.

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing a variety of tasks and can assist individuals and business entities alike. Certain regulatory mechanisms have also been established for the ethical and reasonable use of such advanced technologies. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and for committing various cyber frauds, and this misuse has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of generating images and videos that appear real but are in fact created by machine algorithms. Because the technology is easily accessible, fraudsters misuse it to commit various cyber crimes and to deceive and scam people through fabricated images, videos, and audio that look and sound convincingly real.
Voice cloning
Audio can be deepfaked too: a voice clone of virtually anyone can be generated that closely resembles the real voice but is, in actuality, fake. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. The scammer, using AI deepfake technology, impersonated the face of the former colleague and asked for financial help of Rs. 40,000.
Uttarakhand Police issues warning on the rising trend of AI-based scams
Recently, the Uttarakhand Police’s Special Task Force (STF) issued a warning acknowledging the widespread use of AI-based scams, such as deepfake and voice-cloning scams, targeting innocent people. The police expressed concern over several reported incidents in which innocent people were lured by cybercriminals. Exploiting advanced technologies, cybercriminals manipulate victims into believing they are talking to close ones or friends, when in reality they are interacting with fake voice clones or deepfake video calls. In this way, cybercriminals ask for immediate financial help, which ultimately leads to financial losses for the victims of such scams.
Tamil Nadu Police issues advisory on deepfake scams
Cybercriminals misuse deepfake technologies to deceive people and target them for financial gain. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams. Fraudsters are creating highly convincing images, videos or voice clones to defraud innocent people and make them victims of financial fraud. The advisory asks users to limit the personal data they share online and to adjust their privacy settings. It also advises promptly reporting any suspicious activity or cyber crime to the 1930 helpline or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to video quality: deepfake videos often have poor quality, unusual blurring or odd resolution, which calls their genuineness into question. Deepfake videos also often loop or freeze unusually, indicating that the content might be fabricated.
- Whenever you receive a request for immediate financial help, do not respond in haste; verify the situation by directly contacting the person on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, leaving the victim no time to think and pressuring them into a quick decision. They pose sudden emergencies and demand financial support on an urgent basis.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Be alert to anything suspicious, and avoid sharing voice notes with unknown users, because scammers might use them as voice samples to create a clone of your voice.
- If you fall victim to such frauds, one powerful resource available is the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) and the 1930 toll-free helpline number where you can report cyber fraud, including any financial crimes.
Conclusion
Cybercriminals leverage AI-powered technologies to commit cyber crimes such as deepfake scams and voice-clone scams, in which innocent people are lured and defrauded. Hence, there is a need for awareness and caution among the public. We should remain vigilant about the growing incidence of AI-based cyber scams and follow best practices to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/

On 22nd October 2024, Jyotiraditya Scindia, Union Minister for Communications, launched the Department of Telecommunications’ (DoT) International Incoming Spoofed Calls Prevention System. It was introduced as part of efforts to prevent international fraudulent calls that enable cyber crimes. According to a recent PIB report, the system has been effective, contributing to a 90% reduction in the number of spoofed international calls, which fell from 1.35 crore to 6 lakh within two months of the system's launch.
International spoofed calls are calls that masquerade as numbers originating from within the country when displayed on the target's mobile screen. This is done by manipulating the calling line identity (CLI), commonly known as the phone number. Previously reported cases mention that such spoofed calls have been used for conducting financial scams, impersonating government officials to carry out digital arrests, and inducing panic. Instances of callers posing as TRAI officials threatening to disconnect numbers, or as narcotics officials claiming to have found drugs or contraband in couriers, are also rampant.
International Incoming Spoofed Calls Prevention System
As addressed in the 2024 Budget, the system was previously called the Centralised International Out Roamer (CIOR), and the DoT was allocated Rs. 38.76 crore for it. The Digital Intelligence Unit (DIU) under the DoT is another project that aims to investigate and research the fraudulent use of telecom resources, including messages, scams, and spam; its budget has been increased from Rs. 50 crore to Rs. 85 crore.
The International Incoming Spoofed Calls Prevention System was implemented in two phases, the first at the level of the telephone companies (telcos). Telcos can verify their own subscribers' Indian SIMs through the Indian Telecom Service Providers' (TSPs) international long-distance (ILD) networks. When a user with an Indian number travels abroad, the roaming feature gets activated and all calls hit the ILD network of the TSP. This allows the TSP to verify whether numbers starting with +91 are genuinely making calls from abroad or from within India. However, a TSP can only verify numbers issued on its own ILD network and not those of other TSPs. This issue was addressed in the second phase, in which the DIU of the DoT and the TSPs built an integrated system so that a centralised database could be used to check for genuine subscribers.
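The verification logic described above can be pictured with a minimal sketch. This is only an illustrative assumption of how an ILD gateway might decide whether a call presenting an Indian CLI is spoofed; the data structures, names and flow are hypothetical, not the DoT's or any TSP's actual implementation.

```python
# Hypothetical sketch of the spoofed-call check described above.
# All data structures and names are illustrative assumptions.

# Phase 1: a TSP can only vouch for SIMs it issued itself.
own_subscribers = {"+919812345678", "+919876543210"}
own_subscribers_roaming_abroad = {"+919812345678"}  # currently latched on foreign networks

# Phase 2: a centralised database integrates roaming status across all TSPs.
central_roaming_registry = {"+919812345678", "+917700112233"}

def is_spoofed(cli: str, use_central_registry: bool = True) -> bool:
    """Return True if an incoming international call presenting an Indian
    CLI (+91...) cannot correspond to a genuine subscriber roaming abroad."""
    if not cli.startswith("+91"):
        return False  # not presenting an Indian identity; out of scope here
    if use_central_registry:
        # Phase 2: any gateway can consult the shared registry.
        return cli not in central_roaming_registry
    # Phase 1: verification limited to the TSP's own subscribers.
    if cli in own_subscribers:
        return cli not in own_subscribers_roaming_abroad
    return False  # numbers issued by other TSPs cannot be verified

# An ILD gateway would block or label calls for which is_spoofed(...) is True.
print(is_spoofed("+919876543210"))  # True: Indian number not actually roaming abroad
```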
CyberPeace Outlook
A press release dated 23rd December 2024 encouraged TSPs to label incoming international calls as 'International Call' on the receiver's mobile screen. Some of them have already started adding such labels and are sending awareness messages with tips on staying safe from scams. Apart from these measures, applications are also available online that help identify callers and their locations; however, these depend on users' own efforts and carry only moderate trust value. At the level of the public, the practices of blocking unknown international numbers, not calling them back, and staying aware of country codes are encouraged. Coordinated and updated efforts on the part of the Government and the TSPs are much appreciated at a time when scammers continue to find new ways to commit cyber crimes using telecommunication resources.
References
- https://www.hindustantimes.com/india-news/jyotiraditya-scindia-launches-dot-system-to-block-spam-international-calls-101729615441509.html
- https://www.business-standard.com/india-news/centre-launches-system-to-block-international-spoofed-calls-curb-fraud-124102300449_1.html
- https://www.opindia.com/2024/12/number-of-spoofed-international-calls-used-in-cyber-crimes-goes-down-by-90-in-2-months/
- https://www.cnbctv18.com/technology/telecom/telecom-department-anti-spoofed-international-calls-19529459.htm
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2067113
- https://pib.gov.in/PressReleasePage.aspx?PRID=2087644
- https://www.hindustantimes.com/india-news/display-international-call-for-calls-from-abroad-to-curb-scams-dot-to-telecos-101735050551449.html

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, Murf, etc., which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the most targeted sector for deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: The usage of AI will allow cybercriminals to deploy malware that can adapt in real-time to carry out phishing attacks and inundate targets with increased speed and variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
CyberPeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can be designed to overwhelm with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will increasingly have to use AI-based cybersecurity for detection and response, since generative AI is becoming essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image and voice generation/manipulation technologies, enhanced authentication measures such as token-based or other hardware-based authentication, abnormal behaviour detection, multi-device push notifications, geolocation verification, etc. can be used to improve prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented (a toy illustration of layering such signals follows after this list).
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
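As a toy illustration of the MFA-strengthening idea above, the sketch below layers a few of the signals mentioned (geolocation, device enrolment, behavioural deviation) into a simple risk score that triggers step-up authentication. The fields, weights and threshold are assumptions for illustration only, not any institution's real logic.

```python
# Illustrative sketch only: a toy risk-scoring step layered on top of MFA.
# Field names, weights and the threshold are assumptions, not real banking logic.
from dataclasses import dataclass

@dataclass
class LoginContext:
    geo_country: str            # country the request originates from
    registered_country: str     # customer's usual country
    known_device: bool          # device previously enrolled for push MFA
    behaviour_zscore: float     # deviation from the user's usual behaviour baseline

def step_up_required(ctx: LoginContext) -> bool:
    """Return True if extra verification (hardware token, call-back, etc.)
    should be demanded before a sensitive action is allowed."""
    score = 0
    if ctx.geo_country != ctx.registered_country:
        score += 2              # geolocation mismatch
    if not ctx.known_device:
        score += 2              # unenrolled device cannot receive trusted push
    if abs(ctx.behaviour_zscore) > 3:
        score += 1              # abnormal behaviour relative to baseline
    return score >= 3           # assumed threshold for step-up authentication

print(step_up_required(LoginContext("SG", "IN", False, 0.4)))  # True
```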
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training, to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr