Domestic UPI Frauds: Finance Ministry Presented Data in Lok Sabha
Introduction
According to the Finance Ministry's data, the incidence of domestic Unified Payments Interface (UPI) fraud rose by 85% in FY 2023-24 compared to FY 2022-23. Further, as of September of FY 2024-25, 6.32 lakh fraud cases had already been reported, amounting to Rs 485 crore. The data was shared by the Finance Ministry on 25th November 2024, in response to a question in the Lok Sabha's winter session about fraud in UPI transactions during the past three fiscal years.
UPI Frauds and Government's Countermeasures
In response to the query about measures taken by the government to ensure safe and secure UPI transactions and to prevent fraud, the ministry highlighted the following:
- The Reserve Bank of India (RBI) has operated the Central Payment Fraud Information Registry (CPFIR), a web-based tool for reporting payment-related frauds, since March 2020, and requires all Regulated Entities (REs) to report payment-related frauds to the CPFIR.
- The Government, RBI, and National Payments Corporation of India (NPCI) have implemented various measures to prevent payment-related frauds, including UPI transaction frauds. These include device binding, two-factor authentication through PIN, daily transaction limits, and limits on use cases (a simplified sketch of such checks follows this list).
- Further, NPCI offers a fraud monitoring solution for banks, enabling them to flag and decline suspicious transactions using AI/ML models. The RBI and banks are also promoting awareness through SMS, radio, and publicity on 'cyber-crime prevention'.
- The Ministry of Home Affairs has launched the National Cybercrime Reporting Portal (NCRP) (www.cybercrime.gov.in) and the National Cybercrime Helpline (1930) to help citizens report cyber incidents, including financial fraud. Customers can also report fraud through their bank's official website or at bank branches.
- The Department of Telecommunications has introduced the Digital Intelligence Platform (DIP) and 'Chakshu' facility on the Sanchar Saathi portal, enabling citizens to report suspected fraud messages via call, SMS, or WhatsApp.
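To make these safeguards concrete, here is a minimal sketch of how a payment app might combine device binding, PIN verification, and a daily transaction limit before authorizing a UPI payment. It is an illustration only: the function, the Rs 1,00,000 cap, and the checks are simplified assumptions, not NPCI's or any bank's actual implementation.

```python
# Illustrative pre-transaction safeguards for a UPI payment:
# device binding, PIN-based second factor, and a daily limit.
# All names, limits, and logic are simplified assumptions.
DAILY_LIMIT_INR = 100_000  # assumed daily cap, for illustration only

def authorize(txn_amount: float, device_id: str, bound_device_id: str,
              pin_ok: bool, spent_today: float) -> bool:
    """Return True only if every safeguard passes."""
    if device_id != bound_device_id:
        return False  # device binding: request must come from the registered device
    if not pin_ok:
        return False  # two-factor authentication via UPI PIN failed
    if spent_today + txn_amount > DAILY_LIMIT_INR:
        return False  # daily transaction limit would be exceeded
    return True

# A Rs 60,000 payment after Rs 50,000 already spent today is declined.
print(authorize(60_000, "dev-123", "dev-123", True, 50_000))  # False
```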
Conclusion
UPI is India's most popular digital payment method. As of June 2024, UPI had around 350 million active users in India. The Indian Cyber Crime Coordination Centre (I4C) reports that 'Online Financial Fraud', a cybercrime category under the NCRP, is the most prevalent. The rise of financial fraud, particularly UPI fraud, is cause for alarm, as scammers use sophisticated strategies to deceive victims. It is high time for netizens to exercise caution with their personal and financial information, stay aware of common tactics used by fraudsters, and adhere to best security practices for secure transactions and the safe use of UPI services.

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) sets harmonized rules across the EU and amends key regulations and directives to ensure a robust framework for AI technologies. After two years in development, the Act enters into force across all 27 EU Member States on 1 August 2024, twenty days after publication, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, including biometric categorization, untargeted scraping of facial images, and social scoring systems; systems that try to read emotions are banned in the workplace and schools, and the use of predictive policing tools is prohibited in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with different legal provisions starting to apply at a series of deadlines.
The framework places different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated, as they are considered low-risk, but a small number of potential AI use cases are banned outright. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed, but developers of such applications face obligations in areas like data quality and anti-bias safeguards. A third risk tier applies lighter transparency requirements to makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
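To make the 'whichever is higher' rule concrete, here is a minimal sketch of the computation; the function name and example turnovers are our own illustration, not part of the Act:

```python
# Upper bound of the fine for the most serious violations under the
# EU AI Act: EUR 35 million or 7% of total worldwide annual turnover,
# whichever is higher. Illustrative only; not legal advice.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For EUR 1 billion turnover, 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For EUR 100 million turnover, the EUR 35M floor applies.
print(max_fine_eur(100_000_000))    # 35000000.0
```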
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk (a simplified classification sketch follows this list). It prohibits unacceptable-risk uses such as social scoring systems and manipulative AI. The regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI, such as chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which includes the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country; they also apply to third-country providers whose high-risk AI systems' output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with the copyright requirement and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
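As a rough illustration of the tiered structure described above, the following sketch maps example use cases from this article to risk tiers. The enum, the mapping, and the obligation summaries are our own simplification, not the Act's legal text:

```python
# A simplified, illustrative mapping of the AI Act's four risk tiers
# to example use cases mentioned above. Not the Act's legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations (data quality, anti-bias, documentation)"
    LIMITED = "transparency obligations (disclose AI interaction)"
    MINIMAL = "free use"

EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "manipulative AI": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "AI in employment or critical infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```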
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems will have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is as follows (a small date computation sketch follows the list):
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) and its corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) and its corresponding obligations apply.
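These milestones can be derived mechanically from the entry-into-force date plus the month offsets cited above (6, 12, 24, and 36 months). The sketch below does exactly that; the labels are our shorthand, and the Act's own text governs the precise deadlines:

```python
# Derive the phased application dates from entry into force (1 Aug 2024)
# plus the month offsets cited above. Labels are shorthand; the Act's
# text governs the exact deadlines.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (safe here since the day is the 1st)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

MILESTONES = {
    "prohibitions apply": 6,
    "GPAI and governance rules apply": 12,
    "Act generally applicable": 24,
    "Article 6(1) high-risk obligations apply": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months):%B %Y}: {label}")
# Prints February 2025, August 2025, August 2026, August 2027.
```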
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act will apply to AI systems and GPAI models, adopting a risk-based approach to AI governance that categorizes potential risks into four tiers (unacceptable, high, limited, and minimal), with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure that their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation. Hence, flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Introduction
The emergence of deepfake technology has become a significant problem in an era driven by technological growth. Concerned about the exploitation of this technology, given its extraordinary realism in manipulating information, the government has reacted proactively. The central government is in the vanguard of defending national interests, public trust, and security as the digital world changes. On 26th December 2023, it issued an advisory to intermediaries, highlighting how urgent it is to confront this growing threat.
The directive aims to directly address the growing concerns around deepfakes, or misinformation driven by AI. The advisory is the result of talks that Union Minister Shri Rajeev Chandrasekhar held with intermediaries over the course of a month-long Digital India dialogue. Its main aim is to inform users accurately and clearly about prohibited content, especially the content listed under Rule 3(1)(b) of the IT Rules.
Advisory
The Ministry of Electronics and Information Technology (MeitY) has sent a formal advisory to all intermediaries, requesting adherence to existing IT rules and emphasizing the need to address misinformation, specifically misinformation driven by artificial intelligence (AI), such as deepfakes. Union Minister Rajeev Chandrasekhar released the advisory, which highlights the necessity of communicating prohibited content to users in a clear and understandable manner, particularly in light of Rule 3(1)(b) of the IT Rules.
Advisory on Prohibited Content Communication
According to MeitY's advisory, intermediaries must communicate the content prohibited under Rule 3(1)(b) of the IT Rules to users in a clear and precise manner. This involves giving users these details during enrollment, login, and content sharing or uploading on the platform, as well as including such information in customer contracts and terms of service.
Ensuring Users Are Aware of the Rules
Digital platform providers are required to inform their users of the laws relevant to them, including the provisions of the IT Act, 2000 and the Indian Penal Code (IPC). Companies should inform users of the potential consequences of breaching the restrictions outlined in Rule 3(1)(b) and should also urge users to report any illegal activity to law enforcement.
Talks Concerning Deepfakes
For more than a month, Union Minister Rajeev Chandrasekhar held significant talks with various platforms addressing the issue of "deepfakes", or computer-generated fake videos. The discussions emphasized how crucial it is that everyone abides by the laws and regulations in effect, particularly the IT Rules, to prevent deepfakes from spreading.
Addressing the Danger of Disinformation
Minister Chandrasekhar underlined the grave issue of disinformation, particularly in the context of deepfakes: false content produced using the latest developments in artificial intelligence. He emphasized the dangers such deceptive content poses to internet users' security and trust. The Minister highlighted the efficiency of the IT Rules in addressing this issue and cited the Prime Minister's caution about the risks of deepfakes.
Rule Against Spreading False Information
The Minister referred particularly to Rule 3(1)(b)(v), which states unequivocally that disseminating false information is forbidden, even when doing so involves cutting-edge technology like deepfakes. He called on intermediaries, the businesses that offer digital platforms, to take prompt action to remove such content from their systems. He also made clear that breaking these rules carries legal consequences.
Analysis
The Central Government's latest advisory on deepfake technology demonstrates a proactive strategy to deal with new issues. It also highlights the necessity of comprehensive legislation to directly regulate AI material, particularly with regard to user interests.
Although the current guideline concentrates on the precision and clarity of information dissemination, a wider regulatory vacuum remains for content produced by artificial intelligence. While some limitations are mentioned in existing laws, there are no clear guidelines for controlling or differentiating AI-generated content.
Positively, it is laudable that the government has recognized the dangers posed by deepfakes and is making appropriate efforts to counter them. As AI technology develops, there is a chance to create thorough laws that not only solve problems but also create a supportive environment for the creation of ethical AI content. User protection, accountability, transparency, and ethical AI use would all benefit from such laws. This offers an opportunity for regulatory development to guarantee the successful and advantageous incorporation of AI into our digital environment.
Conclusion
The Central Government's preemptive advisory on deepfake technology shows a strong dedication to tackling emerging risks in the digital sphere. The advisory underscores the urgency of combating deepfakes while also highlighting the necessity for extensive legislation on AI-generated content. The lack of clear norms offers a chance for constructive regulatory development to protect the interests of users. The advancement of AI technology necessitates rules that promote the creation of ethical AI content, guaranteeing user protection, accountability, and transparency. This is a turning point in the evolution of regulations, making it easier to responsibly incorporate AI into our changing digital landscape.
References
- https://economictimes.indiatimes.com/tech/technology/deepfake-menace-govt-issues-advisory-to-intermediaries-to-comply-with-existing-it-rules/articleshow/106297813.cms
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542
- https://www.timesnownews.com/india/centres-deepfake-warning-to-it-firms-ensure-users-dont-violate-content-rules-article-106298282

Executive Summary:
A recent claim circulating on social media that a child created sand sculptures of cricket legend Mahendra Singh Dhoni has been proven false by the CyberPeace Research Team. The team discovered that the images were actually produced using an AI tool. From unusual details like extra fingers and unnatural characteristics in the sculptures, the Research Team discerned the likelihood of artificial creation, a suspicion further substantiated by AI detection tools. This incident underscores the need to fact-check information before posting, as misinformation can quickly go viral on social media. Everyone is advised to carefully assess content to stop the spread of false information.

Claims:
The claim is that the photographs published on social media show sand sculptures of cricketer Mahendra Singh Dhoni made by a child.

Fact Check:
Upon receiving the posts, we carefully examined the images. The collage of four pictures has many anomalies that are clear signs of AI-generated images.

In the first image, the left hand of the sand sculpture has six fingers, and in the word 'INDIA', the letter 'A' is misaligned, i.e., not on the same line as the other letters. In the second image, one of the boy's fingers is missing, and the sand sculpture has four fingers on its front foot and three legs. In the third image, the boy's slipper is partly missing, with only a portion of it visible, and in the fourth image, the boy's hand does not look like a hand. These are some of the major discrepancies clearly visible in the images.
We then checked the images using an AI image detection tool named 'Hive'; it detected the image as 100.0% AI-generated.

We then checked with another tool, ContentAtScale AI image detection, which found the image to be 98% AI-generated.

From this, we concluded that the image is AI-generated and has no connection with the claim made in the viral social media posts. We have also previously debunked an AI-generated artwork of a sand sculpture of Indian cricketer Virat Kohli, which had the same types of anomalies as those seen in this case.
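For readers who want to automate a similar first-pass check, the sketch below submits an image to a generic AI-image-detection HTTP API and reads back a score. The endpoint URL, auth header, and response fields are placeholders, not the actual Hive or ContentAtScale APIs, so consult the vendor's documentation before relying on it:

```python
# A hedged sketch of a first-pass automated check: POST an image to a
# hypothetical AI-image-detection endpoint and read back a score.
# The endpoint URL, auth header, and response schema are placeholders,
# NOT the real Hive or ContentAtScale APIs; adapt to the vendor's docs.
import requests

ENDPOINT = "https://api.example-detector.com/v1/ai-image-score"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder credential

def ai_generated_score(image_path: str) -> float:
    """Return the detector's 0-100 'AI-generated' confidence for an image."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.98}
    return resp.json()["ai_generated_probability"] * 100

score = ai_generated_score("sand_sculpture.jpg")
print(f"AI-generated likelihood: {score:.1f}%")  # e.g., 98.0%
```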
Conclusion:
Taking into consideration the distortions spotted in the images and the results of the AI detection tools, it can be concluded that the claim that the pictures show a child's sand sculptures of cricketer Mahendra Singh Dhoni is false. The pictures were created with artificial intelligence. It is important to check and authenticate content before posting it on social media.
- Claim: The pictures shared on social media show a child's sand sculptures of cricketer Mahendra Singh Dhoni.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook, YouTube
- Fact Check: Fake & Misleading