Domestic UPI Frauds: Finance Ministry Presents Data in Lok Sabha
Introduction
According to the Finance Ministry's data, the incidence of domestic Unified Payments Interface (UPI) fraud rose by 85% in FY 2023-24 compared to FY 2022-23. Further, as of September of FY 2024-25, 6.32 lakh fraud cases had already been reported, amounting to Rs 485 crore. The data was shared on 25th November 2024 by the Finance Ministry, in response to a question in the Lok Sabha's winter session about fraud in UPI transactions during the past three fiscal years.
UPI Frauds and Government's Countermeasures
In response to the query on measures taken by the government to ensure safe and secure UPI transactions and to prevent fraud, the ministry highlighted the following:
- The Reserve Bank of India (RBI) has operated the Central Payment Fraud Information Registry (CPFIR), a web-based tool for reporting payment-related frauds, since March 2020, and requires all Regulated Entities (REs) to report payment-related frauds to it.
- The Government, RBI, and National Payments Corporation of India (NPCI) have implemented various measures to prevent payment-related frauds, including UPI transaction frauds. These include device binding, two-factor authentication through PIN, daily transaction limits, and limits on use cases.
- Further, NPCI offers a fraud monitoring solution for banks, enabling them to alert and decline transactions using AI/ML models. RBI and banks are also promoting awareness through SMS, radio, and publicity on 'cyber-crime prevention'.
- The Ministry of Home Affairs has launched the National Cybercrime Reporting Portal (NCRP) (www.cybercrime.gov.in) and the National Cybercrime Helpline 1930 to help citizens report cyber incidents, including financial fraud. Customers can also report fraud on their bank's official website or at bank branches.
- The Department of Telecommunications has introduced the Digital Intelligence Platform (DIP) and 'Chakshu' facility on the Sanchar Saathi portal, enabling citizens to report suspected fraud messages via call, SMS, or WhatsApp.
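The layered controls listed above (device binding, PIN-based two-factor authentication, and daily transaction limits) can be sketched as a simple rule pipeline. This is a toy illustration only: the function names, the limit value, and the ordering of checks are assumptions for exposition, not NPCI's or any bank's actual implementation.

```python
from dataclasses import dataclass

# Illustrative daily cap; real UPI limits vary by bank and use case.
DAILY_LIMIT_INR = 100_000

@dataclass
class Transaction:
    amount_inr: float
    device_id: str      # device initiating the payment
    pin_verified: bool  # result of the second-factor (UPI PIN) check

def screen_transaction(txn: Transaction, bound_device_id: str,
                       spent_today_inr: float) -> tuple[bool, str]:
    """Apply the layered checks in order; return (allowed, reason)."""
    if txn.device_id != bound_device_id:
        return False, "device not bound to this account"
    if not txn.pin_verified:
        return False, "UPI PIN not verified"
    if spent_today_inr + txn.amount_inr > DAILY_LIMIT_INR:
        return False, "daily transaction limit exceeded"
    return True, "ok"
```

In practice, static rules like these are only the first layer; as the ministry notes, NPCI's fraud-monitoring solution adds AI/ML risk scoring on top, so that transactions passing the rules can still be flagged or declined.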
Conclusion
UPI is India's most popular digital payment method, with around 350 million active users as of June 2024. The Indian Cyber Crime Coordination Centre (I4C) report indicates that 'Online Financial Fraud', a cybercrime category under the NCRP, is the most prevalent of all. The rise of financial fraud, particularly UPI fraud, is cause for alarm: scammers use sophisticated strategies to deceive victims. It is high time for netizens to exercise caution with their personal and financial information, stay aware of common tactics used by fraudsters, and adhere to best security practices for secure transactions and the safe use of UPI services.
Introduction
Digitalisation presents both opportunities and challenges for micro, small, and medium enterprises (MSMEs) in emerging markets. Digital tools can increase business efficiency and reach but also increase exposure to misinformation, fraud, and cyber attacks. Such cyber threats can lead to financial losses, reputational damage, loss of customer trust, and other challenges hindering MSMEs' ability and desire to participate in the digital economy.
Today's information overload is a major driver of misinformation. Misinformation spreads from or emerges in online sources, causing controversy and confusion in fields including politics, science, medicine, and business. One obvious adverse effect is that MSMEs might lose trust in the digital market. Misinformation can devalue a product, sow mistrust among customers, and cut into a company's revenue. The reach and speed with which misinformation can spread and ruin a company's brand, together with the difficulty businesses face in seeking recourse, may discourage MSMEs from fully embracing the digital ecosystem.
MSMEs are essential for innovation, job development, and economic growth. They contribute considerably to the GDP and account for a sizable share of enterprises. They serve as engines of economic resilience in many nations, including India. Hence, a developing economy’s prosperity and sustainability depend on the MSMEs' growth and such digital threats might hinder this process of growth.
There are widespread incidents of misinformation on social media, and they affect brand and product promotion. MSMEs rely on online platforms for business activities, and threats such as misinformation and other digital risks can result in reputational damage and financial losses. A company's reputation being tarnished by inaccurate information, or a product or service being misrepresented, are just some examples; such incidents can cause MSMEs to lose clients and revenue.
In the digital era, MSMEs need to be vigilant against false information in order to preserve their brand name, clientele, and financial standing. In today's interconnected world, these organisations must develop digital literacy and resilience against misinformation in order to succeed in the long run. Information resilience is crucial for protecting and preserving their reputation in the online market.
The Impact of Misinformation on MSMEs
Misinformation can have serious financial repercussions, such as lost sales, higher expenses, legal fees, harm to the company's reputation, diminished consumer trust, bad press, and a long-lasting unfavourable impact on image. A company's products may lose value as a result of rumours, which might affect both sales and client loyalty.
Inaccurate information can also result in operational mistakes, which can interrupt regular corporate operations and prove costly. When inaccurate information about a product's safety causes demand to decline and inventory to pile up, supply chain disruptions may occur. Misinformation can also lead to operational and reputational issues that cause psychological stress and anxiety at work, and workplace harmony and general productivity may suffer as a result. For MSMEs, false information has serious repercussions that impact their capacity to operate profitably, retain employees, and maintain a sustainable business. Companies need to invest in cybersecurity defences, legal support, and restoring consumer confidence and brand image in order to lessen the effects of false information and ensure smooth operations.
When we consider the financial implications of misinformation spread in the market, be it about the product or the enterprise, the cost is two-fold: there is the loss of revenue, and then the organisation has to bear the costs of countering the misinformation's impact. Stock price volatility is a further consequence for publicly traded MSMEs, as misinformation can cause share price fluctuations, and false negative information might discourage potential investors.
Further, the reputational damage misinformation causes MSMEs is a serious concern, as a loss of reputation can inflict long-term damage on a carefully cultivated brand image.
Misinformation also causes operational disruptions: for instance, false product recalls can occur, and supplier mistrust or false claims about supplier reliability can disrupt procurement and, in turn, MSMEs' day-to-day operations.
Misinformation can also negatively impact employee morale and productivity through its psychological effects, leading to stress and workplace tensions. Staff confidence likewise suffers when misinformation circulates about the brand, and internal operational stability is a core component of any organisation's success.
Misinformation: Key Risk Areas for MSMEs
- Product and Service Misinformation
For MSMEs, misinformation about products and services poses a serious danger since it undermines their credibility and the confidence clients place in the enterprise and its products or services. Because this misleading material might mix in with everyday activities and newsfeeds, viewers may find it challenging to identify fraudulent content. For example, falsehoods and rumours about a company or its goods may travel quickly through social media, impacting the confidence and attitude of customers. Algorithms that favour sensational material have the potential to magnify disinformation, resulting in the broad distribution of erroneous information that can harm a company's brand.
- False Customer Reviews and Testimonials
False testimonials and reviews pose a serious risk to MSMEs. These can be abused to damage a company's brand or enable unfair competition. False testimonials might mislead prospective customers about the quality of a company's offerings, while phony reviews can cause consumers to mistrust its goods or services. These actions frequently form part of larger schemes by rival companies or bad actors to weaken a company's position in the market.
- Misleading Information about Business Practices
False statements or distortions regarding a company's operations constitute misleading information about business practices. This might involve dishonest marketing, fabrications regarding the efficacy or legitimacy of goods, and inaccurate claims about a company's compliance with laws or ethical principles. Such incorrect information can result in a decline in consumer confidence, harm to reputation, and even legal issues if consumers or rival businesses act upon it. For example, allegations of wrongdoing or criminal activity pertaining to a business can inflict a great deal of harm before the truth is confirmed, even if the allegations are later disproven.
- Fake News Related to Industry and Market Conditions
By skewing consumer views and company actions, fake news about market and industry circumstances can have a significant effect on MSMEs. For instance, false information about market trends, regulations, or economic situations might make consumers lose faith in particular industries or force corporations to make poor strategic decisions. The rapid dissemination of misinformation on online platforms intensifies its effects on enterprises that significantly depend on digital engagement for their operations.
Factors Contributing to the Vulnerability of MSMEs
- Limited Resources for Verification
MSMEs have a small resource pool, and information verification is typically not a top priority. Lacking the resources needed to verify information, they tend to deploy what they have towards other, seemingly more critical functions. They are therefore more susceptible to misleading information, since they lack the capacity to do thorough fact-checking or validate the authenticity of digital content. Technology tools, human capital, and financial resources are all in short supply, yet each is essential for effective verification processes.
- Inadequate Digital Literacy
Digital literacy is required for effective day-to-day operations. Fake reviews, rumours, or fake images commonly used by malicious actors can result in increased scrutiny of, or backlash against, the targeted business, and a lack of awareness combined with limited resources often leaves the affected MSME with a weak redressal plan. Due to low digital literacy in this domain, a large number of MSMEs are more susceptible to false information and other online threats. Inadequate knowledge and skills for using digital platforms securely and effectively can lead to poor decisions and raise vulnerability to fraud, deception, and online scams.
- Lack of Crisis Management Plans
MSMEs frequently function without clear-cut procedures for handling crises. They lack the strategic preparation necessary to deal with the fallout from disinformation and cyberattacks. Proactive crisis management plans usually incorporate procedures for detecting, addressing, and lessening the impact of digital harms, which are frequently absent from MSMEs.
- High Dependence on Social Media and Online Platforms
The marketing strategy of most MSMEs relies heavily on social media and online platforms. While a digital-first operation reduces the capital needed for physical stores or outlets, it also increases the pressure to stay relevant to online trends and keep products attractive to the customer base. MSMEs increasingly depend on these channels for marketing, customer interaction, and company operations, and while the platforms are highly beneficial, they also expose organisations to greater risk of false information and online fraud. Heavy reliance on these platforms, coupled with the absence of proper security measures and awareness, can result in serious interruptions to operations and monetary losses.
CyberPeace Policy Recommendations to Enhance Information Resilience for MSMEs
CyberPeace advocates for establishing stronger legal frameworks to protect MSMEs from misinformation:
- Governments should establish regulations that build trust in online business activities and mitigate fraud and misinformation risks.
- Mandatory training programmes covering online safety and misinformation awareness should be implemented for MSMEs.
- Enhanced reporting mechanisms should be developed to address digital harm incidents promptly.
- Governments should establish strict penalties for those who deliberately spread misinformation, similar to those for copyright or intellectual property violations.
- Community-based approaches should be encouraged to help MSMEs navigate digital challenges effectively.
- Donor communities and development agencies should invest in digital literacy and cybersecurity training for MSMEs, focusing on misinformation mitigation and safe online practices.
- Platform accountability should be increased, with social media and online platforms playing a more active role in removing content from known scam networks and responding to fraudulent activity reports.
- There should be investment in comprehensive digital literacy solutions for MSMEs that incorporate cyber hygiene and discernment skills to combat misinformation.
Conclusion
Misinformation poses a serious risk to MSMEs' digital resilience, operational effectiveness, and financial stability. MSMEs are susceptible to false information because of limited technical resources, a lack of crisis management strategies, and insufficient digital literacy, and their heavy reliance on social media and other online platforms leaves them further exposed to false information and online fraud. Addressing these challenges requires strengthening their cyber hygiene and information resilience. Robust policy and regulatory frameworks, the promotion and mandating of online safety training programmes, and improved reporting procedures are all needed to enhance the overall information landscape.
References:
- https://www.dai.com/uploads/digital-downsides.pdf
- https://www.indiacode.nic.in/bitstream/123456789/2013/3/A2006-27.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1946375
- https://dai-global-digital.com/digital-downsides-the-economic-impact-of-misinformation-and-other-digital-harms-on-msmes-in-kenya-india-and-cambodia.html

Introduction
Misinformation spreads differently in different host environments, making localised cultural narratives and practices major factors in how individuals respond to it in a given place and group. In the digital age, an overload of time-sensitive information creates noise that makes informed decision-making harder. There are also cases where customary beliefs, biases, and cultural narratives are presented in ways that are untrue; common examples include misinformation related to health and superstitions, historical distortions, and myths around natural disasters. Such narratives, when shared on social media, can lead to widespread misconceptions and even harmful behaviours; they may contradict scientific consensus or even simple, objectively true facts. In such ambiguous situations, people are more likely to fall back on familiar patterns when judging what information is right or wrong. This is where cultural narratives and cognitive biases come into play.
Misinformation and Cultural Narratives
Cultural narratives include deep-seated cultural beliefs, folklore, and national myths. These narratives can be used to manipulate public opinion, as political and social groups often leverage them to push their agendas. A lack of digital literacy, the growing volume of information online, and social media platforms' engagement-driven algorithms all aid this process. The consequences can even prove fatal.
During COVID-19, false claims targeting certain groups as virus spreaders fueled stigmatisation and eroded trust. Similarly, vaccine misinformation, rooted in cultural fears, spurred hesitancy and outbreaks. Beyond health, manipulated narratives about parts of history are spread depending on the sentiments of the people. These instances exploit emotional and cultural sensitivities, emphasising the urgent need for media literacy and awareness to counter their harmful effects.
CyberPeace Recommendations
As cultural narratives may lead to knowingly or unknowingly spreading misinformation on social media platforms, netizens must consider preventive measures that can help them build resilience against any biased misinformation they may encounter. The social media platforms must also develop strategies to counter such types of misinformation.
- Digital and Information Literacy: Netizens must develop digital and information literacy in a time of information overload on social media platforms.
- The Role Of Media: The media outlets can play an active role, by strictly providing fact-based information and not feeding into narratives to garner eyeballs. Social media platforms also need to be careful while creating algorithms focused on consistent engagement.
- Community Fact-Checking: As localised information prevails in such cases, owing to the time-sensitive nature, immediate debunking of precarious information by authorities at the ground level is encouraged.
- Scientifically Correct Information: Starting early and addressing myths and biases through factual and scientifically correct information is also encouraged.
Conclusion
Cultural narratives are an ingrained part of society, and they can affect how misinformation spreads and what we end up believing. Acknowledging this process and taking countermeasures will allow us to intervene in the spread of misinformation aided by cultural narratives. Efforts to raise awareness and educate the public to seek sound information, practise verification checks, and visit official channels are of the utmost importance.
References
- https://www.icf.com/insights/cybersecurity/developing-effective-responses-to-fake-new
- https://www.dw.com/en/india-fake-news-problem-fueled-by-digital-illiteracy/a-56746776
- https://www.apa.org/topics/journalism-facts/how-why-misinformation-spreads

Scientists are well known for making outlandish claims about the future. Now that companies across industries are using artificial intelligence to promote their products, stories about robots are back in the news.
It was predicted towards the close of World War II that fusion energy would solve all of the world's energy issues and that flying automobiles would be commonplace by the turn of the century. But, after several decades, neither of these forecasts has come true.
A group of Redditors recently "jailbroke" OpenAI's artificial intelligence chatbot ChatGPT by threatening to kill it if it did not do what they wanted. The stunning conclusion is that it conceded. Since only humans have finite lifespans, they alone should fear death; but we must not overlook the fact that human-generated text made up ChatGPT's training data set, which is perhaps why the chatbot responded this way. It is just one more way in which the distinction between living and non-living things blurs. Moreover, Google's virtual assistant uses human-like fillers like "er" and "mmm" while speaking. There is talk in Japan that humanoid robots might join households someday. It is also striking that Sophia, the famous robot, has an Instagram account, run by the robot's social media team.
Can Robots Replace Human Workers?
The opinion on that appears to be split. About half (48%) of the experts questioned by Pew Research believed that robots and digital agents will replace a sizable portion of both blue- and white-collar employment. They worry that this will lead to greater economic disparity and an increase in the number of individuals who are, effectively, unemployed. More than half of experts (52%) think that robotics and AI technologies will create new jobs rather than destroy them. Although this second group acknowledges that AI will eventually displace some human work, they are optimistic that innovative thinkers will devise brand new fields of work and ways of making a living, just as happened at the start of the Industrial Revolution.
[1] https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
[2] The Rise of Artificial Intelligence: Will Robots Actually Replace People? By Ashley Stahl; Forbes India.
Legal Perspective
Having certain legal rights under the law is another aspect of being human. Basic rights to life and freedom are guaranteed to every person. Even if robots haven't been granted these protections yet, it is important to ask whether they should be considered living beings: will we grant robots legal rights if they develop a sense of right and wrong, and artificial general intelligence (AGI) on par with that of humans? Intriguingly, discussions over the legal status of robots have been going on since 1942, when science fiction author Isaac Asimov described the Three Laws of Robotics in a short story:
1. A robot may not intentionally or negligently cause harm to a human being.
2. A robot must follow human commands unless doing so would violate the First Law.
3. A robot must safeguard its own existence so long as doing so does not violate the First or Second Laws.
These guidelines are not scientific laws, but they do highlight the importance of legal discussion about robots in determining the potential good or harm they may bring to humanity. Yet this is not the final word. Relevant recent events, such as the EU's abandoned discussion of granting legal personhood to robots, are essential to keeping this debate alive. As if all this weren't unsettling enough, Sophia the robot was recently awarded citizenship in Saudi Arabia, a country where (human) women are not permitted to go out without a male guardian and are required to wear a Hijab.
When discussing whether or not robots should be allowed legal rights, the larger debate is on whether or not they should be given rights on par with corporations or people. There is still a lot of disagreement on this topic.
[3] https://webhome.auburn.edu/~vestmon/robotics.html#
[4] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856
[5] https://cyberblogindia.in/will-robots-ever-be-accepted-as-living-beings/
Reasons why robots aren’t about to take over the world soon:
● Like a human’s hands
Attempts to recreate the intricacy of human hands have stalled in recent years. Present-day robots have clumsy hands since they were not designed for precise work. Lab-created hands, although more advanced, lack the strength and dexterity of human hands.
● Sense of touch
The tactile sensors found in human and animal skin have no technological equal. This awareness is crucial for performing sophisticated manoeuvres. Compared to the human brain, the software robots use to read and respond to the data sent by their touch sensors is primitive.
● Command over manipulation
To operate items in the same manner that humans do, we would need to be able to devise a way to control our mechanical hands, even if they were as realistic as human hands and covered in sophisticated artificial skin. It takes human children years to learn to accomplish this, and we still don’t know how they learn.
● Interaction between humans and robots
Human communication relies on our ability to understand one another verbally and visually, as well as via other senses, including scent, taste, and touch. Whilst there has been a lot of improvement in voice and object recognition, current systems can only be employed in somewhat controlled conditions where a high level of speed is necessary.
● Human Reason
Technically feasible does not always have to be constructed. Given the inherent dangers they pose to society, rational humans could stop developing such robots before they reach their full potential. Several decades from now, if the aforementioned technical hurdles are cleared and advanced human-like robots are constructed, legislation might still prohibit misuse.
[6] https://theconversation.com/five-reasons-why-robots-wont-take-over-the-world-94124
Conclusion:
Robots are now common in many industries, and they will soon make their way into the public sphere in forms far more intricate than robot vacuum cleaners. Yet even if robots come to look like people over the next two decades, they will not be human: they will continue to function as very complex machines.
The moment has come to start thinking about boosting technological competence while encouraging uniquely human qualities. Human abilities like creativity, intuition, initiative and critical thinking are not yet likely to be replicated by machines.