#FactCheck: Fake phishing link claims the Modi Government is giving ₹5,000 to all Indian citizens via UPI
Executive Summary:
A viral social media message claims that the Indian government is offering a ₹5,000 gift to citizens in celebration of Prime Minister Narendra Modi’s birthday. However, this claim is false. The message is part of a deceptive scam that tricks users into transferring money via UPI, rather than receiving any benefit. Fact-checkers have confirmed that this is a fraud using misleading graphics and fake links to lure people into authorizing payments to scammers.

Claim:
The post circulating widely on platforms such as WhatsApp and Facebook states that every Indian citizen is eligible to receive ₹5,000 as a gift from the current Union Government on the Prime Minister’s birthday. The post includes visuals of PM Modi, BJP party symbols, and UPI app interfaces such as PhonePe or Google Pay, and urges users to click on the BJP election symbol [Lotus] or on the provided link to receive the gift directly into their bank account.


Fact Check:
Our research indicates that there is no official announcement or credible article supporting the claim that the government is offering ₹5,000 under the Pradhan Mantri Jan Dhan Yojana (PMJDY). This claim does not appear on any official government websites or verified scheme listings.

While the message was crafted to appear legitimate, it is in fact misleading. The intent is to deceive users into initiating a UPI payment rather than receiving one, thereby putting them at financial risk.

Clicking the link accompanying the viral Facebook post took us to a website, https://wh1449479[.]ispot[.]cc/, whose domain name, 'ispot.cc', is neither a government domain nor a commonly known one. The site displayed a number of unauthorised visuals, including images of Prime Minister Narendra Modi and of Union Minister and BJP President J.P. Nadda, the national emblem, the BJP symbol, and the Pradhan Mantri Jan Dhan Yojana logo, apparently used deliberately to convince users that the website was legitimate.

On interacting with the site, a screen popped up showing a request to pay ₹686 to an unfamiliar UPI ID. When the ‘Pay ₹686’ button was tapped, the app asked for the UPI PIN, clearly indicating that this would have authorised a payment straight from the user’s bank account to the scammer’s.

Our research indicates that the claim in the viral post is false and part of a fraudulent UPI money scam. We advise the public to verify such claims through official sources before taking any action.
Conclusion:
The assertion that the Indian government is handing out ₹5,000 to all citizens is entirely false and should be reported as a scam. The message exploits the trust associated with government schemes to trick users into sending money through UPI to criminals. We recommend that individuals do not click on links or respond to any such message about a government gift before verifying it. If you or someone you know has fallen victim to this fraud, report it immediately to your bank and through the National Cyber Crime Reporting Portal (https://cybercrime.gov.in), or contact the cyber helpline at 1930. Always check messages like this against official government websites first.
- Claim: The Modi Government is distributing ₹5,000 to citizens through UPI apps
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, with or without Artificial Intelligence (AI). AI is one of the great modern ironies: a paradox wrapped in code, where the cure and the curse are written in the same language. The very hand that protects the world from cyber threats can also be used to create those threats. Modern implementations of AI must therefore guard against the risks posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning mechanisms is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programmes rely on static content, which quickly becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, use behavioural analysis instead of merely matching signatures. Nevertheless, AI models are themselves susceptible: malicious actors can introduce adversarial inputs or tainted training data to trick systems into misclassification. According to research from Cisco, data poisoning is a major threat to AI defences.
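To make the behavioural-analysis idea concrete, below is a minimal sketch, assuming scikit-learn and a purely synthetic set of per-connection features (bytes sent, session duration, port entropy). It learns a baseline of normal traffic and flags deviations, rather than matching known attack signatures; it is an illustration of the technique, not any vendor's implementation.

```python
# Minimal sketch: behaviour-based anomaly detection on network traffic,
# in contrast to signature matching. Assumes scikit-learn is installed;
# the features and data below are illustrative, not a production IDS.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: [bytes_sent, duration_s, dest_port_entropy]
normal_traffic = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # typical payload sizes
    rng.normal(2.0, 0.5, 500),       # typical session durations
    rng.normal(0.3, 0.1, 500),       # low port entropy (few services)
])

# Learn a baseline of "normal" behaviour; no attack signatures involved.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A burst of large, short transfers touching many ports (e.g. exfiltration
# or scanning) deviates from the learned baseline.
suspicious = np.array([[90_000, 0.2, 0.95]])
print(model.predict(suspicious))  # -1 => flagged as anomalous
```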
As threats outpace the current understanding of cybersecurity professionals, a need arises to upskill them in advanced AI technologies so that they can fortify the security of current systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A news report from Business Insider puts it aptly: ‘It takes a good-guy AI to fight a bad-guy AI.’ Generative AI is still a young technology; as a result, it both poses fresh security issues and faces risks of its own, such as data exfiltration and prompt injection.
Another method that can prove effective is Natural Language Processing (NLP), which helps machines process unstructured text, enabling automated spam detection, sentiment analysis, and threat-context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
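As a concrete illustration of NLP-based flagging of suspicious email patterns, here is a minimal sketch assuming scikit-learn; the tiny labelled corpus and the TF-IDF-plus-Naive-Bayes model are illustrative assumptions, not a production pipeline.

```python
# Minimal sketch: flagging suspicious email text with NLP.
# Assumes scikit-learn; the tiny corpus below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link now",
    "Claim your government gift of Rs 5000, click the lotus button",
    "Minutes from Monday's project sync are attached",
    "Lunch at noon? The cafeteria has the good dal today",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

# TF-IDF turns text into weighted word frequencies; Naive Bayes classifies them.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["Urgent: confirm your UPI PIN to receive your reward"]))
# expected: [1] => flagged for human review
```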
These AI skills, as noted above, are no mere niceties; they have become essential in the current landscape. India is not far behind in this mission; it is committed, along with its Western counterparts, to employing emerging technologies in its larger goal of advancement. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
As per a news report of March 2025, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) ensure greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. The SIP’s parliamentary brochure states that more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that facilitate the education and training of professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report showcases how companies operating in India, such as Ernst & Young (EY), are recognising both the potential of the Indian workforce and its deficiencies in emerging technologies, and are leading the way through internal upskilling. In response to the increasing need for AI expertise, EY has established an AI Academy, a programme designed to help businesses equip their employees with essential AI capabilities. Drawing on more than 200 real-world AI use cases, the programme offers interactive, structured learning that covers everything from basic concepts to sophisticated generative AI capabilities.
To better understand the need for these initiatives, it is worth referring to a report backed by Google.org and the Asian Development Bank, which suggests that India is at a turning point in the global use of AI. According to the research, “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” India urgently needs to provide accessible and efficient AI upskilling despite having the largest workforce in the world. The report estimates that by 2030, AI could boost the Asia-Pacific region’s GDP by up to USD 3 trillion, and India, with the youngest and fastest-growing population, is key to this potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes. Imagine classrooms where students learn not just to code but to question algorithms, workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and resolute in the collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals, and, by working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both ethically aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5?utm
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information intentionally (disinformation) and unintentionally (misinformation) during the election cycle can not only create grounds for voter confusion with ramifications on election results but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that affects public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages with incorrectly-captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or outright lies. In a recent example during the 2024 US elections, fake videos bearing the Federal Bureau of Investigation’s (FBI) insignia, alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks, were circulated. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media’s coverage of the candidates’ campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, degrade the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process can also erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation spreads are unprecedented. In contrast, content moderation and fact-checking are reactive and more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus spreads faster (virality).
- Geopolitical Influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration, and security machinery. In 2018, a federal grand jury indicted 11 Russian military officials for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby and Attorney General Merrick Garland.
- Lack of a Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is addressed only indirectly, through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws, such as California's Bills AB 730, AB 2655, AB 2839, and AB 2355, target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but constitutional jurisprudence mandates “breathing space” that protects even some false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is being hailed as a game-changer, beneath the shiny exterior there are cracks that raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale language and image generation models in particular rely on data centers powered by electricity, often from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- During the training phase, generative AI has high power consumption: vast amounts of computational power, often extensive GPU clusters running for weeks or at times even months, consume a substantial amount of electricity. After training comes the inference phase, where models are deployed to serve real-time requests; this too can be energy-intensive, especially considering the millions of users of generative AI (a rough energy estimate is sketched after this list).
- The energy used for training and deploying AI models often comes from non-renewable sources, which contributes to the carbon footprint. The data centers where generative AI computations take place are a significant source of carbon emissions if they rely on fossil fuels for their energy needs. According to a study reported by MIT Technology Review, training an AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, compared to 3% in 2022, as their energy demand grows by 160%.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
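To make the training-phase numbers in the first bullet concrete, here is a back-of-the-envelope sketch; the cluster size, per-GPU power draw, training duration, PUE overhead, and grid carbon intensity are all illustrative assumptions, not figures from any specific model.

```python
# Back-of-the-envelope estimate of training energy and emissions.
# All inputs are illustrative assumptions, not measured values.

num_gpus = 1_000            # size of the training cluster
gpu_power_kw = 0.7          # average draw per GPU in kilowatts
training_days = 30          # wall-clock training time
pue = 1.2                   # data-center overhead (cooling, networking)
grid_kg_co2_per_kwh = 0.4   # carbon intensity of the electricity grid

hours = training_days * 24
energy_kwh = num_gpus * gpu_power_kw * hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000

print(f"Energy: {energy_kwh:,.0f} kWh")             # ~604,800 kWh
print(f"Emissions: {emissions_tonnes:,.0f} t CO2")  # ~242 t CO2
```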
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are a few examples of how companies are making efforts to reduce their carbon footprint, reduce energy consumption and overall be more environmentally friendly in the long run. Some of the efforts are as under:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations and shorter periods of peak power consumption.
- Researchers at Microsoft have developed a so-called “1 bit” architecture that can make LLMs 10 times more energy efficient than the current leading systems. It simplifies the models’ calculations by reducing weight values to 0 or 1, slashing power consumption without sacrificing performance (a simplified quantization sketch follows this list).
- OpenAI has been working on optimizing the efficiency of its models and exploring ways to reduce the environmental impact of AI, including using renewable energy as much as possible and researching more efficient training methods and model architectures.
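To illustrate the general idea behind extreme weight quantization mentioned above, here is a simplified sketch: it binarises weights with a per-tensor scale, a common textbook scheme, and is not Microsoft's actual “1 bit” architecture.

```python
# Simplified sketch of extreme (1-bit-style) weight quantization.
# Illustrates the general idea only; not Microsoft's actual method.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(4, 4)).astype(np.float32)

# Replace each weight with its sign (+1/-1) and keep one scale per tensor,
# so matrix multiplies reduce to cheap additions/subtractions plus one scale.
scale = np.mean(np.abs(weights))   # per-tensor scaling factor
quantized = np.sign(weights)       # 1-bit values: -1 or +1
dequantized = quantized * scale    # approximate reconstruction

x = rng.normal(size=4).astype(np.float32)
print("full-precision:", weights @ x)
print("1-bit approx:  ", dequantized @ x)
# Storage drops from 32 bits to ~1 bit per weight (~32x smaller),
# cutting the memory traffic that dominates inference energy.
```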
Policy Recommendations
We advocate for sustainable product development processes and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only be better for the environment but also contribute to the greater and more sustainable development of generative AI. Some suggestions are as follows:
- AI development needs to adopt a climate justice framework, informed by diverse contexts and perspectives, working in tandem with the UN’s Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialized AI accelerators and next-generation GPUs, can further mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. The urgency extends to hardware lifecycles as well: the World Economic Forum (WEF) projects that by 2050, the total amount of e-waste generated will have surpassed 120 million metric tonnes.
- Employing techniques like model compression, which reduce the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy (a pruning sketch appears after this list).
- Implementing federated learning approaches, where models are trained across decentralized devices rather than in centralized data centers, can distribute the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
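As a sketch of the model-compression recommendation above, the following shows magnitude pruning in NumPy; the toy weight matrix and 50% sparsity target are illustrative assumptions, and real pipelines typically fine-tune the model after pruning.

```python
# Minimal sketch of magnitude pruning, one common model-compression technique.
# The toy matrix and 50% sparsity target are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=(8, 8)).astype(np.float32)

sparsity = 0.5  # fraction of weights to remove
threshold = np.quantile(np.abs(weights), sparsity)

# Zero out the smallest-magnitude weights; sparse kernels can then skip them,
# reducing both compute and the memory traffic that drives energy use.
mask = np.abs(weights) >= threshold
pruned = weights * mask

print(f"nonzero before: {weights.size}, after: {int(mask.sum())}")
```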
Final Words
The UN Sustainable Development Goals (SDGs) are as crucial for the AI industry as for any other, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges, such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. As it stands, AI consumes enormous amounts of power without using that power efficiently. If this continues, AI and its derivatives will keep stressing the environment, straining clean water resources and relying on non-renewable power generation, which contributes to the huge carbon footprint of the AI industry as a whole.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/