#FactCheck: Fake phishing link claims the Modi Government is giving ₹5,000 to all Indian citizens via UPI
Executive Summary:
A viral social media message claims that the Indian government is offering a ₹5,000 gift to citizens in celebration of Prime Minister Narendra Modi’s birthday. However, this claim is false. The message is part of a deceptive scam that tricks users into transferring money via UPI, rather than receiving any benefit. Fact-checkers have confirmed that this is a fraud using misleading graphics and fake links to lure people into authorizing payments to scammers.

Claim:
The post circulating widely on platforms such as WhatsApp and Facebook states that every Indian citizen is eligible to receive ₹5,000 as a gift from the Union Government on the Prime Minister’s birthday. The post includes visuals of PM Modi, BJP party symbols, and UPI app interfaces such as PhonePe or Google Pay, and urges users to click on the BJP election symbol [Lotus] or on the provided link to receive the gift directly into their bank account.


Fact Check:
Our research indicates that there is no official announcement or credible article supporting the claim that the government is offering ₹5,000 under the Pradhan Mantri Jan Dhan Yojana (PMJDY). This claim does not appear on any official government websites or verified scheme listings.

While the message was crafted to appear legitimate, it was in fact misleading. The intent was to deceive users into initiating a UPI payment rather than receiving one, thereby putting them at financial risk.
During our testing, a screen popped up showing a request to pay ₹686 to an unfamiliar UPI ID. When the ‘Pay ₹686’ button was tapped, the app asked for the UPI PIN, clearly indicating that this would have authorised a payment straight from the user’s bank account to the scammer’s.

We advise the public to verify such claims through official sources before taking any action.
Our research indicated that the claim in the viral post is false and part of a fraudulent UPI money scam.

Clicking the link attached to the viral Facebook post took us to a website, https://wh1449479[.]ispot[.]cc/, with the odd domain name 'ispot.cc', which is certainly not a government-related or commonly known domain. On the website, we observed a number of unauthorised visuals, including images of Prime Minister Narendra Modi and Union Minister and BJP President J.P. Nadda, the national emblem, the BJP symbol, and the Pradhan Mantri Jan Dhan Yojana logo. These visuals appeared to be used intentionally to convince users that the website was legitimate.
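The domain check described above can be automated. The following is a minimal sketch, not a definitive filter: the allowlist of official suffixes is our own assumption (common Indian government suffixes), and a real check would also need to handle shorteners and redirects.

```python
from urllib.parse import urlparse

# Suffixes commonly used by official Indian government sites.
# This allowlist is illustrative, not exhaustive.
OFFICIAL_SUFFIXES = (".gov.in", ".nic.in")

def looks_official(url: str) -> bool:
    """Return True only if the URL's host ends with a known official suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(OFFICIAL_SUFFIXES)

# The scam site from the viral post (defanging brackets removed for parsing):
print(looks_official("https://wh1449479.ispot.cc/"))  # False: not a government domain
print(looks_official("https://pmjdy.gov.in/"))        # True: an official domain suffix
```

Note that the check looks at the full hostname suffix, so look-alike hosts such as `gov.in.example.com` do not pass.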
Conclusion:
The assertion that the Indian government is handing out ₹5,000 to all citizens is entirely false and should be reported as a scam. The message exploits the trust associated with government schemes to trick users into sending money through UPI to criminals. We recommend that individuals neither click on links nor respond to any such message about a government gift before verifying it. If you or someone you know has fallen victim to this fraud, report it immediately to your bank and through the National Cyber Crime Reporting Portal (https://cybercrime.gov.in), or contact the cyber helpline at 1930. Always verify messages like this against the official government website first.
- Claim: The Modi Government is distributing ₹5,000 to citizens through UPI apps
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In the era of digitalisation, social media has become an essential part of our lives, with people spending much of their time documenting every moment on these platforms. Social media networks such as WhatsApp, Facebook, and YouTube have emerged as significant sources of information. However, the proliferation of misinformation is alarming, since misinformation can have grave consequences for individuals, organisations, and society as a whole. Misinformation can spread rapidly via social media, reaching and affecting large audiences. Bad actors can exploit platform algorithms for their own agenda, using tactics such as clickbait headlines and emotionally charged language to amplify false information.
Impact
The impact of misinformation on our lives can be devastating, affecting individuals, communities, and society as a whole. False or misleading health information can have serious consequences: belief in unproven remedies or vaccine misinformation can lead to serious illness, disability, or even death. Misinformation about a financial scheme or investment can drive poor financial decisions, potentially leading to bankruptcy and the loss of long-term savings.
In a democratic nation, misinformation plays a significant role in shaping political opinion, and misinformation spread on social media during elections can affect voter behaviour, damage trust, and even cause political instability.
Mitigating strategies
Minimising or stopping the spread of misinformation requires a multi-faceted approach. These strategies include promoting media literacy with critical thinking, verifying information before sharing, holding social media platforms accountable, regulating misinformation, supporting critical research, and fostering healthy means of communication to build a resilient society.
To put an end to the cycle of misinformation and move towards a better future, we must create plans to combat the spread of false information. This will require coordinated action from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour. The widespread circulation of false information on social media presents serious problems for people, groups, and society as a whole, and the deeper we go into the nuances of the problem, the clearer it becomes that battling false information necessitates a thorough, multifaceted strategy.
Encouraging consumers to develop media literacy and critical-thinking skills is essential to preventing the spread of false information. Education equips people to distinguish reliable sources from false information, and giving individuals the skills to assess information critically enables them to make informed choices about the content they share and consume. Initiatives to improve media literacy should be included in school curricula and promoted through public awareness campaigns.
Ways to Stop Misinformation
As we have seen, misinformation can have serious implications. Minimising or stopping its spread requires a multifaceted approach; here are some strategies to combat misinformation.
- Promote Media Literacy with Critical Thinking: Educate individuals on how to critically evaluate information, fact-check, and recognise common tactics used to spread misinformation. Users must apply critical thinking before forming an opinion or perspective and before sharing content.
- Verify Information: Encourage people to verify information before sharing it, especially if it seems sensational or controversial, and to consume news from reputable sources that follow ethical journalistic standards.
- Accountability: Advocate for social media networks' openness and responsibility in the fight against misinformation. Encourage platforms to put in place procedures to detect and delete fraudulent content while boosting credible sources.
- Regulate Misinformation: Given the current situation, it is important to advocate for policies and regulations that address the spread of misinformation while safeguarding freedom of expression. Such policies should promote transparency in online communication by identifying the source of information and disclosing any conflicts of interest.
- Support Critical Research: Invest in research into the sources, impacts, and remedies of misinformation. Support collaborative initiatives by social scientists, psychologists, journalists, and technologists to create evidence-based techniques for countering misinformation.
Conclusion
To prevent the cycle of misinformation and move towards responsible use of the Internet, we must create strategies to combat the spread of false information. This will require coordinated actions from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio using powerful computers and deep learning, and it is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who posted under the name 'deepfakes'.
Deepfakes work on a combination of AI and ML, which makes them hard to detect with Web 2.0 applications; it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake tools. In recent times, we have seen a wave of AI-driven tools that have impacted industries and professions across the globe. Deepfakes are often created to spread misinformation, and herein lies a key difference from image morphing: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine’s President Zelensky surfaced on the internet and caused mass confusion and propaganda-driven misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they are one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so does the possibility of such attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors often use photos and videos from social media accounts to create them, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it keeps Indian netizens from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can help differentiate between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also has variations, as it is imposed on an existing video, so check whether the sound coming from a video is in congruence with the actions or gestures shown.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most deepfakes you can spot irregularities there, as the background is usually created using virtual effects, so deepfakes tend to carry an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, make sure to fact-check any information or post before sharing it with your known ones.
- AI Tools: When in doubt, check it out. Never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not expressly recognised by law in India, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a ₹2 lakh fine or three years in prison. With the DPDP Act coming into effect in 2023, the creation of deepfakes will directly affect an individual's right to digital privacy and will also violate the Intermediary Guidelines under the IT Rules, as platforms are required to exercise caution when misinformation is disseminated and published through deepfakes. Beyond this, the only remedies available are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and acts, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame. Deepfakes must be recognised legally given the growing power of misinformation, and the Data Protection Board and the soon-to-be-established fact-checking body must recognise deepfake-related crimes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence just the tip of the iceberg of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to regulate deepfakes efficiently and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

The Illusion of Digital Serenity
In the age of technology, our email accounts have turned into overcrowded spaces, full of newsletters, special offers, and unwanted updates. To most, the presence of an "unsubscribe" link brings a minor feeling of empowerment, a chance to declutter and restore digital serenity. Yet behind this harmless-seeming tool lurks a developing cybersecurity threat. Recent research and expert discussions indicate that the "unsubscribe" button is being used by cybercriminals to carry out phishing campaigns, confirm active email accounts, and distribute malware. This new threat not only undermines individual users but also has wider implications for trust, behaviour, and governance in cyberspace.
Exploiting User Behaviour
The main challenge is the manipulation of user behaviour. Cyber thieves have learned to analyse typical user habits, most notably the instinctive act of unsubscribing from spam mail. Taking advantage of this, they now embed malicious links in emails that pose as legitimate subscription services. These links may redirect traffic to fake websites that attempt to steal credentials, force the installation of malicious code, or simply register the click as verification that the recipient's email address is active. Once confirmed, these addresses tend to be resold on the dark web or added to further spam lists, elevating the risk of subsequent attacks.
A Social Engineering Trap
This type of cyber deception is a prime example of social engineering, in which the human factor is the weakest link in the security chain. Just as misinformation campaigns take advantage of cognitive biases such as confirmation or familiarity bias, these unsubscribe traps exploit user convenience and habit. The bait is simple, and that is exactly what makes it work: someone attempting to combat spam may unknowingly walk into a sophisticated cyber threat. Unlike phishing messages impersonating banks or government agencies, which tend to elicit suspicion, spoofed unsubscribe links are integrated into routine digital habits, making them harder to recognise and resist.
Professional Disguise, Malicious Intent
Technical analysis shows that most of these messages come from suspicious domains or spoofed versions of valid ones, like "@offers-zomato.ru" in place of the authentic "@zomato.com." The email looks professional, complete with logos and styling copied from reputable businesses. But behind the HTML styling lie redirection code and obfuscated scripts with a very different agenda. At times, users are redirected to sites that mimic login pages or questionnaire forms, capturing sensitive information under the guise of email preference management.
Beyond the Inbox: Broader Consequences
The consequences of this attack go beyond the individual user. The compromise of a personal email account can be used to carry out more extensive spamming campaigns, engage in botnets, or even execute identity theft. Furthermore, the compromised devices may become entry points for ransomware attacks or espionage campaigns, particularly if the individual works within sensitive sectors such as finance, defence, or healthcare. In this context, what appears to be a personal lapse becomes a national security risk. This is why the issue posed by the weaponised unsubscribe button must be considered not just as a cybersecurity risk but also as a policy and public awareness issue.
Platform Responsibility
Platform responsibility is yet another important aspect. Email service providers such as Gmail, Outlook, and ProtonMail do have native unsubscribe capabilities, under the List-Unsubscribe header mechanism. These tools enable users to remove themselves from valid mailing lists safely without engaging with the original email content. Yet many users do not know about these safer options and instead resort to in-body unsubscribe links that are easier to find but risky. To that extent, email platforms need to do more not only to enhance backend security but also to steer user actions through simple interfaces, safety messages, and digital hygiene alerts.
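The List-Unsubscribe mechanism mentioned above is standardised (RFC 2369, with one-click POST semantics added by RFC 8058): the unsubscribe targets live in message headers, so a mail client can offer a safe unsubscribe path without the user touching any link in the message body. A minimal sketch of reading these headers with Python's standard library (the sample message is fabricated for illustration):

```python
import email
import re
from email import policy

# A fabricated sample message, for illustration only.
raw = b"""From: deals@example.com
To: user@example.net
Subject: Weekly offers
List-Unsubscribe: <mailto:unsub@example.com>, <https://example.com/unsub?id=42>
List-Unsubscribe-Post: List-Unsubscribe=One-Click

Hello!
"""

msg = email.message_from_bytes(raw, policy=policy.default)

def unsubscribe_targets(message):
    """Extract the <...> targets from the List-Unsubscribe header (RFC 2369)."""
    header = message.get("List-Unsubscribe", "")
    return re.findall(r"<([^>]+)>", header)

targets = unsubscribe_targets(msg)
# RFC 8058 one-click unsubscribe is signalled by this companion header.
one_click = msg.get("List-Unsubscribe-Post") == "List-Unsubscribe=One-Click"
print(targets)    # ['mailto:unsub@example.com', 'https://example.com/unsub?id=42']
print(one_click)  # True
```

This is exactly the data providers like Gmail use to render their own "Unsubscribe" button, which is why that button is safer than any link embedded in the email body.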
Education as a Defence
Education plays a central role in mitigation. Just as cyber hygiene campaigns have been launched to teach users not to click on suspicious links or download unknown attachments, similar efforts are needed to highlight the risks associated with casual unsubscribing. Cybersecurity literacy must evolve to match changing threat patterns. Rather than only targeting clearly malicious activity, awareness campaigns should start tackling deceptive tactics that disguise themselves as beneficial, including unsubscribe traps or simulated customer support conversations. Partnerships between public and private institutions might be vital in helping with this by leveraging their resources for mass digital education.
Practical Safeguards for Users
Users are advised to always check the sender's domain before clicking any link, avoid unknown promotional emails, and hover over any link to preview its true destination. Rather than clicking "unsubscribe," users can simply mark such emails as spam or junk so that their email providers can automatically filter similar messages in the future. For enhanced security, embracing mechanisms such as mail client sandboxing, two-factor authentication (2FA) support, and alias email addresses for sign-ups can also help create layered defences.
Policy and Regulatory Implications
Policy implications are also significant. Governments and data protection regulators must study the increasing misuse of misleading unsubscribe hyperlinks under electronic communication and consent laws. In India, the new Digital Personal Data Protection Act, 2023 (DPDPA), provides a legislative framework to counter such deceptive practices, especially under the principles of legitimate processing and purpose limitation. The law requires that the processing of data should be transparent and fair, a requirement that malicious emails obviously breach. Regulatory agencies like CERT-In can also release periodic notifications warning users against such trends as part of their charter to encourage secure digital practices.
The Trust Deficit
The vulnerability also relates to broader issues of trust in digital infrastructure. When widely used tools such as an unsubscribe feature become points of exploitation, user trust in digital platforms erodes. Such a trust deficit can lead to generalised distrust of email systems, digital communication, and even legitimate marketing. Restoring and maintaining such trust demands a unified response that includes technical measures, user education, and regulatory action.
Conclusion: Inbox Hygiene with Caution
The "unsubscribe button trap" is a parable of the modern age. It illustrates how mundane digital interactions, when manipulated, can do great damage not only to individual users but also to the larger ecosystem of online security and trust. As cyber-attacks grow increasingly psychologically advanced and behaviorally focused, our response must similarly become more sophisticated, interdisciplinary, and user-driven. Getting your inbox in order should never involve putting yourself in cyber danger. But as things stand, even that basic task requires caution, context, and clear thinking.