#FactCheck: Phishing scam claims Jio is offering a ₹700 Holi reward through a promotional link
Executive Summary:
A viral post circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link to claim the offer. The post has gained significant traction, with many users engaging with it in good faith, believing it to be a legitimate promotion. However, careful investigation confirms that the post is a phishing scam designed to steal personal and financial information from unsuspecting users. This report examines the facts surrounding the viral claim, confirms its fraudulent nature, and provides recommendations to minimize the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, the claim has been found to be misleading. Reliance Jio is not running any Holi promotion at this time: there is no mention of the offer on Jio’s official website or verified social media accounts. The link being forwarded is a phishing attempt designed to steal users’ personal and financial details. The URL in the message does not end in the official Jio domain, indicating a fake website, and the site requests personal information that could then be used for cybercrime. Additionally, we checked the link with the ScamAdviser website, which flagged it as suspicious and unsafe.
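The simplest technical check mentioned above, whether a link's hostname actually belongs to the official domain, can be illustrated with a short Python sketch. The scam URLs below are hypothetical examples for illustration, not the actual phishing link:

```python
# Minimal sketch: accept a link only if its hostname is the official domain
# or a subdomain of it. The scam URLs in the usage example are hypothetical.
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "jio.com"

def is_official_link(url: str) -> bool:
    """Return True only for jio.com or a subdomain of jio.com."""
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

print(is_official_link("https://www.jio.com/offers"))     # True
print(is_official_link("https://jio-holi-gift.example"))  # False: different domain
print(is_official_link("https://jio.com.evil.example"))   # False: lookalike prefix trick
```

Note that scammers often prepend the real brand name to a different registered domain, which is why the check must look at the end of the hostname, not the beginning.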


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) to simulate conversations with users. AI chatbots can thus provide mental health support from the comfort of home, at any time of day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots present technical and ethical challenges as well.
Background
According to the WHO’s World Mental Health Report of 2022, an estimated 1 in 8 people globally lives with some form of mental health disorder. The worldwide need for mental health services is high, but the care ecosystem is inadequate in both availability and quality. In India, there are an estimated 0.75 psychiatrists per 100,000 people, and only about 30% of people with mental health conditions receive help. With the social stigma around mental health slowly thawing, especially among younger demographics, and support services confined largely to urban centres, demand in the telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and AI-induced oversight can exacerbate distress due to some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of the care and treatments they can offer, can jeopardize user health. Platforms providing such services should be required to display disclaimers, in a manner that is easy to understand, about the limitations of the therapeutic relationship between the platform and its users.
- Improved Algorithm Design: Training data for these models must be regularly updated and audited to enhance accuracy, incorporate contextual socio-cultural factors into profile analysis, and use feedback loops from users and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots supplement treatment rather than replace human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required (a minimal illustration follows this list).
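As a purely illustrative example of such an escalation mechanism, the sketch below routes a message to a human counsellor when simple risk cues or an explicit request for a human are detected. The keyword lists and routing labels are hypothetical placeholders; a real system would rely on clinically validated risk classifiers rather than keyword matching:

```python
# Deliberately simplified escalation routing for a mental health chatbot.
# All cue lists and labels are illustrative placeholders only.
RISK_CUES = ("hurt myself", "self-harm", "suicide")
HUMAN_REQUEST_CUES = ("talk to a person", "real human", "counsellor")

def route_message(message: str) -> str:
    text = message.lower()
    if any(cue in text for cue in RISK_CUES):
        # Highest priority: immediate handoff to a crisis professional.
        return "ESCALATE_CRISIS"
    if any(cue in text for cue in HUMAN_REQUEST_CUES):
        # Respect an explicit request for human intervention.
        return "ESCALATE_ON_REQUEST"
    return "CONTINUE_CHATBOT"

print(route_message("Can I talk to a person instead?"))  # ESCALATE_ON_REQUEST
```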
Conclusion
It is important to recognize that, so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition, play an educational role, nudge them in the right direction, and assist both the practitioner and the client/patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing, and arguably already critical, global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full

Introduction
The rapid spread of information in the digital age presents both advantages and difficulties. The terms "misinformation" and "disinformation" are commonly used in discussions of information inaccuracy, and countering these prevalent threats is important, especially in light of how they affect countries like India. It becomes essential to investigate the practical ramifications of misinformation and disinformation alongside other prevalent digital threats. Like many other nations, India had to deal with the fallout from fraudulent online activities in 2023, which highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard (launched in 2023). These chatbots represent a significant breakthrough in artificial intelligence (AI): driven by large language models (LLMs), they produce replies by drawing on information from vast training datasets. Similarly, AI image generators that make use of diffusion models and existing datasets attracted a great deal of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were created for pornographic or entertainment content. The outcomes included social turmoil, political instability, and financial losses. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. The resulting problems included political manipulation, identity theft, disinformation, legal and ethical issues, security risks, difficulties with identification, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to election interference and the intensification of intercommunal conflict.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. With cybercriminals exploiting weaknesses in the AePS, many depositors had their hard-earned savings stolen through fraudulent activity. This demonstrates the real effects of biometric fraud on people whose Aadhaar-linked data has been manipulated to grant unauthorized access. Beyond endangering individual financial stability, the misuse of biometric data in financial systems raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has issued a warning to the country's largest social media platforms, highlighting the importance of exercising caution when identifying and responding to deepfake and false material. The advisory directs intermediaries to remove reported content within 36 hours, disable access in compliance with the IT Rules 2021, and act quickly against content that violates laws and regulations. Union Minister Rajeev Chandrasekhar underscored the government's dedication to the safety of digital citizens and stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules 2021. Online platforms have a legal obligation to prevent the spread of misinformation and to exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 were amended in 2023, requiring the online gaming industry to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from a self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, collecting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and transparent behaviour throughout the online gaming industry.
- With an emphasis on Personal Data Protection, the government enacted the Digital Personal Data Protection Act, 2023. It is a brand-new framework for digital personal data protection which aims to protect the individual's digital personal data.
- The " Cyber Swachhta Kendra " (Botnet Cleaning and Malware Analysis Centre) is a part of the Government of India's Digital India initiative under the (MeitY) to create a secure cyberspace. It uses malware research and botnet identification to tackle cybersecurity. It works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and false content on their networks. YouTube prioritizes eliminating content that violates its policies, reducing recommendations of questionable information, endorsing reliable news sources, and assisting reputable creators. It relies on well-established facts and expert consensus to counter misrepresentation. Enforcement combines content reviewers and machine learning to delete policy-violating material quickly, and policies are designed in partnership with external experts and creators. To improve the overall quality of information users can access, the platform also lets users flag material, places a strong emphasis on media literacy, and prioritizes providing context.
Meta’s policies address different categories of misinformation, aiming for a balance between expression, safety, and authenticity. Content that directly contributes to imminent harm or political interference is removed, with expert partnerships informing assessment. Broader counter-misinformation efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
In 2024, the vision for "Tech for Good" has expanded to include programs that enable people to understand an ever more complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defences and combat dishonest practices. This entails encouraging digital literacy and equipping users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrime. The focus is also on promoting and sharing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to harness technology's positive aspects to build a digital environment that values security, honesty, and ethical behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, AI-powered false information and the misuse of advanced technology by bad actors present real difficulties. Notably, collaborative efforts and progress toward a secure digital environment are ongoing. Governments, social media corporations, civil society, and tech companies showed a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. Establishing a safe online environment is evidently a shared obligation, to be met through the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil society, and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

Introduction
Over the past few years, the virtual space has become an irreplaceable livelihood platform for content creators and influencers, particularly on major social media platforms like YouTube and Instagram. Yet this growth in digital entrepreneurship has been accompanied by a worrying trend: a steep surge in account takeover (ATO) attacks against these creators. In recent years, cybercriminals have stepped up both the quantity and the sophistication of such attacks, hacking into accounts, jeopardising follower bases, and causing economic and reputational damage. They do not take over accounts merely to cause disruption; they use hijacked accounts to run scams such as fake livestreams and cryptocurrency fraud, spreading them while impersonating the original account owner. This type of cybercrime is no longer a nuisance; it now poses a serious threat to the creator economy, digital trust, and the wider social media ecosystem.
Why Are Content Creators Prime Targets?
Content creators hold a special place on the web. They are prominent users whose work depends on visibility, public confidence, and ongoing interaction with their followers. Their social media footprint tends to extend across several interrelated platforms, e.g., YouTube, Instagram, and X (formerly Twitter), with many of these accounts sharing login credentials or being managed from the same email account. This interconnected online presence streamlines their workflow but makes them appealing targets for hackers: one compromised entry point can expose a whole chain of accounts. Once attackers control an account, they can wield its influence and reach to share scams, lead followers to phishing sites, or spread malware, all under the cover of a trusted name.
Popular Tactics Used by Attackers
- Malicious Livestream Takeovers and Rebranding - Cybercriminals hijack high-subscriber channels and rebrand them to mimic official channels. Original videos are hidden or deleted and replaced with scam streams that use deepfake personas to promote crypto schemes.
- Fake Sponsorship Offers - Creators receive emails from supposed sponsors that contain malware-infected attachments or malicious download links, leading to credential theft.
- Malvertising Campaigns - These involve fake ads on social platforms promoting exclusive software like AI tools or unreleased games. Victims download malware that searches for stored login credentials.
- Phishing and Social Engineering on Instagram - Hackers impersonate Meta support teams via DMs and emails. They direct creators to login pages that are cloned versions of Instagram's site. Others pose as fans to request phone numbers and trick victims into revealing password reset codes.
- Timely Exploits and Event Hijacking - During major public or official events, attackers often escalate their activity. Hijacked accounts are used to promote fake giveaways or exclusive live streams, luring users to malicious websites designed to steal personal information or financial data.
Real-World Impact and Case Examples
The reach and potency of account takeover attacks on content creators are far-reaching and profound. A 2024 Bitdefender report observed over 9,000 malicious live streams on YouTube in a single year, many of them broadcast from hijacked creator accounts repurposed to advertise scams and fake content. Perhaps the most high-profile incident involved a channel with more than 28 million subscribers and 12.4 billion total views, which was taken over entirely and used to livestream a crypto fraud scheme. Bitdefender research also indicated that cybercriminals used over 350 scam domains, linked directly from hijacked social media accounts, to entice followers into phishing scams and bogus investment opportunities. Much of this content included AI-created deepfakes impersonating recognisable personalities like Elon Musk and other public figures, lending an illusion of authenticity to fake endorsements (CCN, 2024). Attackers have also exploited popular digital events, such as Counter-Strike 2 (CS2) esports tournaments, by hijacking YouTube gaming channels to livestream fake giveaways or refer viewers to imitation betting sites.
Protective Measures for Creators
- Enable Multi-Factor Authentication (MFA)
MFA adds an essential layer of defence: even if a password is compromised, attackers cannot log in without the second factor. Prefer app-based or hardware-token authentication (a minimal TOTP sketch follows this list).
- Scrutinize Sponsorships
Verify sender domains and avoid opening suspicious attachments. Use sandbox environments to test files. In case of doubt, verify collaboration opportunities through official company sources or verified contacts.
- Monitor Account Activity
Keep tabs on login history, new uploads, and connected apps. Configure alerts for suspicious login attempts or spikes in activity to detect breaches early.
- Educate Your Team
If your account is managed by editors or third parties, train them on common phishing and malware tactics. Employ regular refresher sessions and send mock phishing tests to reinforce awareness.
- Use Purpose-Built Security Tools
Specialised security solutions offer features like account monitoring, scam detection, guided recovery, and protection for team members. These tools can also help identify suspicious activity early and support a quick response to potential threats.
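To make the MFA recommendation above concrete, here is a minimal sketch of app-based verification using time-based one-time passwords (TOTP, RFC 6238) with the third-party pyotp library (`pip install pyotp`). It illustrates the mechanism only, not any specific platform's implementation:

```python
# Minimal TOTP sketch: a stolen password alone is not enough, because the
# attacker also needs the current six-digit code from the authenticator app.
import pyotp

# Generated once at enrolment and shared with the user's authenticator app,
# typically via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())                       # what the app shows
print("Valid code accepted:", totp.verify(totp.now()))   # True
print("Guessed code accepted:", totp.verify("000000"))   # almost certainly False
```

App-based or hardware-token second factors are generally preferred over SMS codes, which are vulnerable to SIM-swapping.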
Conclusion
Account takeover attacks are no longer random events; they are systemic risks that compromise the financial well-being and personal safety of creators worldwide. As cybercriminals grow more sophisticated and their scams more convincing, the only solution is a security-first approach: a mix of technical controls, platform-level collaboration, education, and investment in creator-centric cybersecurity technologies. In today's fast-paced digital landscape, creators must think not only about content but also about defending their digital identity. As digital platforms continue to grow, so do the threats targeting creators. With the right awareness, tools, and safeguards in place, however, a secure and thriving digital environment for creators is entirely achievable.
References
- https://www.bitdefender.com/en-au/blog/hotforsecurity/account-takeover-attacks-on-social-media-a-rising-threat-for-content-creators-and-influencers
- https://www.arkoselabs.com/account-takeover/social-media-account-takeover/
- https://www.imperva.com/learn/application-security/account-takeover-ato/
- https://www.security.org/digital-safety/account-takeover-annual-report/
- https://www.niceactimize.com/glossary/account-takeover/