TRAI issues guidelines to Access Service Providers to prevent misuse of messaging services
Introduction
The Telecom Regulatory Authority of India (TRAI), on 20th August 2024, issued directives requiring Access Service Providers to adhere to specific guidelines to protect consumer interests and prevent fraudulent activities. These steps advance TRAI's efforts to promote a secure messaging ecosystem, safeguard consumers, and eliminate fraudulent conduct.
Key Highlights of the TRAI’s Directives
- For improved monitoring and control, TRAI has directed Access Service Providers to move telemarketing calls beginning with the 140 series to an online Distributed Ledger Technology (DLT) platform by September 30, 2024, at the latest.
- Effective September 1, 2024, Access Service Providers must not transmit messages containing URLs, APKs, OTT links, or callback numbers that have not been whitelisted by the sender.
- In an effort to improve message traceability, TRAI has made it mandatory for all messages, starting on November 1, 2024, to include a traceable trail from sender to receiver. Any message with an undefined or mismatched telemarketer chain will be rejected.
- To discourage the exploitation or misuse of templates for promotional content, TRAI has introduced punitive actions for non-compliance. Content Templates registered in the wrong category will be blacklisted, and repeat offences will result in a one-month suspension of the Sender's services.
- To ensure compliance with the rules, all Headers and Content Templates registered on DLT must follow the prescribed requirements. Furthermore, a single Content Template cannot be linked to multiple headers.
- If any misuse of a sender's headers or content templates is discovered, TRAI has instructed an immediate ‘suspension of traffic’ from all of that sender's headers and content templates pending verification. Such suspension can be revoked only after the Sender has taken legal action against the misuse. Furthermore, Delivery Telemarketers must identify and report the entities responsible for such misuse within two business days, or risk similar consequences.
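To make the whitelisting rule concrete, the following is a minimal, purely illustrative sketch of how a provider-side filter might check an outbound message against a sender's registered whitelist before transmitting it. All names, patterns, and data here are hypothetical; real DLT-platform checks are far more elaborate.

```python
import re

# Hypothetical whitelist a sender has registered on the DLT platform
SENDER_WHITELIST = {
    "https://example-bank.in/offers",
    "+911400000000",
}

URL_PATTERN = re.compile(r"https?://\S+")
CALLBACK_PATTERN = re.compile(r"\+91\d{10}")

def is_message_allowed(text: str, whitelist: set) -> bool:
    """Reject the message if it carries any URL or callback number
    that the sender has not whitelisted."""
    links = URL_PATTERN.findall(text)
    numbers = CALLBACK_PATTERN.findall(text)
    return all(item in whitelist for item in links + numbers)

# A message carrying an unregistered link would be blocked
print(is_message_allowed("Claim prize: https://phish.example/win", SENDER_WHITELIST))  # False
```

In practice a provider would consult the sender's whitelist as registered on the DLT platform rather than a hard-coded set, but the pass/fail logic per the directive is the same: any non-whitelisted URL, APK link, OTT link, or callback number blocks delivery.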
CyberPeace Policy Outlook
TRAI’s measures are aimed at curbing the misuse of messaging services, including spam. TRAI has mandated that headers and content templates follow defined requirements, and it has introduced punitive actions such as blacklisting and service suspension for non-compliance. These measures should help curb the rising rate of scams such as phishing and spamming, ultimately protecting consumers' interests and fostering a genuinely cyber-safe messaging ecosystem.
The official text of the TRAI directives is available on TRAI's official website (linked in the references below).
References
- https://www.trai.gov.in/sites/default/files/Direction_20082024.pdf
- https://www.trai.gov.in/sites/default/files/PR_No.53of2024.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2046872
- https://legal.economictimes.indiatimes.com/news/regulators/trai-issues-directives-to-access-providers-to-curb-misuse-fraud-through-messaging/112669368
Related Blogs
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to the spread of misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related misinformation. Programs that teach people to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading articles, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as "gamification", could offer a fresh and engaging answer to this problem. Gamification engages people by making them active players rather than passive consumers of information. Research shows that interactive learning improves interest, critical thinking, and retention. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and resist false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
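The core loop of a Factitious-style quiz is simple enough to sketch in a few lines: show a headline, take a real/fake guess, and score the player against an answer key. The sketch below is illustrative only; the headlines and answer key are invented placeholder data, not drawn from any of the games above.

```python
# Answer key: (headline, is_real) pairs — placeholder data for illustration
QUESTIONS = [
    ("Drinking hot water cures all viral infections", False),
    ("WHO declares end of COVID-19 global health emergency", True),
]

def score_quiz(guesses):
    """Count how many of the player's real/fake guesses match the key."""
    return sum(
        1 for (_, is_real), guess in zip(QUESTIONS, guesses) if is_real == guess
    )

# A player who spots the fake and trusts the real headline scores 2/2
print(score_quiz([False, True]))  # 2
```

A production game would layer feedback on top of this loop — explaining *why* a headline is fake after each guess — since the inoculation effect comes from that explanation, not the score itself.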
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and across diverse cultural contexts. AI-powered voice conversation and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most. A game with a WhatsApp-like interface, in which players must decide whether to ignore, fact-check, or forward messages that are going viral, could be particularly effective in India.
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks for identifying fake news raise awareness over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives; partnerships with telecom providers could make this easier. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns employ.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
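The "consistently challenged" idea above is, at its simplest, an adaptive difficulty loop. The toy rule below stands in for what an NLP model would do in a real system — it merely nudges the difficulty level up after a correct answer and down after a miss. This is a sketch of the concept, not any particular game's implementation.

```python
def next_difficulty(current: int, answered_correctly: bool) -> int:
    """Step difficulty up on a correct answer and down on a miss,
    clamped to levels 1..5 (a stand-in for an adaptive NLP model)."""
    if answered_correctly:
        return min(5, current + 1)
    return max(1, current - 1)

# A player on level 3 who answers correctly moves to level 4
print(next_difficulty(3, True))  # 4
```

A real adaptive system would also vary *which* misinformation tactics appear at each level (emotional framing at low levels, subtle source manipulation at high ones), but the clamped feedback loop is the skeleton.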
Possible Opportunities
Augmented reality (AR) scavenger hunts, interactive misinformation events, and educational tournaments are all examples of games that can help fight misinformation. By making media literacy fun and engaging, India can help millions, especially young people, think critically and resist the spread of false information. Using Artificial Intelligence (AI) in gamified misinformation interventions could be a promising area of future study: AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, helping learners improve faster.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it comes with problems that must be considered:
- Ethical Concerns: Games that imitate how fake news spreads must ensure players do not inadvertently learn how to spread false information themselves.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviours, keeping cultural and socio-economic contexts in view.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Introduction
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can use “actual knowledge” to initiate a takedown, and they require senior-level scrutiny of those directives. In doing so, they preserve genuine security requirements while guiding India’s content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must first understand that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media platforms, hosting companies, and other digital intermediaries are still required to act within 36 hours of receiving “actual knowledge” that a piece of content is illegal (e.g., poses a threat to public order, sovereignty, decency, or morality). However, “actual knowledge” now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
In matters involving the police, the authorised authority “must not be below the rank of Deputy Inspector General of Police (DIG)”. This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
There are two further structural guardrails. First, the Rules establish a monthly review of all takedown notifications by a Secretary-level officer of the relevant government, testing necessity, proportionality, and compliance with India’s safe harbour framework under Section 79 of the IT Act. Second, so that platforms can act with precision rather than expansively, takedown requests must be accompanied by a legal justification, a description of the unlawful act, and precise URLs or identifiers. The cumulative result of these guardrails is that each removal carries a proportionality check and a paper trail.
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) tied an intermediary’s removal obligation to court orders or notifications from official channels rather than vague notice. Over time, however, that line became blurred through enforcement practice and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even as more recent decisions questioned intermediaries’ “reasonable efforts”, the 2025 amendment institutionally pays homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal, reviewable communications from senior state officials or courts.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, which are criteria that Indian courts are increasingly requesting for speech restrictions. Clearer triggers, better logs, and less vague “please remove” communications that previously left compliance teams in legal limbo are the results for intermediaries.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court’s ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. In September, the HC rejected X’s (formerly Twitter’s) petition contesting the legitimacy of the portal. The company had claimed that by giving nodal officers the authority to issue takedown orders without judicial review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers’ actions were in accordance with Section 79(3)(b) and were “not dropping from the air but emanating from statutes.” By conforming to the Sahyog Portal verdict, the amendment turns compliance into conscience, reiterating that due process is the moral grammar of governance rather than a mere formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination; it is a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India’s path to content governance has been combative and iterative. The next rule-making cycle has been sharpened by the stays, split judgments, and strike-downs that have resulted from strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders. The 2025 amendment reflects the lessons learnt: review triumphs over opacity, specificity over vagueness, and due process over discretion. This is how a digital republic balances freedom and force.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.

Introduction
Search Engine Optimisation (SEO) is a process through which one can improve website visibility on search engine platforms like Google, Microsoft Bing, etc. There is an implicit understanding that SEO suggestions or the links that are generated on top are the more popular information sources and, hence, are deemed to be more trustworthy. This trust, however, is being misused by threat actors through a process called SEO poisoning.
SEO poisoning is a method by which threat actors manipulate search engine rankings so that their chosen link or web page appears at the top of results. The end goal is to lure users into clicking and downloading malware, presented in the garb of legitimate marketing or even as a valid search result.
A live example of SEO poisoning was discussed in a Hindustan Times report of 11th November 2024, which highlighted that searching for certain keywords could make a user more susceptible to hacking. Hackers are now targeting people who enter specific words or combinations into search engines. According to the report, users who looked up the query “Are Bengal cats legal in Australia?” and clicked on the top links had their personal information posted online soon after.
SEO Poisoning - Modus Operandi Of Attack
Attackers use several tactics for SEO poisoning:
- Keyword stuffing- This method involves overloading a webpage with repeated or irrelevant keywords, which helps the malicious website rank higher.
- Typosquatting- This method involves registering domain names or links that closely resemble popular, trusted websites. Without careful scrutiny before clicking, a user may download malware from what they believed was a legitimate site.
- Cloaking- This method serves different content to search engine crawlers and to users. While the search engine sees what appears to be a legitimate website, the user is exposed to harmful content.
- Private Link Networks- Threat actors create a group of interlinked websites to inflate the number of referring links, enabling their pages to rank higher in search results.
- Article Spinning- This method imitates content from pre-existing, legitimate websites with a few minor changes, giving search engine crawlers the impression of original content.
- Sneaky Redirects- This method sends users to malicious websites (without their knowledge) instead of the pages they intended to visit.
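Typosquatting in particular lends itself to automated detection: a domain that is *almost* identical to a trusted one is suspicious precisely because of that near-match. The sketch below illustrates the idea with a classic Levenshtein edit-distance check against a small, hypothetical trust list; real Digital Risk Monitoring tools use far richer signals (homoglyphs, WHOIS data, certificate transparency logs).

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical list of trusted domains to compare against
TRUSTED = ["google.com", "microsoft.com", "github.com"]

def looks_typosquatted(domain: str, threshold: int = 2) -> bool:
    """Flag a domain suspiciously close to, but not equal to, a trusted one."""
    return any(0 < edit_distance(domain, t) <= threshold for t in TRUSTED)

print(looks_typosquatted("g00gle.com"))  # True — two characters away from google.com
```

The threshold is a judgment call: too low misses multi-character swaps, too high flags unrelated domains, which is why production tools combine this distance signal with others.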
CyberPeace Recommendations
- Employee Security Awareness Training: Security awareness training can help employees familiarise themselves with tactics of SEO poisoning, encouraging them to either spot such inconsistencies early on or even alert the security team at the earliest.
- Tool usage: Companies can use Digital Risk Monitoring tools to catch instances of typosquatting. Endpoint Detection and Response (EDR) tools also help track browsing history and user activity during security breaches, making it easier to trace the source of an infected file.
- Internal Security Measures: Refer to lists of Indicators of Compromise (IOCs); IOC feeds include URL lists that flag websites exhibiting suspicious behaviour, which can be used to exercise caution. Deploying Web Application Firewalls (WAFs) to detect and block malicious traffic is also helpful.
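An IOC URL feed is, operationally, just a lookup: match outbound or clicked links against a set of known-bad indicators. The sketch below shows that lookup in its simplest form; the feed entries are hypothetical, and real deployments would pull a regularly updated feed and normalise URLs before matching.

```python
# Hypothetical IOC feed of known-malicious URLs (real feeds are large and updated often)
IOC_URLS = {
    "http://malicious.example/payload.apk",
    "http://phish.example/login",
}

def check_against_iocs(urls, ioc_feed):
    """Return the subset of links that match a known IOC entry."""
    return [u for u in urls if u in ioc_feed]

hits = check_against_iocs(
    ["http://phish.example/login", "https://example.org/docs"], IOC_URLS)
print(hits)  # ['http://phish.example/login']
```

Exact string matching is the weakest form of this check; attackers rotate URLs quickly, so feeds are usually combined with domain- and pattern-level indicators rather than full-URL matches alone.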
Conclusion
The nature of SEO poisoning is such that it inherently promotes the spread of misinformation and facilitates cyberattacks. Deceiving users about the legitimacy of links and the content they display, in order to lure clicks, puts personal information at risk. Because people trust their favoured search engines and awareness of these tactics is limited, users must exercise caution when clicking on links that appear popular, even when they are served by trusted search engines.
References
- https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cyber-attack/what-is-seo-poisoning/
- https://www.vectra.ai/topics/seo-poisoning
- https://www.techtarget.com/whatis/definition/search-poisoning
- https://www.blackberry.com/us/en/solutions/endpoint-security/ransomware-protection/seo-poisoning
- https://www.coalitioninc.com/blog/seo-poisoning-attacks
- https://www.sciencedirect.com/science/article/abs/pii/S0160791X24000186
- https://www.repindia.com/blog/secure-your-organisation-from-seo-poisoning-and-malvertising-threats/
- https://www.hindustantimes.com/technology/typing-these-6-words-on-google-could-make-you-a-target-for-hackers-101731286153415.html