TRAI issues guidelines to Access Service Providers to prevent misuse of messaging services
Introduction
The Telecom Regulatory Authority of India (TRAI) on 20th August 2024 issued directives requiring all Access Service Providers to adhere to specific guidelines designed to protect consumer interests and prevent fraudulent activities. These steps advance TRAI's efforts to promote a secure messaging ecosystem, protecting consumers and curbing fraudulent conduct.
Key Highlights of the TRAI’s Directives
- For improved monitoring and control, TRAI has directed Access Service Providers to move telemarketing calls beginning with the 140 series to an online Distributed Ledger Technology (DLT) platform by September 30, 2024, at the latest.
- From September 1, 2024, all Access Service Providers are prohibited from transmitting messages containing URLs, APKs, OTT links, or callback numbers that the sender has not whitelisted.
- In an effort to improve message traceability, TRAI has made it mandatory for all messages, starting on November 1, 2024, to include a traceable trail from sender to receiver. Any message with an undefined or mismatched telemarketer chain will be rejected.
- To discourage the exploitation or misuse of templates for promotional content, TRAI has introduced punitive action for non-compliance. Content Templates registered under the wrong category will be blacklisted, and repeat offences will result in a one-month suspension of the Sender's services.
- To ensure compliance with the rules, all Headers and Content Templates registered on DLT must follow the prescribed requirements. Furthermore, a single Content Template cannot be linked to multiple headers.
- If any misuse of a sender's headers or content templates is discovered, TRAI has directed an immediate ‘suspension of traffic’ from all of that sender's headers and content templates pending verification. Such a suspension will be revoked only after the Sender has taken legal action against the misuse. Furthermore, Delivery-Telemarketers must identify and report the entities responsible for such misuse within two business days, or risk comparable consequences.
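To illustrate the whitelisting requirement above, a simplified filter might reject any message containing a URL or callback number the sender has not registered. This is only a conceptual sketch: the whitelist entries, regex patterns, and function name below are invented for illustration and do not reflect how operators actually implement the DLT checks.

```python
import re

# Hypothetical whitelist of URLs and callback numbers a sender has
# registered on the DLT platform (illustrative values only).
WHITELIST = {"https://bank.example.com/offers", "1800123456"}

URL_RE = re.compile(r"https?://\S+")          # any http(s) URL in the message
CALLBACK_RE = re.compile(r"\b1800\d{6}\b")    # assumed callback-number format

def screen_message(text: str) -> bool:
    """Return True only if every URL/callback number found is whitelisted."""
    found = URL_RE.findall(text) + CALLBACK_RE.findall(text)
    return all(item in WHITELIST for item in found)
```

A message with an unregistered link, e.g. `screen_message("Click https://phish.example.net now")`, would be rejected, while one containing only whitelisted items passes.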
CyberPeace Policy Outlook
TRAI’s measures are aimed at curbing the misuse of messaging services, including spam. Headers and content templates must follow defined requirements, and punitive actions such as blacklisting and service suspension are prescribed for non-compliance. These measures should help curb phishing, spamming, and other fraudulent activities, ultimately protecting consumers' interests and fostering a safer messaging ecosystem.
The official text of the TRAI directives is available on TRAI's website (see References below).
References
- https://www.trai.gov.in/sites/default/files/Direction_20082024.pdf
- https://www.trai.gov.in/sites/default/files/PR_No.53of2024.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2046872
- https://legal.economictimes.indiatimes.com/news/regulators/trai-issues-directives-to-access-providers-to-curb-misuse-fraud-through-messaging/112669368
Overview:
In today’s digital landscape, safeguarding personal data and communications is more crucial than ever. WhatsApp, as one of the world’s leading messaging platforms, consistently enhances its security features to protect user interactions, offering a seamless and private messaging experience.
App Lock: Secure Access with Biometric Authentication
To fortify security at the device level, WhatsApp offers an app lock feature, enabling users to protect their app with biometric authentication such as fingerprint or Face ID. This feature ensures that only authorized users can access the app, adding an additional layer of protection to private conversations.
How to Enable App Lock:
- Open WhatsApp and navigate to Settings.
- Select Privacy.
- Scroll down and tap App Lock.
- Activate Fingerprint Lock or Face ID and follow the on-screen instructions.

Chat Lock: Restrict Access to Private Conversations
WhatsApp allows users to lock specific chats, moving them to a secured folder that requires biometric authentication or a passcode for access. This feature is ideal for safeguarding sensitive conversations from unauthorized viewing.
How to Lock a Chat:
- Open WhatsApp and select the chat to be locked.
- Tap on the three dots (Android) or More Options (iPhone).
- Select Lock Chat.
- Enable the lock using Fingerprint or Face ID.

Privacy Checkup: Strengthening Security Preferences
The privacy checkup tool assists users in reviewing and customizing essential security settings. It provides guidance on adjusting visibility preferences, call security, and blocked contacts, ensuring a personalized and secure communication experience.
How to Run Privacy Checkup:
- Open WhatsApp and navigate to Settings.
- Tap Privacy.
- Select Privacy Checkup and follow the prompts to adjust settings.

Automatic Blocking of Unknown Accounts and Messages
To combat spam and potential security threats, WhatsApp automatically restricts unknown accounts that send excessive messages. Users can also manually block or report suspicious contacts to further enhance security.
How to Manage Blocking of Unknown Accounts:
- Open WhatsApp and go to Settings.
- Select Privacy.
- Tap Advanced.
- Enable Block unknown account messages.

IP Address Protection in Calls
To prevent tracking and enhance privacy, WhatsApp provides an option to hide IP addresses during calls. When enabled, calls are routed through WhatsApp’s servers, preventing location exposure via direct connections.
How to Enable IP Address Protection in Calls:
- Open WhatsApp and go to Settings.
- Select Privacy, then tap Advanced.
- Enable Protect IP Address in Calls.

Disappearing Messages: Auto-Deleting Conversations
Disappearing messages help maintain confidentiality by automatically deleting sent messages after a predefined period—24 hours, 7 days, or 90 days. This feature is particularly beneficial for reducing digital footprints.
How to Enable Disappearing Messages:
- Open the chat and tap the Chat Name.
- Select Disappearing Messages.
- Choose the preferred duration before messages disappear.

View Once: One-Time Access to Media Files
The ‘View Once’ feature ensures that shared photos and videos can only be viewed a single time before being automatically deleted, reducing the risk of unauthorized storage or redistribution.
How to Send View Once Media:
- Open a chat and tap the attachment icon.
- Choose Camera or Gallery to select media.
- Tap the ‘1’ icon before sending the media file.

Group Privacy Controls: Manage Who Can Add You
WhatsApp provides users with the ability to control group invitations, preventing unwanted additions by unknown individuals. Users can restrict group invitations to ‘Everyone,’ ‘My Contacts,’ or ‘My Contacts Except…’ for enhanced privacy.
How to Adjust Group Privacy Settings:
- Open WhatsApp and go to Settings.
- Select Privacy and tap Groups.
- Choose from the available options: Everyone, My Contacts, or My Contacts Except…

Conclusion
WhatsApp continuously enhances its security features to protect user privacy and ensure safe communication. With tools like App Lock, Chat Lock, Privacy Checkup, IP Address Protection, and Disappearing Messages, users can safeguard their data and interactions. Features like View Once and Group Privacy Controls further enhance confidentiality. By enabling these settings, users can maintain a secure and private messaging experience, effectively reducing risks associated with unauthorized access, tracking, and digital footprints. Stay updated and leverage these features for enhanced security.

Introduction
In April 2025, security researchers at Oligo Security disclosed a substantial, wide-ranging threat affecting Apple's AirPlay protocol and the third-party Software Development Kit (SDK) built around it. According to the research, the newly discovered set of vulnerabilities, titled "AirBorne", could enable remote code execution, privilege escalation, and leakage of private data across a wide range of Apple and third-party AirPlay-compatible devices. With well over 2.35 billion active Apple devices globally and tens of millions of third-party products incorporating the AirPlay SDK, the scope of the problem is enormous. These wireless vulnerabilities pose not only a technical threat but also an enterprise- and consumer-level security concern.
Understanding AirBorne: What’s at Stake?
AirBorne is the title given to a set of 23 vulnerabilities identified in the AirPlay communication protocol and its related SDK utilised by third-party vendors. Seventeen have been given official CVE designations. The most severe among them permit Remote Code Execution (RCE) with zero or limited user interaction. This provides hackers the ability to penetrate home networks, business environments, and even cars with CarPlay technology onboard.
Types of Vulnerabilities Identified
AirBorne vulnerabilities support a range of attack types, including:
- Zero-Click and One-Click RCE
- Access Control List (ACL) bypass
- User interaction bypass
- Local arbitrary file read
- Sensitive data disclosure
- Man-in-the-middle (MITM) attacks
- Denial of Service (DoS)
Each vulnerability can be used individually or chained together to escalate access and broaden the attack surface.
Remote Code Execution (RCE): Key Attack Scenarios
- macOS – Zero-Click RCE (CVE-2025-24252 & CVE-2025-24206): These weaknesses enable attackers to run code on a macOS system without any user action, as long as the AirPlay receiver is enabled and configured to accept connections from anyone on the same network. The threat of wormable malware propagating via corporate or public Wi-Fi networks is especially concerning.
- macOS – One-Click RCE (CVE-2025-24271 & CVE-2025-24137): If AirPlay is set to "Current User," attackers can exploit these CVEs to deploy malicious code with a single click by the user. This raises the level of threat in shared office or home networks.
- AirPlay SDK Devices – Zero-Click RCE (CVE-2025-24132): Third-party speakers and receivers built with the AirPlay SDK are particularly susceptible, as exploitation requires no user intervention. Upon compromise, attackers can play unauthorised media, activate microphones, or monitor private spaces.
- CarPlay Devices – RCE Over Wi-Fi, Bluetooth, or USB (CVE-2025-24132): The same CVE also affects CarPlay-enabled systems. Under certain circumstances, nearby attackers can exploit predictable Wi-Fi credentials, intercept Bluetooth PINs, or use USB connections to take over dashboard features, potentially distracting drivers or eavesdropping on in-car conversations.
Other Exploits Beyond RCE
AirBorne also opens the door for:
- Sensitive Information Disclosure: Exposing private logs or user metadata over local networks (CVE-2025-24270).
- Local Arbitrary File Access: Letting attackers read restricted files on a device (CVE-2025-24270 group).
- DoS Attacks: Exploiting NULL pointer dereferences or misformatted data to crash processes like the AirPlay receiver or WindowServer, forcing user logouts or system instability (CVE-2025-24129, CVE-2025-24177, etc.).
How the Attack Works: A Technical Breakdown
AirPlay communicates over port 7000 using HTTP and RTSP, with payloads typically encoded in Apple's plist (property list) format. The exploits stem from improper handling of these plists, particularly when type checking is skipped or untrusted data is assumed to be valid. For instance, CVE-2025-24129 illustrates how a malformed plist can trigger type confusion, leading to a crash or code execution depending on configuration.
An attacker must be on the same Wi-Fi network as the targeted device, for example via a compromised laptop, public Wi-Fi with shared access, or an insecure corporate network. Once in proximity, the attacker can use the AirBorne bugs to hijack AirPlay-enabled devices. From there, malicious code can be deployed to spy, maintain long-term network access, or spread control to other devices on the network, potentially building a botnet or stealing critical data.
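The root cause described above, trusting attacker-supplied plist data without type checks, can be illustrated with Python's standard plistlib module. The sketch below shows the kind of defensive validation that prevents type-confusion bugs; the expected schema and field names are assumptions for illustration, not Apple's actual code.

```python
import plistlib

def parse_airplay_request(raw: bytes) -> dict:
    """Parse a plist payload and validate types before using any field.

    Type-confusion bugs arise when code assumes a field is, say, a string
    and operates on it unchecked; validating up front avoids that class
    of flaw, and malformed input fails cleanly instead of crashing.
    """
    try:
        data = plistlib.loads(raw)
    except Exception as exc:  # malformed or non-plist bytes
        raise ValueError("malformed plist") from exc
    if not isinstance(data, dict):
        raise ValueError("expected a dictionary at the top level")
    name = data.get("deviceName")  # hypothetical field for illustration
    if not isinstance(name, str):
        raise ValueError("deviceName must be a string")
    return {"deviceName": name}
```

A payload carrying an integer where a string is expected, or arbitrary garbage bytes, is rejected with a clean error rather than reaching code that assumes the wrong type.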
The Espionage Angle
Most third-party AirPlay-compatible devices, including smart speakers, contain built-in microphones, which in theory leaves the door open for such devices to become eavesdropping tools. While Oligo did not demonstrate a working espionage exploit, the possibility underscores the gravity of the situation.
The CarPlay Risk Factor
Besides smart home appliances, Oligo also found AirBorne vulnerabilities affecting Apple CarPlay. If exploited, these could allow attackers to take over a car's entertainment system. Fortunately, such attacks require direct pairing over USB or Bluetooth and are far less practical. Even so, this illustrates how networks of connected components remain at risk in a variety of settings, from homes to automobiles.
How to Protect Yourself and Your Organisation
- Immediate Actions:
- Update Devices: Ensure all Apple devices and third-party gadgets are upgraded to the latest software version.
- Disable AirPlay Receiver: If AirPlay is not in use, disable it in system settings.
- Restrict AirPlay Access: Use firewalls to block port 7000 from untrusted IPs.
- Set AirPlay to “Current User” to limit network-based attacks.
- Organisational Recommendations:
- Communicate the patch urgency to employees and stakeholders.
- Inventory all AirPlay-enabled hardware, including in meeting rooms and vehicles.
- Isolate vulnerable devices on segmented networks until updated.
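As a starting point for the inventory step, administrators could probe hosts for an open TCP port 7000, the AirPlay port mentioned earlier. The sketch below is a minimal Python probe; the host range and timeout are assumptions, a real audit should rely on proper asset-management and scanning tools, and probing should only be done on networks you are authorised to test.

```python
import socket

def airplay_port_open(host: str, port: int = 7000, timeout: float = 0.5) -> bool:
    """Return True if the host accepts TCP connections on the AirPlay port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Hypothetical usage: flag candidate AirPlay receivers on a /24 network.
# hosts = [f"192.168.1.{i}" for i in range(1, 255)]
# flagged = [h for h in hosts if airplay_port_open(h)]
```

Flagged hosts can then be checked against the vendor's patch status and, if necessary, moved to a segmented network.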
Conclusion
The AirBorne vulnerabilities illustrate that even mature systems such as Apple's are not immune from foundational security weaknesses. The extensive deployment of AirPlay across devices, industries, and ecosystems makes these vulnerabilities a systemic threat. Oligo's discovery has served to catalyse immediate response from Apple, but since third-party devices remain vulnerable, responsibility falls to users and organisations to install patches, implement robust configurations, and compartmentalise possible attack surfaces. Effective proactive cybersecurity hygiene, network segmentation, and timely patches are the strongest defences to avoid these kinds of wormable, scalable attacks from becoming large-scale breaches.
References
- https://www.oligo.security/blog/airborne
- https://www.wired.com/story/airborne-airplay-flaws/
- https://thehackernews.com/2025/05/wormable-airplay-flaws-enable-zero.html
- https://www.securityweek.com/airplay-vulnerabilities-expose-apple-devices-to-zero-click-takeover/
- https://www.pcmag.com/news/airborne-flaw-exposes-airplay-devices-to-hacking-how-to-protect-yourself
- https://cyberguy.com/security/hackers-breaking-into-apple-devices-through-airplay/
Introduction
The unprecedented rise of social media, the challenges posed by regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related misinformation. Media literacy and fact-checking programs are essential, but they do not always engage people deeply, because traditional programs rely on passive learning methods such as reading articles, attending lectures, and using fact-checking tools.
"Gamification," the addition of game-like features to non-game settings, could be a new and engaging way to address this challenge. Gamification makes people active players instead of passive consumers of information. Research shows that interactive learning improves interest, critical thinking, and retention. By turning fact-checking into a game, people can learn to recognise fake news in a safe environment before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and resist false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
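A minimal version of a swipe-style quiz like Factitious can be sketched as a simple scoring loop. The headlines and labels below are invented examples for illustration, not content from any of the games above.

```python
# Hypothetical (headline, is_real) pairs making up one quiz round.
QUESTIONS = [
    ("Health ministry releases annual vaccination report", True),
    ("Miracle fruit cures all diseases overnight, doctors stunned", False),
]

def score_round(answers: list[bool]) -> int:
    """Compare a player's real/fake guesses against the labels; +1 per correct guess."""
    return sum(guess == truth for guess, (_, truth) in zip(answers, QUESTIONS))
```

A full game would shuffle questions, track streaks, and explain *why* each headline is real or fake after the player answers, since the feedback step is where the learning happens.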
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through regional languages and diverse cultural contexts. AI-powered voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most. A game with a WhatsApp-like interface, in which players must decide whether to ignore, fact-check, or forward messages that are going viral, could be particularly effective in India.
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories using real-world tools such as reverse image searches and reliable fact-checking websites. Research shows that interactive tasks for spotting fake news increase long-term awareness of it (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile-data incentives, potentially through partnerships with telecom providers. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be integrated into media studies curricula at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tactics that misinformation campaigns rely on.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on the language and region.
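The adaptive element can be sketched as a simple rule that raises difficulty after consecutive correct answers and lowers it after repeated mistakes. The thresholds below are assumptions for illustration; a real AI-driven system would use a much richer player model.

```python
def next_level(level: int, recent_results: list[bool], max_level: int = 5) -> int:
    """Step difficulty up after 3 straight correct answers, down after 2 straight misses.

    `recent_results` holds the player's latest answers, True = correct.
    """
    if len(recent_results) >= 3 and all(recent_results[-3:]):
        return min(level + 1, max_level)
    if len(recent_results) >= 2 and not any(recent_results[-2:]):
        return max(level - 1, 1)
    return level
```

Higher levels would then serve subtler misinformation examples, keeping players in the zone where they are challenged but not overwhelmed.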
Possible Opportunities
Augmented reality (AR) scavenger hunts, interactive events, and educational tournaments centred on spotting misinformation are all game formats that can help fight it. India can help millions, especially young people, think critically and resist false information by making media literacy fun and engaging. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time cases of misinformation and give instant feedback, helping learners improve.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with challenges that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and scaling localised versions for India's varied linguistic and cultural contexts poses significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviours, keeping cultural and socio-economic contexts in view.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.