Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scamster approaches students by faking a female or male voice and asks for their personal information and photos, claiming the details are being collected for an Independence Day event organised by AWES. The Society has cautioned parents to beware of these calls from scammers.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scamster asking them to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students' names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the identity of the caller.
- Do block the caller if you find the call suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details with it.
- Do record calls that ask for personal information.
- Do inform your parents about such scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer anonymous or unknown calls.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites you are directed to over an unsolicited call.
- Don’t share bank details or passwords.
- Don’t make payments in response to instructions received over a suspicious call.

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widely used platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated risk of inappropriate interactions. It is a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known paedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to re-evaluate its algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurface, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough action against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
Monetisation Concerns
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company

The Illusion of Digital Serenity
In the age of technology, our email accounts have turned into overcrowded spaces, full of newsletters, special offers, and unwanted updates. To most, the presence of an "unsubscribe" link brings a minor feeling of empowerment, a chance to declutter and restore digital serenity. Yet behind this harmless-seeming tool lurks a developing cybersecurity threat. Recent research and expert discussions indicate that the "unsubscribe" button is being used by cybercriminals to carry out phishing campaigns, confirm active email accounts, and distribute malware. This new threat not only undermines individual users but also has wider implications for trust, behaviour, and governance in cyberspace.
Exploiting User Behaviour
The main challenge is the manipulation of user behaviour. Cybercriminals have learned to analyse typical user habits, most notably the instinctive act of unsubscribing from spam mail. Taking advantage of this, they now embed malicious links in emails that pose as genuine subscription mailers. These links may redirect traffic to fake websites that attempt to steal credentials, force the installation of malware, or merely count the click as verification that the recipient's email address is valid. Once confirmed, these addresses tend to be resold on the dark web or included in additional spam lists, further elevating the threat of subsequent attacks.
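To make the verification step concrete, here is a minimal sketch (in Python, with an invented domain and address) of how a per-recipient token turns a single click into proof that a mailbox is live:

```python
# Illustrative sketch (all names and values invented): each recipient's
# "unsubscribe" link carries a unique token, so any request for that URL
# maps straight back to one live mailbox.
import hashlib

def tracking_link(address: str) -> str:
    token = hashlib.sha256(address.encode()).hexdigest()[:12]
    return f"https://offers-example.ru/unsubscribe?u={token}"

# The sender keeps a token-to-address map; a single hit on this URL is
# enough to mark the mailbox as active and worth adding to spam lists.
print(tracking_link("student@example.com"))
```

This is why merely loading the unsubscribe page, even without entering any details, already hands the spammer useful information.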
A Social Engineering Trap
This type of cyber deception is a prime example of social engineering, where the weakest link in the security chain ends up being the human factor. Just as misinformation campaigns take advantage of cognitive biases such as confirmation bias or familiarity, these unsubscribe traps exploit user convenience and habit. The bait is simple, and that is exactly what makes it work. Someone attempting to combat spam may unknowingly walk into a sophisticated cyber threat. Unlike phishing messages impersonating banks or government agencies, which tend to elicit suspicion, spoofed unsubscribe links are integrated into regular digital habits, making them more difficult to recognise and resist.
Professional Disguise, Malicious Intent
Technical analysis shows that most of these messages come from suspicious domains or spoofed versions of valid ones, like "@offers-zomato.ru" in place of the authentic "@zomato.com". The email itself looks professional, complete with logos and styling copied from reputable businesses. But behind the HTML styling lie redirection code and obfuscated scripts with a very different agenda. At times, users are redirected to sites that mimic login pages or questionnaire forms, capturing sensitive information under the guise of email preference management.
Beyond the Inbox: Broader Consequences
The consequences of this attack go beyond the individual user. A compromised personal email account can be used to carry out more extensive spamming campaigns, enlist the victim's devices in botnets, or even execute identity theft. Furthermore, compromised devices may become entry points for ransomware attacks or espionage campaigns, particularly if the individual works within sensitive sectors such as finance, defence, or healthcare. In this context, what appears to be a personal lapse becomes a national security risk. This is why the issue posed by the weaponised unsubscribe button must be considered not just as a cybersecurity risk but also as a policy and public awareness issue.
Platform Responsibility
Platform responsibility is yet another important aspect. Email service providers such as Gmail, Outlook, and ProtonMail offer native unsubscribe capabilities built on the List-Unsubscribe header mechanism. These tools enable users to remove themselves from valid mailing lists safely without engaging with the original email content. Yet many users do not know about these safer options and instead resort to in-body unsubscribe links that are easier to find but riskier. To that extent, email platforms need to do more not only to enhance backend security but also to steer user actions through simple interfaces, safety messages, and digital hygiene alerts.
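As a rough illustration of the mechanism, the sketch below (Python standard library only; the saved message file name is a placeholder) shows how a mail client can read the standard headers defined in RFC 2369 and RFC 8058 instead of trusting links embedded in the email body:

```python
# A minimal sketch of header-based unsubscription: read the standard
# List-Unsubscribe headers rather than clicking links in the HTML body.
# "newsletter.eml" is a hypothetical message saved to disk.
from email import policy
from email.parser import BytesParser

with open("newsletter.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# RFC 2369: a list of <mailto:...> and/or <https:...> unsubscribe endpoints.
unsubscribe = msg.get("List-Unsubscribe")
# RFC 8058: signals that the HTTPS endpoint accepts a one-click POST.
one_click = msg.get("List-Unsubscribe-Post")

if unsubscribe:
    print("Header-based unsubscribe endpoints:", unsubscribe)
    if one_click:
        print("One-click unsubscribe supported:", one_click)
else:
    print("No List-Unsubscribe header; treat in-body links with caution.")
```

This is essentially what the built-in unsubscribe buttons in clients like Gmail do on the user's behalf, which is why they are the safer route.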
Education as a Defence
Education plays a central role in mitigation. Just as cyber hygiene campaigns have been launched to teach users not to click on suspicious links or download unknown attachments, similar efforts are needed to highlight the risks associated with casual unsubscribing. Cybersecurity literacy must evolve to match changing threat patterns. Rather than only targeting clearly malicious activity, awareness campaigns should start tackling deceptive tactics that disguise themselves as beneficial, including unsubscribe traps or simulated customer support conversations. Partnerships between public and private institutions might be vital in helping with this by leveraging their resources for mass digital education.
Practical Safeguards for Users
Users are advised to always check the sender's domain before clicking any link, avoid unknown promotional emails, and hover over any link to preview its true destination. Rather than clicking "unsubscribe," users can simply mark such emails as spam or junk so that their email providers can automatically filter similar messages in the future. For enhanced security, embracing mechanisms such as mail client sandboxing, two-factor authentication (2FA) support, and alias email addresses for sign-ups can also help create layered defences.
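The "hover to preview" habit can even be automated. Below is a hedged sketch (Python standard library; the sample anchor is a made-up spoofed link modelled on the earlier Zomato example) that flags links whose visible text advertises one domain while the underlying href points somewhere else:

```python
# Sketch: flag links whose displayed text names one domain while the
# actual href points to another. The sample anchor is invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag currently open
        self.findings = []     # (displayed text, actual domain) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")

    def handle_data(self, data):
        # Only inspect link text that itself looks like a URL or domain.
        if self._href and "." in data:
            shown = data.strip().lower()
            actual = urlparse(self._href).netloc.lower()
            if actual and actual not in shown:
                self.findings.append((shown, actual))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

sample = '<a href="https://offers-zomato.ru/unsub?id=93f2">zomato.com/unsubscribe</a>'
auditor = LinkAuditor()
auditor.feed(sample)
for shown, actual in auditor.findings:
    print(f"Displayed '{shown}' but actually points to '{actual}'")
```

A mismatch is not proof of fraud, since legitimate mailers often route clicks through tracking domains, but it is exactly the signal the hover check is meant to surface.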
Policy and Regulatory Implications
Policy implications are also significant. Governments and data protection regulators must study the increasing misuse of misleading unsubscribe hyperlinks under electronic communication and consent laws. In India, the new Digital Personal Data Protection Act, 2023 (DPDPA), provides a legislative framework to counter such deceptive practices, especially under the principles of legitimate processing and purpose limitation. The law requires that the processing of data should be transparent and fair, a requirement that malicious emails obviously breach. Regulatory agencies like CERT-In can also release periodic notifications warning users against such trends as part of their charter to encourage secure digital practices.
The Trust Deficit
The vulnerability also relates to broader issues of trust in digital infrastructure. When widely used tools such as an unsubscribe feature become points of exploitation, user trust in digital platforms erodes. Such a trust deficit can lead to generalised distrust of email systems, digital communication, and even legitimate marketing. Restoring and maintaining such trust demands a unified response that includes technical measures, user education, and regulatory action.
Conclusion: Inbox Hygiene with Caution
The "unsubscribe button trap" is a parable of the modern age. It illustrates how mundane digital interactions, when manipulated, can do great damage not only to individual users but also to the larger ecosystem of online security and trust. As cyber-attacks grow increasingly psychologically advanced and behaviorally focused, our response must similarly become more sophisticated, interdisciplinary, and user-driven. Getting your inbox in order should never involve putting yourself in cyber danger. But as things stand, even that basic task requires caution, context, and clear thinking.
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. The degree of vulnerability to misinformation differs from person to person, depending on psychological elements such as personality traits, familial background and digital literacy, combined with contextual factors like information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, and research suggests that psychological inoculation campaigns on social media can improve misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, helps people build generalised resilience so that they can analyse and avoid manipulation without prior knowledge of the specific misleading content. We may draw a parallel here with broad-spectrum antibiotics, which can fight infection and protect the body before the particular pathogen at play has been identified.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter the actual false information. Individuals can build resistance and critical thinking abilities by being exposed to typical misinformation methods and strategies.
- Technique/Logic-Based Inoculation: Individuals can educate themselves about typical manipulative strategies used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards being able to reject them. Through logical reasoning, individuals can recognise such tactics for what they are: attempts to distort the facts or spread misleading information. Individuals equipped with the capacity to discern weak arguments and misleading methods can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, resilience-building on social media platforms, and media literacy programmes, can improve the overall efficacy of our attempts to combat misinformation. Expert organisations and individuals can build a stronger defence against the spread of misleading information by deploying multiple measures simultaneously.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of Inoculation Theory on social media platforms can be an effective strategy for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games and through collaborations with influencers and trusted sources who help design and deploy targeted campaigns, whilst also educating netizens about the usefulness of Inoculation Theory so that they can practise critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: interventions on social media platforms can inform improvements to digital tools and algorithms so that their impact is amplified. Additionally, social media platforms can explore personalised inoculation strategies and customised inoculation approaches for different audiences, so as to better serve more people. One elegant solution for social media platforms would be to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This would come in handy especially during sensitive periods such as the ongoing elections, where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, help develop critical thinking, and empower individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritise extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives. Inoculation strategies can help people build mental armour, or mental defences, against malicious content and malintent they may encounter in the future by learning about it in advance. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf