Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scamster approaches students by faking the voice of a female or a male caller and asks for students' personal information and photos, claiming that the details are needed for an event being organised by the Army Welfare Education Society to celebrate Independence Day. The Army Welfare Education Society has cautioned parents to beware of these calls from scammers.
Students of Army Schools in Jammu & Kashmir and Noida are receiving calls from the scamster asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students' names on the pretext of adding them to WhatsApp groups. The scamster then sends form links to these WhatsApp groups and asks students to fill out the forms, which seek further sensitive information.
Do’s
- Do make sure to verify the caller.
- Do block the caller if you find the call suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive such calls or messages from someone posing as a teacher.
- Do check the legitimacy of any agency or organisation before sharing details with it.
- Do record calls that ask for personal information.
- Do inform your parents about such scam calls.
- Do cross-check the caller whenever crucial information is asked for.
- Do make others aware of the scam.
Don’ts
- Don’t answer anonymous or unknown calls from anyone.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites you are prompted to open over a call; they may be fake.
- Don’t share bank details or passwords.
- Don’t make payments in response to a prompting fake call.
Related Blogs
Introduction
A Pew Research Center survey conducted in September 2023 among 1,453 U.S. teens aged 13-17 found that a majority of this age group uses TikTok (63%), Snapchat (60%) and Instagram (59%). Further, 13-19 year-olds make up 31% of social media users in India, according to a 2021 report by Statista. This has been a leading cause of young users inadvertently or deliberately accessing adult content on social media platforms.
Brief Analysis of Meta’s Proposed AI Age Classifier
The classifier can be seen as a step towards safer and better-moderated content for teen users. It places age-based restrictions on teen social media users, who often do not yet have the cognitive maturity to judge what content is appropriate for them to share and consume on these platforms. Moreover, teens need an understanding of platform policies, and they need to understand that nothing can be completely erased from the internet.
Unrestricted access to social media exposes teens to potentially harmful or inappropriate online content, raising concerns about their safety and mental well-being. Meta's recent measures aim to address this; however, striking a balance between engagement, protection, and privacy remains essential.
The AI-based Age Classifier proposed by Meta classifies users based on their age and places them in the ‘Teen Account’ category, which has built-in limits on who can contact them and the content they see, along with more ways to connect and explore their interests. According to Meta, teens under 16 years of age will need parental permission to change these settings.
Meta's Proposed Solution: AI-Powered Age Classifier
This tool uses Artificial Intelligence (AI) to analyse users’ online behaviour and other profile information to estimate their age. It analyses factors such as who follows the user, what kind of content they interact with, and even comments such as birthday posts from friends. If the classifier detects that a user is likely under 18 years old, it will automatically switch them to a “Teen Account.” These accounts have more restrictive privacy settings, such as limiting who can message the user and filtering the type of content they can see.
The adult classifier is anticipated to be deployed by next year and will start scanning for users who may have lied about their age. All users found to be under 18 years old will be placed in the teen account category, but 16-17 year olds will be able to adjust these settings if they want more flexibility, while younger teens will need parental permission. The effort is part of a broader strategy to protect teens from potentially harmful content on social media. This is especially important today, as invasions of privacy, particularly of minors, can be penalised under legal instruments such as the GDPR, the DPDP Act, COPPA and many more.
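The age-based account rules described above can be illustrated with a short, purely hypothetical sketch in Python. The function names, setting names and thresholds below are assumptions made for illustration only; Meta has not published its implementation, and the actual classifier estimates age from behavioural signals rather than taking it as an input.

```python
# Illustrative sketch only: hypothetical names and thresholds, not Meta's code.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    teen_account: bool            # placed in the restricted "Teen Account" category
    can_relax_settings: bool      # may loosen restrictions without a parent
    needs_parental_consent: bool  # a parent must approve setting changes

def apply_age_policy(estimated_age: int) -> AccountSettings:
    """Map an age estimate to account restrictions, following the policy
    described in the article."""
    if estimated_age >= 18:
        # Adults keep standard settings.
        return AccountSettings(False, True, False)
    if estimated_age >= 16:
        # 16-17 year olds get a Teen Account but may adjust settings themselves.
        return AccountSettings(True, True, False)
    # Younger teens get a Teen Account and need parental permission for changes.
    return AccountSettings(True, False, True)

if __name__ == "__main__":
    for age in (14, 16, 19):
        print(age, apply_age_policy(age))
```

In practice the age estimate would itself be the output of the behavioural classifier, which is why accuracy, discussed in the next section, matters so much.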
Policy Implications and Compliances
Meta's AI Age Classifier addresses growing concerns over teen safety on social media by categorising users based on age, restricting minors' access to adult content, and enforcing parental controls. However, reliance on behavioural tracking may impact the online privacy of teen users. Hence, Meta's approach needs to be aligned with the laws of each applicable jurisdiction. In India, the recently enacted DPDP Act, 2023 prohibits behavioural tracking of and targeted advertising to children. Accuracy and privacy are the two main concerns that Meta should anticipate when it rolls out the classifier.
Meta emphasises transparency to build user trust, and customisable parental controls empower families to manage teens' online experiences. While this initiative reflects Meta's commitment to creating a safer, regulated digital space for young users worldwide, the company must also align its policies properly with regional policy and legal standards. Meta’s proposed AI Age Classifier aims to protect teens from adult content, reassure parents by allowing them to curate acceptable content, and enhance platform integrity by ensuring a safer environment for teen users on Instagram.
Conclusion
Meta’s AI Age Classifier, while promising to enhance teen safety by placing restrictions and parental controls on accounts categorised as ‘teen accounts’, must also align properly with global regulations such as the GDPR and, in India, the DPDP Act. The tool offers reassurance to parents and aims to foster a safer social media environment for teens. To support accurate age estimation and transparency, policy should focus on refining AI methods to minimise errors and on ensuring clear disclosures about data handling. Collaborative international standards are essential as privacy laws evolve. Meta’s initiative is intended to prioritise youth protection and build public trust in AI-driven moderation across social platforms, but it must also safeguard the online privacy of users while deploying these advanced technical measures.
References
- https://familycenter.meta.com/in/our-products/instagram/
- https://www.indiatoday.in/technology/news/story/instagram-will-now-take-help-of-ai-to-check-if-kids-are-lying-about-their-age-on-app-2628464-2024-11-05
- https://www.bloomberg.com/news/articles/2024-11-04/instagram-plans-to-use-ai-to-catch-teens-lying-about-age
- https://tech.facebook.com/artificial-intelligence/2022/6/adult-classifier/
- https://indianexpress.com/article/technology/artificial-intelligence/too-young-to-use-instagram-metas-ai-classifier-could-help-catch-teens-lying-about-their-age-9658555/

Introduction
The G7 nations, a group of the world's most powerful economies, have recently turned their attention to the critical issue of cybercrime and Artificial Intelligence (AI). The G7 Summit has provided an essential platform for discussing the threats and crimes arising from AI and from gaps in cybersecurity. These nations have united to share their expertise, resources, diplomatic efforts and strategies in the fight against cybercrime. In this blog, we shall investigate the recent developments and initiatives undertaken by the G7 nations, exploring their joint efforts to combat cybercrime and navigate the evolving landscape of artificial intelligence. We shall also explore new and emerging trends in cybersecurity, providing insights into the ongoing challenges and the innovative approaches adopted by the G7 nations and the wider international community.
G7 Nations and AI
Each of these nations has launched cooperative efforts and measures to combat cybercrime. They intend to increase their collective capacity to detect, prevent, and respond to cyber assaults by exchanging intelligence, best practices, and expertise. Through information-sharing platforms, collaborative training programs, and joint exercises, the G7 nations are attempting to develop a strong cybersecurity architecture capable of countering increasingly complex cyber-attacks.
The G7 Summit provided an important forum for in-depth debate on the role of artificial intelligence (AI) in cybersecurity. Recognising AI’s transformational potential, the G7 nations have engaged in extensive discussions to investigate its advantages and address the related concerns, guaranteeing responsible research and use. The nations also recognise the ethical, legal, and security considerations of deploying AI in cybersecurity.
Worldwide Rise of Ransomware
High-profile ransomware attacks have drawn global attention, emphasising the need to combat this expanding threat. These attacks have harmed organisations of all sizes and industries, leading to data breaches, operational outages, and, in some cases, the loss of sensitive information. The implications of such assaults go beyond financial loss, frequently resulting in reputational harm, legal penalties, and service disruptions that affect consumers, clients, and the public. Cybercriminals have adopted a multi-faceted approach to ransomware attacks, combining techniques such as spear-phishing, exploit kits, and supply chain compromises to obtain unauthorised access to networks and spread the ransomware. This degree of expertise and flexibility presents a substantial challenge to organisations attempting to protect themselves against such attacks.

Focusing On AI and Upcoming Threats
During the G7 Summit, one of the key topics of discussion is the role of AI (Artificial Intelligence) in shaping the future. Leaders and policymakers discuss the benefits and dangers of adopting AI in cybersecurity. Recognising AI’s revolutionary capacity, they investigate its potential to improve defence capabilities, predict future threats, and secure vital infrastructure. Furthermore, the G7 countries emphasise the necessity of international collaboration in reaping the advantages of AI while reducing the hazards. They recognise that cyber dangers transcend national borders and must be combated together. Collaboration in areas such as exchanging threat intelligence, developing shared standards, and promoting best practices is emphasised to boost global cybersecurity defences. By emphasising the role of AI in cybersecurity, the G7 summit hopes to set a global agenda that encourages responsible AI research and deployment. The summit’s sessions chart a path for maximising AI’s promise while tackling the problems and dangers connected with its implementation.
As the G7 countries traverse the complicated convergence of AI and cybersecurity, their emphasis on collaboration, responsible practices, and innovation lays the groundwork for international collaboration in confronting growing cyber threats. The G7 countries aspire to establish robust and secure digital environments that defend essential infrastructure, protect individuals’ privacy, and encourage trust in the digital sphere by collaboratively leveraging the potential of AI.
Promoting Responsible AI Development and Usage
The G7 conference will focus on developing frameworks that encourage ethical AI development. This includes fostering openness, accountability, and justice in AI systems. The emphasis is on eliminating biases in data and algorithms and ensuring that AI technologies are inclusive and do not perpetuate or magnify existing societal imbalances.
Furthermore, the G7 nations recognise the necessity of privacy protection in the context of AI. Because AI systems frequently rely on massive volumes of personal data, summit speakers emphasise the importance of stringent data privacy legislation and protections. Discussions centre around finding the correct balance between using data for AI innovation, respecting individuals’ privacy rights, and protecting data security. In addition to responsible development, the G7 meeting emphasises the importance of responsible AI use. Leaders emphasise the importance of transparent and responsible AI governance frameworks, which may include regulatory measures and standards to ensure AI technology’s ethical and legal application. The goal is to defend individuals’ rights, limit the potential exploitation of AI, and retain public trust in AI-driven solutions.
The G7 nations support collaboration among governments, businesses, academia, and civil society to foster responsible AI development and use. They stress the significance of sharing best practices, exchanging information, and developing international standards to promote ethical AI concepts and responsible practices across boundaries. By fostering responsible AI development and usage, the G7 nations hope to shape the global AI environment in a way that prioritises human values, protects individual rights, and develops trust in AI technology. They work together to guarantee that AI is a force for good while reducing risks and resolving social issues related to its implementation.
Challenges on the way
While the G7 countries are committed to combating cybercrime and promoting responsible AI development, they confront several hurdles in their efforts. Some of them are:
- A Rapidly Changing Cyber Threat Environment: Cybercriminals’ strategies and methods are always evolving, as is the nature of cyber threats. The G7 countries must keep up with new threats and ensure their cybersecurity safeguards remain effective and adaptable.
- Cross-Border Coordination: Cybercrime knows no borders, and successful cybersecurity necessitates international collaboration. However, coordinating activities among nations with different legal structures, regulatory environments, and agendas can be difficult. Harmonising rules, exchanging information, and developing confidence across states are crucial for effective collaboration.
- Talent Shortage and Skills Gap: Cybersecurity and AI work requires highly qualified personnel, yet the supply of skilled professionals in these fields falls short of demand. The G7 nations must attract and nurture talent, provide training programs, and support research and innovation to narrow the skills gap.
- Keeping Up with Technological Advancements: Technology changes at a rapid rate, and cyber-attacks become more complex. The G7 nations must ensure that their laws, legislation, and cybersecurity plans stay relevant and adaptive to keep up with emerging technologies such as AI, quantum computing, and IoT, which may both empower and challenge cybersecurity efforts.
Conclusion
To combat cyber threats effectively, support responsible AI development, and establish a robust cybersecurity ecosystem, the G7 nations must constantly analyse and adjust their strategy. By aggressively tackling these concerns, the G7 nations can improve their collective cybersecurity capabilities and defend their citizens’ and global stakeholders’ digital infrastructure and interests.

Introduction
February marks the beginning of Valentine’s Week, the time when we transcend from the season of smog to the season of love. This is a time when young people are more active on social media and dating apps in the hope of finding a partner to celebrate the occasion. Dating apps, in order to capitalise on the occasion, launch special offers and campaigns to attract new users and keep current users engaged with the aspiration of finding their ideal partner. However, with the growing popularity of online dating, the tactics of cybercriminals have also penetrated this sphere. Scammers are becoming increasingly sophisticated in manipulating individuals on digital platforms, often engaging in scams, identity theft, and financial fraud under the guise of romance. As love fills the air, netizens must stay vigilant and cautious while searching for a connection online so as not to fall into a scammer’s trap.
Here Are Some CyberPeace Tips To Avoid Romance Scams
- Recognize Red Flags of Romance Scams:- Online dating has made it easier to connect with people, but it has also become a tool for scammers to exploit the emotions of netizens for financial gain. They create fake profiles, build trust quickly, and then manipulate victims into sending money. Understanding their tactics can help you stay safe.
- Warning Signs of a Romance Scam:- If someone expresses strong feelings too soon, it’s a red flag. Scammers often claim to have fallen in love within days or weeks, despite never meeting in person. They use emotional pressure to create a false sense of connection. Their messages might seem off. Scammers often copy-paste scripted responses, making conversations feel unnatural. Poor grammar, inconsistencies in their stories, or vague answers are warning signs. Asking for money is the biggest red flag. They might have an emergency, a visa issue, or an investment opportunity they want you to help with. No legitimate relationship starts with financial requests.
- Manipulative Tactics Used by Scammers:- Scammers use love bombing to gain trust. They flood you with compliments, calling you their soulmate or destiny, to make you emotionally attached. They often share fake sob stories, ranging from losing a loved one or facing a medical emergency to being stuck in a foreign country. These are designed to make you feel sorry for them and more willing to help. Some scammers even pretend to be wealthy, posing as investors or successful business owners and showing off a fabricated luxury lifestyle to appear credible. Eventually, they’ll try to lure you into a fake investment. They create a sense of urgency: whether it’s sending money, investing, or sharing personal details, scammers will push you to act fast. This prevents you from thinking critically or verifying their claims.
- Financial Frauds Linked to Romance Scams:- Romance scams have often led to financial fraud. Victims may be tricked into sending money directly or get roped into elaborate schemes. One common scam is the disappearing date, where someone insists on dining at an expensive restaurant, only to vanish before the bill arrives. Crypto scams are another major concern. Scammers convince victims to invest in fake cryptocurrency platforms, promising huge returns. Once the money is sent, the scammer disappears, leaving the victim with nothing.
- AI & Deepfake Risks in Online Dating:- Advancements in AI have made scams even more convincing. Scammers use AI-generated photos to create flawless, yet fake, profile pictures. These images often lack natural imperfections, making them hard to spot. Deepfake technology is also being used for video calls. Some scammers use pre-recorded AI-generated videos to fake live interactions. If a person’s expressions don’t match their words or their screen glitches oddly, it could be a deepfake.
- How to Stay Safe:-
- Always verify the identities of those who contact you on these sites. A simple reverse image search can reveal if someone’s profile picture is stolen; a rough programmatic alternative is sketched after this list.
- Avoid clicking suspicious links or downloading unknown apps sent by strangers. These can be used to steal your personal information.
- Trust your instincts. If something feels off, it probably is. Stay alert and protect yourself from online romance scams.
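As a complement to a manual reverse image search, the minimal sketch below shows one way to check whether a profile photo is a near-duplicate of an image you already have (for example, a picture found elsewhere online). It assumes the third-party Pillow and imagehash Python libraries, placeholder file names, and an arbitrary similarity threshold; perceptual hashing is a simpler stand-in for a full reverse image search, not the same thing.

```python
# Illustrative sketch: compare two images with a perceptual hash.
# Assumes `pip install pillow imagehash`; file names are placeholders.
from PIL import Image
import imagehash

def looks_like_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if the two images are perceptually similar.

    Perceptual hashes change little under resizing or re-compression,
    so a small Hamming distance suggests the photos share the same source.
    The threshold of 8 is an arbitrary assumption for this example.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance between hashes

if __name__ == "__main__":
    # Placeholder file names; replace with real image paths.
    print(looks_like_same_photo("profile_photo.jpg", "suspected_source.jpg"))
```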
Best Online Safety Practices
- Prioritize Social Media Privacy:- Review and update your privacy settings regularly. Think before you share and be mindful of who can see your posts/stories. Avoid oversharing personal details.
- Report Suspicious Activities:- Even if a scam attempt doesn’t succeed, report it. The Indian Cyber Crime Coordination Centre's (I4C) 'Report Suspect' feature allows users to flag potential threats, helping prevent cybercrimes.
- Think Before You Click or Download:- Avoid clicking on unknown links or downloading attachments from unverified sources. These can be traps leading to phishing scams or malware attacks.
- Protect Your Personal Information:- Be cautious with whom and how you share your sensitive details online. Cybercriminals exploit even the smallest data points to orchestrate fraud.