Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scammers approach students by faking both female and male voices and ask for students' personal information and photographs, claiming the details are being collected for an Independence Day event organised by AWES. The Society has cautioned parents to beware of such calls from scammers.
Students of Army Schools in Jammu & Kashmir and Noida have been receiving calls from the scammers asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scammers pose as teachers and ask for students' names on the pretext of adding them to WhatsApp groups; they then send form links to those groups and ask students to fill them out, seeking further sensitive information.
Do’s
- Do verify the identity of the caller.
- Do block the caller if the call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing any details.
- Do record calls that ask for personal information.
- Do inform parents about such scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer anonymous or unknown calls.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you have verified the caller.
- Don’t reply to messages asking for financial information.
- Don’t visit any website at the prompting of a caller.
- Don’t share bank details or passwords.
- Don’t make payments in response to such fake calls.

Introduction
Social media is the new platform for free speech and for expressing one’s opinions. The latest news breaks on social media, and political parties often use it to campaign during elections. The hashtag (#) is the new weapon: a powerful hashtag goes a long way in making an impact on society, even at a global level. Various hashtags have gained popularity in recent years, such as #blacklivesmatter, #metoo, #pride, and #cybersecurity, which were influential in spreading awareness of social issues and in breaking down taboos across many cultures. Social media is strengthened by influencers, well-known personalities with massive followings who regularly create content that users consume and share with their friends. Social media is all about the message and the speed at which it spreads, which is why misinformation and disinformation are widespread on nearly all platforms; influencers therefore play a key role in ensuring that the content they put out complies with community and privacy guidelines.
The Know-How
The Department of Consumer Affairs, under the Ministry of Consumer Affairs, Food and Public Distribution, released a guide, ‘Endorsements Know-hows!’, for celebrities, influencers, and virtual influencers on social media platforms. The guide aims to ensure that individuals do not mislead their audiences when endorsing products or services and that they comply with the Consumer Protection Act and any associated rules or guidelines. Advertisements are no longer limited to traditional media like print, television, or radio; with the increasing reach of digital platforms and social media such as Facebook, Twitter, and Instagram, the influence of virtual influencers, celebrities, and social media influencers has risen. This has increased the risk of consumers being misled by advertisements and unfair trade practices on these platforms. Endorsements must be made in simple, clear language, and terms such as “advertisement,” “sponsored,” or “paid promotion” can be used. Endorsers should not promote any product or service on which they have not done due diligence or that they have not personally used or experienced. The Consumer Protection Act establishes safeguards against unfair trade practices and misleading advertisements. The Department of Consumer Affairs published the Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022, on 9th June 2022. These guidelines outline the criteria for valid advertisements and the responsibilities of manufacturers, service providers, advertisers, and advertising agencies, and they also cover celebrities and endorsers. They state that misleading advertisements in any form, format, or medium are prohibited by law.
The guidelines apply to social media influencers as well as virtual avatars promoting products and services online. Disclosures should be easy to notice, placed in post descriptions where hashtags or links usually appear, and prominent enough to stand out within the content itself.
Changes Expected
The new guidelines will bring uniformity to social media content with respect to privacy and the opinions of different people. The primary issue being addressed is misinformation, which was at its peak during the Covid-19 pandemic and affected millions of people worldwide. Digital literacy and digital etiquette are a fundamental part of social media ethics, and social media influencers and celebrities can go a long way in spreading awareness about them among ordinary users. The growing threat of cybercrime and other forms of exploitation in cyberspace can be curbed through effective awareness and education among the youth and vulnerable populations, which influencers are well placed to provide; it is therefore time for influencers to recognise their responsibility in leading the masses online and creating a healthy, secure cyber ecosystem. Failing to follow the guidelines will make social media influencers liable for a fine of up to Rs 10 lakh; for repeat offenders, the penalty can go up to Rs 50 lakh.
Conclusion
The size of the social media influencer market in India in 2022 was $157 million, and it could reach as much as $345 million by 2025. The Advertising Standards Council of India (ASCI), the Indian advertising industry’s self-regulatory body, has shared that influencer violations comprise almost 30% of the ads it takes up, so legal backing for disclosure requirements is a welcome step. The Ministry of Consumer Affairs has been in touch with ASCI to review various global guidelines on influencers. The social media guidelines from California and San Francisco share the same basis, and guidelines inspired by different jurisdictions will allow both users and influencers to understand the global perspective and work towards securing the bigger picture. Cyberspace has no geographical boundaries or limitations; hence, now is the time to think beyond conventional borders and start contributing towards securing and safeguarding global cyberspace.

Introduction
The recent inauguration of the Google Safety Engineering Centre (GSEC) in Hyderabad on 18th June, 2025, marks a pivotal moment not just for India, but for the entire Asia-Pacific region’s digital future. As only the fourth such centre in the world after Munich, Dublin, and Málaga, its presence signals a shift in how AI safety, cybersecurity, and digital trust are being decentralised, leading to a more globalised and inclusive tech ecosystem. India’s digitisation over the years has grown at a rapid scale, introducing millions of first-time internet users, who, depending on their awareness, are susceptible to online scams, phishing, deepfakes, and AI-driven fraud. The establishment of GSEC is not just about launching a facility but a step towards addressing AI readiness, user protection, and ecosystem resilience.
Building a Safer Digital Future in the Global South
The GSEC is set to operationalise the Google Safety Charter, designed around three core pillars: empowering users by protecting them from online fraud, strengthening cybersecurity for government and enterprise, and advancing responsible AI in platform design and execution. This represents a shift from standard reactive safety responses to proactive, AI-driven risk mitigation. The goal is to make safety tools not only effective but also tailored to threats unique to the Global South, from multilingual phishing to financial fraud via unofficial lending apps. The centre is also expected to stimulate regional cybersecurity ecosystems by creating jobs, fostering public-private partnerships, and enabling collaboration across academia, law enforcement, civil society, and startups. In doing so, it positions Asia-Pacific not as a consumer of standard Western safety solutions but as an active contributor to the next generation of digital safeguards and customised solutions.
Solutions previously piloted by Google include DigiKavach, a real-time fraud detection framework, as well as spam protection in mobile operating systems and app-vetting mechanisms. GSEC can aid in scaling and integrating these efforts into systems-level responses, where threat detection, safety warnings, and reporting mechanisms ensure seamless coordination across platforms. This reimagines safety as a core design principle in India’s digital public infrastructure rather than a reactive, attack-based response.
CyberPeace Insights
The launch aligns with events such as the AI Readiness Methodology Conference recently held in New Delhi, which brought together researchers, policymakers, and industry leaders to discuss ethical, secure, and inclusive AI implementation. As the world grapples with AI technologies ranging from generative content to algorithmic decision-making, centres like GSEC can play a critical role in defining the safeguards and governance structures that support rapid innovation without compromising public trust and safety. The region’s experiences and innovations in AI governance must shape global norms, and tech firms have a significant role to play in that. Such efforts to build digital infrastructure, along with safety centres dedicated to protecting it, also resonate with India’s vision of becoming a global leader in AI.
References
- https://www.thehindu.com/news/cities/Hyderabad/google-safety-engineering-centre-india-inaugurated-in-hyderabad/article69708279.ece
- https://www.businesstoday.in/technology/news/story/google-launches-safety-charter-to-secure-indias-ai-future-flags-online-fraud-and-cyber-threats-480718-2025-06-17?utm_source=recengine&utm_medium=web&referral=yes&utm_content=footerstrip-1&t_source=recengine&t_medium=web&t_content=footerstrip-1&t_psl=False
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://blog.google/intl/en-in/company-news/googles-safety-charter-for-indias-ai-led-transformation/
- https://economictimes.indiatimes.com/magazines/panache/google-rolls-out-hyderabad-hub-for-online-safety-launches-first-indian-google-safety-engineering-centre/articleshow/121928037.cms?from=mdr

Executive Summary:
An image circulating on social media claims to show a truck carrying money and gold coins that was impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing of CyberPeace verified the image and found it to be generated using artificial intelligence. There are no credible news articles supporting the claim that the police made such a seizure in Jharkhand. The image was checked using AI image detection tools and was confirmed to be AI-generated. It is advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the post, we started with a keyword search to find any relevant news articles related to it. If such a major incident had really happened, most media houses would have covered it, yet we found no such articles. We then closely analysed the image for anomalies commonly found in AI-generated images, and found several.

The texture of the tree in the image appears blended, and the shadows of the people look odd, a common flaw in AI-generated images. A close look at the right hand of the old man in white attire shows that his thumb is blended into his clothing.
We then analysed the image with an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To validate the finding, we checked the image with another AI image detection tool named ‘ContentAtScale AI detection’, which rated the image as 82% AI-generated.

After validating the viral image with AI detection tools, it is apparent that the claim is misleading and fake.
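For teams that want to repeat such checks at scale, the short sketch below illustrates how an image could be submitted to an AI-detection service from a script. The endpoint URL, API key, and response field shown are hypothetical placeholders, not the actual interfaces of Hive Detector or ContentAtScale, which were used through their web tools for this fact check.

```python
# Minimal sketch of automating an AI-image-detection check.
# NOTE: the endpoint, API key, and response field below are hypothetical
# placeholders; the tools used in this fact check (Hive Detector and
# ContentAtScale) were accessed via their web interfaces, and their real
# APIs may differ.
import requests

DETECTOR_URL = "https://api.example-detector.com/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential


def check_image(path: str) -> float:
    """Upload an image and return the reported probability that it is AI-generated."""
    with open(path, "rb") as image_file:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.82}
    return response.json()["ai_generated_probability"]


if __name__ == "__main__":
    score = check_image("viral_truck_image.jpg")
    print(f"Estimated probability of AI generation: {score:.0%}")
```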
Conclusion:
The viral image of a truck impounded by the Jharkhand Police is found to be AI-generated, and no credible source supports the claim made. Hence, the claim is false and misleading. The Research Wing of CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify such claims circulating on social media before sharing them further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (formerly known as Twitter)
- Fact Check: Fake & Misleading