Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scamster approaches students by faking a female or male voice and asks for their personal information and photos, claiming to be collecting details for an Independence Day event organised by AWES. The Society has cautioned parents to beware of these scam calls.
Students of Army Schools in Jammu & Kashmir and Noida have been receiving calls from the scamster asking them to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking more sensitive information.
Do’s
- Do verify the caller’s identity.
- Do block the caller if the call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities when you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform your parents about scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from anonymous or unknown numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites you are prompted to open during a call.
- Don’t share bank details or passwords.
- Don’t make payments in response to a prompt on a fake call.

Introduction
In a setback to the Centre, the Bombay High Court on Friday, 20th September 2024, struck down the provision under the IT Amendment Rules, 2023 that empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Chronological Overview
- On 6th April 2023, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules, 2023). These rules introduced new provisions to establish a fact-checking unit with respect to “any business of the Central Government”. The amendment was made in exercise of the powers conferred by Section 87 of the Information Technology Act, 2000 (IT Act).
- On 20 March 2024, the Central Government notified the Press Information Bureau (PIB) as FCU under rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023 (IT Amendment Rules 2023).
- The next day, on 21st March 2024, the Supreme Court stayed the Centre's decision notifying the PIB as the FCU, considering the pendency of the proceedings before the High Court of Judicature at Bombay. A detailed analysis by CyberPeace of the Supreme Court's stay decision can be accessed here.
- In the latest development, the Bombay High Court on 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Brief Overview of Bombay High Court decision dated 20th September 2024
Justice AS Chandurkar was appointed as the third judge after a split verdict in January 2024 by a division bench consisting of Justices Gautam Patel and Neela Gokhale. As the tie-breaker judge, Justice Chandurkar delivered the decision striking down the provision for setting up a Fact Check Unit under the IT Amendment Rules, 2023. Striking down the Centre's proposed fact check unit provision, Justice Chandurkar opined that there was no rationale for determining whether information related to the business of the Central Government was fake, false or misleading when in digital form while not doing the same when such information was in print. It was also contended that there is no justification for introducing an FCU only in relation to the business of the Central Government. Rule 3(1)(b)(v) has a serious chilling effect on the exercise of the freedom of speech and expression under Article 19(1)(a) of the Constitution, since the communication of the FCU's view will result in the intermediary simply pulling down the content for fear of consequences or of losing the safe harbour protection under the IT Act.
Justice Chandurkar held that the expressions ‘fake, false or misleading’ are ‘vague and overbroad’ and that the ‘test of proportionality’ is not satisfied. Rule 3(1)(b)(v) was held violative of Articles 14, 19(1)(a) and 19(1)(g) of the Constitution and “ultra vires”, or beyond the powers of, the IT Act.
Role of Expert Organisations in Curbing Mis/Disinformation and Fake News
In light of the recent developments and the rising incidents of mis/disinformation and fake news, it becomes significantly important that we all stand together in the fight against these challenges. Actions against mis/disinformation and fake news should be strengthened by collective efforts. Expert organisations like the CyberPeace Foundation play a key role in enabling and encouraging netizens to exercise caution and rely on authenticated sources, rather than relying solely on a government FCU to block content.
Mis/disinformation and fake news should be stopped, identified and countered by netizens at the very first stage of their spread. The Bombay High Court's decision to strike down the provision for setting up an FCU by the Central Government entails that the government's intention to address misinformation related solely to its business/operations may not have been effectively justified in the eyes of the judiciary.
It is high time to make collective efforts against mis/disinformation and fake news and to support expert organisations that are actively engaged in proactive measures and campaigns to address these challenges, specifically in the online information landscape. CyberPeace actively publishes fact-checking reports and insights on prebunking and debunking, conducts expert sessions, and takes various key steps aimed at empowering netizens to build cognitive defences to recognise susceptible information, disregard misleading claims and prevent further spread, helping to ensure a trustworthy online information landscape.
References:
- https://www.scconline.com/blog/post/2024/09/20/bombay-high-court-it-rules-amendment-2023-fact-check-units-article14-article19-legal-news/#:~:text=Bombay%20High%20Court%3A%20A%20case,grounds%20that%20it%20violated%20constitutional
- https://indianexpress.com/article/cities/mumbai/bombay-hc-strikes-down-it-act-amendment-fact-check-unit-9579044/
- https://www.cyberpeace.org/resources/blogs/supreme-court-stay-on-centres-notification-of-pibs-fact-check-unit-under-it-amendment-rules-2023

Introduction
Public infrastructure has traditionally served as the framework for civilisation, transporting people, money, and ideas across time and space, from the iron veins of transcontinental railroads to the unseen arteries of the internet. In democracies where free markets and public infrastructure co-exist, this framework has not only facilitated but also accelerated progress. Digital Public Infrastructure (DPI), which powers inclusiveness, fosters innovation, and changes citizens from passive recipients to active participants in the digital age, is emerging as the new civic backbone as we move away from highways and towards high-speed data.
DPI makes it possible for innovation at the margins and for inclusion at scale by providing open-source, interoperable platforms for identities, payments, and data exchange. Examples of how the Global South is evolving from a passive consumer of technology to a creator of globally replicable governance models are India’s Aadhaar (digital identification), UPI (real-time payments), and DigiLocker (data empowerment). As the ‘digital commons’ emerges, DPI does more than simply link users; it also empowers citizens, eliminates inefficiencies from the past, and reimagines the creation and distribution of public value in the digital era.
Securing the Digital Infrastructure: A Contemporary Imperative
We already inhabit the future, and we stand at the threshold of reform. Digital infrastructure is no longer just a public good; it is now a strategic asset, akin to oil pipelines in the 20th century. India is recognised globally for the introduction of “India Stack”, which has transformed the face of digital payments. The economic value contributed by DPIs to India's GDP is predicted to reach 2.9-4.2 percent by 2030, having already reached 0.9% in 2022. Part of this success lies in DPI's role in India's economic development; among emerging market economies, it helped propel India to the top of the revenue administrations' digitalisation index. The other part has to do with how India's social service delivery has changed across the board. By enabling digital and financial inclusion, DPI has increased access to education (DIKSHA) and is presently being extended to offer agricultural (VISTAAR) and digital health (ABDM) services.
Securing the Foundations: Emerging Threats to Digital Public Infrastructure
The rising prominence of DPI is not without its risks, as adversarial forces are developing with comparable sophistication. The core underpinnings of public digital systems are the target of a new generation of cyber threats, ranging from hostile state actors to cybercriminal syndicates, and these threats pose a great risk to the government's development endeavours. Targeted attacks on biometric databases, AI-based misinformation and psychological warfare, payment system hacks, state-sponsored malware, cross-border phishing campaigns, surveillance spyware and sovereign malware are modern-day examples of such cyber threats.
Securing DPI requires a radical rethink that goes beyond encryption methods and perimeter firewalls: an understanding of cybersecurity that is systemic, ethical, and geopolitical. Threats to DPI put democracy, inclusivity, and national integrity at risk. To preserve the confidence and promise of digital public infrastructure, policy frameworks must change from fragmented responses to coordinated, proactive and people-centred cyber defence policies.
CyberPeace Recommendations
Powering Progress, Ignoring Protection: A Precarious Path
The Indian government is aware that cyberattacks are becoming more frequent and sophisticated in the nation. To address the nation’s cybersecurity issues, the government has implemented a number of legislative, technical, and administrative policy initiatives. While the initiatives are commendable, there are a few Non-Negotiables that need to be in place for effective protection:
- DPIs must be declared Critical Information Infrastructure. In accordance with the IT Act, 2000, DPIs (Aadhaar, UPI, DigiLocker, Account Aggregator, CoWIN, and ONDC) must be designated as Critical Information Infrastructure (CII) and supervised by the NCIIPC, just like the banking, energy, and telecom industries. The NCIIPC should be given the authority to publish required security guidelines, carry out audits, and enforce adherence across the DPI stack, including incident response protocols tailored to each DPI.
- To solidify security, data sovereignty, and cyber responsibility, India should spearhead global efforts to create a Global DPI Cyber Compact through the “One Future Alliance” and the G20. To ensure interoperable cybersecurity frameworks for international DPI projects, promote open standards, cross-border collaboration on threat intelligence, and uniform incident reporting guidelines.
- Establish a DPI Threat Index to monitor vulnerabilities, including phishing attacks, efforts at biometric breaches, sovereign malware footprints, spikes in AI misinformation, and patterns in payment fraud. Create daily or weekly risk dashboards by integrating data from state CERTs, RBI, UIDAI, CERT-In, and NPCI. Use machine learning (ML) driven detection systems.
- Make explainability audits mandatory for AI/ML systems used throughout DPI (e.g., welfare algorithms, credit scoring) to ensure that the decision-making process is open, impartial, and subject to scrutiny. Use the recently established IndiaAI Safety Institute, in line with India's AI mission, to conduct AI audits, establish explainability standards, and create sector-specific compliance guidelines.
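To illustrate the kind of aggregation a DPI Threat Index could perform, here is a minimal sketch in Python. The signal names, weights, and sample values are assumptions for illustration only, not an official methodology or real data from CERT-In, NPCI, or UIDAI.

```python
# Illustrative composite "DPI Threat Index": a weighted sum of normalised
# threat signals. Weights and signal names are hypothetical.
SIGNALS = {
    "phishing_reports": 0.25,
    "biometric_breach_attempts": 0.30,
    "payment_fraud_patterns": 0.25,
    "ai_misinfo_spikes": 0.20,
}

def threat_index(normalised: dict) -> float:
    """Weighted sum of per-signal scores, each already normalised to [0, 1]."""
    return round(sum(SIGNALS[k] * v for k, v in normalised.items()), 3)

# Hypothetical daily feed values aggregated from sectoral dashboards
today = {
    "phishing_reports": 0.6,
    "biometric_breach_attempts": 0.1,
    "payment_fraud_patterns": 0.4,
    "ai_misinfo_spikes": 0.8,
}
print(threat_index(today))  # 0.44
```

A real index would of course need agreed normalisation rules and weights reviewed by the participating agencies; the point of the sketch is only that heterogeneous signals can be folded into a single dashboard-ready score.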
References
- https://orfamerica.org/newresearch/dpi-catalyst-private-sector-innovation?utm_source=chatgpt.com
- https://www.institutmontaigne.org/en/expressions/indias-digital-public-infrastructure-success-story-world
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2116341
- https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2033389
- https://www.governancenow.com/news/regular-story/dpi-must-ensure-data-privacy-cyber-security-citizenfirst-approach
Introduction
The Senate bill introduced on 19 March 2024 in the United States would require online platforms to obtain consumer consent before using their data for Artificial Intelligence (AI) model training. Under the Artificial Intelligence Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) bill, a company's failure to obtain this consent would be considered a deceptive or unfair practice and would result in enforcement action by the Federal Trade Commission (FTC). The legislation aims to strengthen consumer protection and give Americans the power to determine how their data is used by online platforms.
The proposed bill also seeks to create standards for disclosures, including requiring platforms to give consumers instructions on how to affirm or rescind their consent. The option to grant or revoke consent should be available at any time through an accessible and easily navigable mechanism, and the option to withhold or revoke consent must be at least as prominent as the option to accept, taking the same number of steps or fewer.
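As a rough illustration of the "equal prominence, same number of steps" idea, the sketch below models consent as a single symmetric operation: affirming and rescinding use the identical call. The data model and names are hypothetical; the bill specifies the required behaviour, not any implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Hypothetical consent store: opt-in by default is OFF (no record = no consent)."""
    _status: dict = field(default_factory=dict)  # user_id -> bool
    _log: list = field(default_factory=list)     # append-only audit trail

    def set_consent(self, user_id: str, granted: bool) -> None:
        # One call for both affirm and rescind: the same number of steps either way.
        self._status[user_id] = granted
        self._log.append((user_id, granted, datetime.now(timezone.utc)))

    def may_train_on(self, user_id: str) -> bool:
        # Absent an affirmative record, the platform may not train on this user's data.
        return self._status.get(user_id, False)

ledger = ConsentLedger()
ledger.set_consent("u42", True)    # affirm
print(ledger.may_train_on("u42"))  # True
ledger.set_consent("u42", False)   # rescind: identical, single step
print(ledger.may_train_on("u42"))  # False
```

The append-only log mirrors the kind of record-keeping a regulator such as the FTC might expect when verifying that consent was obtained before training.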
The AI Consent bill directs the FTC to implement regulations to improve transparency by requiring companies to disclose when the data of individuals will be used to train AI and receive consumer opt-in to this use. The bill also commissions an FTC report on the technical feasibility of de-identifying data, given the rapid advancements in AI technologies, evaluating potential measures companies could take to effectively de-identify user data.
The definition of ‘Artificial Intelligence System’ under the proposed bill
ARTIFICIAL INTELLIGENCE SYSTEM - The term “artificial intelligence system” means a machine-based system that—
1. Is capable of influencing the environment by producing an output, including predictions, recommendations or decisions, for a given set of objectives; and
2. Uses machine- or human-based data and inputs to—
(i) Perceive real or virtual environments;
(ii) Abstract these perceptions into models through analysis in an automated manner (such as by using machine learning) or manually; and
(iii) Use model inference to formulate options for outcomes.
Importance of the proposed AI Consent Bill USA
1. Consumer Data Protection: The AI Consent bill primarily upholds the privacy rights of the individual. By necessitating consumer consent before data is used for AI training, the bill aims to empower individuals with unhindered autonomy over the use of their personal information. The scope of the bill aligns with the greater objective of data protection laws globally, stressing the criticality of privacy rights and autonomy.
2. Prohibition Measures: The proposed bill intends to prohibit covered entities from exploiting consumers' data for training purposes without their consent. This prohibition extends to the sale of data, its transfer to third parties, and its usage. Such measures aim to prevent data misuse and the exploitation of personal information, ensuring that companies cannot leverage consumer information for the development of AI without a transparent process of consent.
3. Transparent Consent Procedures: The bill calls for clear and conspicuous disclosures to be provided by the companies for the intended use of consumer data for AI training. The entities must provide a comprehensive explanation of data processing and its implications for consumers. The transparency fostered by the proposed bill allows consumers to make sound decisions about their data and its management, hence nurturing a sense of accountability and trust in data-driven practices.
4. Regulatory Compliance: The bill's guidelines lay down strict requirements for procuring an individual's consent. Entities must follow a prescribed mechanism for consent solicitation, making the process streamlined and accessible for consumers. Moreover, the acquisition of consent must be independent, i.e. not bundled with terms of service or other contractual obligations. These provisions underscore the importance of active and informed consent in data processing activities, reinforcing the principles of data protection and privacy.
5. Enforcement and Oversight: To enforce compliance with the provisions of the bill, robust mechanisms for oversight and enforcement are established: violations of the prescribed regulations are treated as unfair or deceptive acts, empowering regulatory bodies like the FTC to ensure adherence to data privacy standards. By holding covered entities accountable for compliance, the bill fosters a culture of accountability and responsibility in data handling practices, thereby enhancing consumer trust and confidence in the digital ecosystem.
Importance of Data Anonymization
Data anonymisation is the process of concealing or removing personal or private information from a data set to safeguard the privacy of the individuals associated with it. It is a form of information sanitisation in which anonymisation techniques encrypt or delete personally identifying information from datasets to protect the data privacy of the subject. This reduces the danger of unintentional exposure during information transfer across borders and allows for easier assessment and analytics after anonymisation. When personal information is compromised, the organisation suffers not just a security breach but also a breach of confidence from the client or consumer. Such breaches can result in a wide range of privacy infractions, including breach of contract, discrimination, and identity theft.
The AI consent bill asks the FTC to study data de-identification methods. Data anonymisation is critical to improving privacy protection since it reduces the danger of re-identification and unauthorised access to personal information. Regulatory bodies can increase privacy safeguards and reduce privacy risks connected with data processing operations by investigating and perhaps implementing anonymisation procedures.
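As a concrete illustration of common de-identification steps (removal of direct identifiers, salted pseudonymisation, and generalisation of quasi-identifiers), consider this minimal Python sketch. The record layout and field names are hypothetical, and real-world de-identification would also need re-identification risk analysis.

```python
import hashlib

# Hypothetical consumer record; field names are illustrative only.
record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 34,
    "zip": "10001",
    "purchase_total": 59.99,
}

SALT = "per-dataset-secret-salt"  # kept separate from the released dataset

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def de_identify(rec: dict) -> dict:
    out = dict(rec)
    out.pop("name")                                   # drop direct identifier outright
    out["user_ref"] = pseudonymise(out.pop("email"))  # stable pseudonym for linkage
    out["age_band"] = f"{(rec['age'] // 10) * 10}s"   # generalise quasi-identifier
    del out["age"]
    out["zip"] = rec["zip"][:3] + "**"                # coarsen location
    return out

print(de_identify(record))
```

Note the trade-off the FTC report would have to weigh: the salted pseudonym preserves the ability to join records across tables, but if the salt leaks, re-identification becomes feasible again.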
The AI Consent bill emphasises de-identification methods. Similarly, India's DPDP Act, 2023, while not specifically addressing data de-identification, emphasises data minimisation principles, which highlights a potential future focus on data anonymisation techniques in India.
Conclusion
The proposed AI Consent bill in the US represents a significant step towards enhancing consumer privacy rights and data protection in the context of AI development. Through its stringent prohibitions, transparent consent procedures, regulatory compliance measures, and robust enforcement mechanisms, the bill strives to strike a balance between fostering innovation in AI technologies while safeguarding the privacy and autonomy of individuals.
References:
- https://fedscoop.com/consumer-data-consent-training-ai-models-senate-bill/#:~:text=%E2%80%9CThe%20AI%20CONSENT%20Act%20gives,Welch%20said%20in%20a%20statement
- https://www.dataguidance.com/news/usa-bill-ai-consent-act-introduced-house#:~:text=USA%3A%20Bill%20for%20the%20AI%20Consent%20Act%20introduced%20to%20House%20of%20Representatives,-ConsentPrivacy%20Law&text=On%20March%2019%2C%202024%2C%20US,the%20U.S.%20House%20of%20Representatives
- https://datenrecht.ch/en/usa-ai-consent-act-vorgeschlagen/
- https://www.lujan.senate.gov/newsroom/press-releases/lujan-welch-introduce-billto-require-online-platforms-receive-consumers-consent-before-using-their-personal-data-to-train-ai-models/