From Principles to Practice: Implementing Digital Trust Standards through ISO/IEC 25389
Ayndri
Research Analyst - Policy & Advocacy, CyberPeace
PUBLISHED ON
Aug 11, 2025
Introduction
In July 2025, the Digital Trust & Safety Partnership (DTSP) achieved a significant milestone with the formal acceptance of its Safe Framework Specification as an international standard, ISO/IEC 25389. This is the first globally recognised standard focused exclusively on ensuring a safe online experience for people using digital products and services.
Significance of the New Framework
Fundamentally, ISO/IEC 25389 gives organisations a structured framework for identifying, managing, and mitigating risks associated with conduct or content. Developed under the direction of ISO/IEC's Joint Technical Committee 1 (JTC 1), the standard codifies DTSP's best practices and offers a clear method for evaluating organisational maturity in trust and safety. Crucially, it provides the first unified international benchmark, allowing organisations worldwide to align on common safety commitments and measure progress consistently.
Other Noteworthy Standards and Frameworks
While ISO/IEC 25389 is pioneering, it’s not the only framework shaping digital trust and safety:
One of the main outcomes of the United Nations' 2024 Summit of the Future was the UN's Global Digital Compact, which sets out cross-border cooperation on secure and reliable digital environments, with an emphasis on countering harmful content, upholding human rights online, and creating accountability standards.
The World Economic Forum’s Digital Trust Framework defines the goals and values implicit in the concept of digital trust, such as cybersecurity, privacy, transparency, redressability, auditability, fairness, interoperability, and safety. It also provides a roadmap to digital trustworthiness that incorporates these dimensions.
The Framework for Integrity, Security and Trust (FIST), launched at the CyberPeace Summit 2023 at USI of India in New Delhi, calls for a multistakeholder approach to co-create solutions and best practices for digital trust and safety.
While still in the finalisation stage ahead of implementation rollout, India's Digital Personal Data Protection Act, 2023 (DPDP Act) and its Rules (2025) aim to strike a balance between individual rights and data-processing needs by laying the groundwork for data security and privacy.
India is also developing frameworks for cutting-edge technologies like artificial intelligence. The AI Safety Institute, established in early 2025 under the IndiaAI Mission using a hub-and-spoke model, aims to create standards for safe, ethical, and trustworthy AI systems. Furthermore, the Bureau of Indian Standards (BIS) is drafting AI standards with an emphasis on safety and reliability.
Google's DigiKavach program (2023) and Google Safety Engineering Centre (GSEC) in Hyderabad are concrete efforts to support digital safety and fraud prevention in India's tech sector.
What It Means for India
India is already claiming its place in discussions about safety and trust around the world. Google's June 2025 safety charter for India, for example, highlights how India's distinct digital scale, diversity, and vast threat landscape provide insights that inform global cybersecurity strategies.
For India's digital ecosystem, ISO/IEC 25389 arrives at a critical juncture. The rapid adoption of digital technologies, including the growth of digital payments, e-governance, and artificial intelligence, and the concomitant rise in digital harms have created an urgent need for global best practices in trust and safety. Through its guidelines, ISO/IEC 25389 provides a reference benchmark that Indian startups, government agencies, and tech companies can use to improve their safety standards.
Conclusion
A global trust-and-safety standard like ISO/IEC 25389 is essential for making technology safer for people, even as security- and safety-by-design principles are more widely integrated into technology product development. By implementing this framework alongside its growing domestic regulatory regime (such as the DPDP Act and AI safety policies), India can improve user protection, strengthen its global reputation, and solidify its position as a key player in building a safer, more resilient digital future.
The UN Global Principles for Information Integrity
The United Nations (UN) has unveiled a set of principles, known as the 'Global Principles for Information Integrity', to combat the spread of online misinformation, disinformation, and hate speech. These guidelines aim to address the widespread harm caused by false information on digital platforms. The Global Principles rest on five core pillars: societal trust and resilience; independent, free, and pluralistic media; healthy incentives; transparency and research; and public empowerment. The UN Secretary-General emphasised that the threats to information integrity are not new but are now spreading at unprecedented speeds due to digital platforms and artificial intelligence technologies.
These principles aim to enhance global cooperation to create a safer online environment. The UN further highlighted that the spread of misinformation, disinformation, hate speech, and other risks in the information environment threatens democracy, human rights, climate action, and public health. This impact is intensified by rapidly advancing artificial intelligence (AI) technologies, which pose a growing threat to vulnerable groups in information environments.
Highlights of the Key Principles
Societal Trust and Resilience: Trust in information sources and the resilience to withstand disruptions are critical for maintaining information integrity. Both are at risk from state and non-state actors exploiting the information ecosystem.
Healthy Incentives: Current business models reliant on targeted advertising threaten information integrity. The complex, opaque nature of digital advertising benefits large tech companies and requires reform to ensure transparency and accountability.
Public Empowerment: People require the capability to manage their online interactions, the availability of varied and trustworthy information, and the capacity to make informed decisions. Media and digital literacy are crucial, particularly for marginalized populations.
Independent, Free, and Pluralistic Media: A free press supports democracy by fostering informed discourse, holding power accountable, and safeguarding human rights. Journalists must operate safely and freely, with access to diverse news sources.
Transparency and Research: Technology companies must be transparent about how information is propagated and how personal data is used. Research and privacy-preserving data access should be encouraged to address information-integrity gaps while protecting those who investigate and report on these issues.
Stakeholders Called to Action
Stakeholders, including technology companies, AI actors, advertisers, media, researchers, civil society organisations, state and political actors, and the UN itself, have been called on to act under the UN Global Principles for Information Integrity. The principles should be used to build and participate in broad cross-sector coalitions that bring together diverse expertise from civil society, academia, media, government, and the international private sector, focusing on capacity-building and meaningful youth engagement through dedicated advisory groups. Collaboration is also required to develop multi-stakeholder action plans at the regional, national, and local levels, engaging communities in grassroots initiatives and ensuring that youth are fully and meaningfully involved in the process.
Implementation and Monitoring
Effectively implementing the UN Global Principles requires developing multi-stakeholder action plans at the regional, national, and local levels. These plans should be shaped by advice and counsel from a wide range of communities, including grassroots initiatives with a deep understanding of regional challenges and specific needs. Monitoring and evaluation are also essential components of the implementation process: regular assessments of progress, combined with the flexibility to adapt strategies as needed, will help ensure that the principles are effectively translated into practice.
Challenges and Considerations
Implementing the UN's Global Principles will face certain challenges. The complexity of the digital landscape, the rapid pace of technological change, and the diversity of cultural and political contexts all present significant hurdles. Furthermore, efforts to combat misinformation must be balanced against fundamental rights, including freedom of expression and privacy. Addressing these threats to information integrity will require ongoing collaboration and constant dialogue among stakeholders, along with a commitment to innovation and continuous learning. It is also important to recognise and address power imbalances within the information ecosystem, ensuring that all voices are heard and that no one, particularly marginalised communities, is cast aside.
Conclusion
The UN Global Principles for Information Integrity provide a comprehensive framework for addressing the critical challenges facing information integrity today. By advocating societal trust, healthy incentives, public empowerment, independent media, and transparency, these principles offer a path towards a more resilient and trustworthy digital environment. Their future success depends on the collaborative efforts of all stakeholders, working together to safeguard the integrity of information for everyone.
Fact Check: Viral Video of Canadian MP Chandra Arya Speaking Kannada
Recently, our team encountered a post on X (formerly Twitter) claiming to show Chandra Arya, a Member of Parliament of Canada, speaking in Kannada; the video surfaced after he filed his nomination for the much-coveted position of Prime Minister of Canada. The video has gone viral and is being widely discussed. In this report, we examine the legitimacy of this claim by analysing the video's content and timing and by verifying information from reliable sources.
Claim:
The viral video claims Chandra Arya spoke Kannada after filing his nomination for the Canadian Prime Minister position in 2025, after the resignation of Justin Trudeau.
Fact Check:
Upon receiving the video, we performed a reverse image search on key frames extracted from it and found that the video has no connection to any nomination for the Canadian Prime Minister position. Instead, it is an old video of his speech in the Canadian Parliament in 2022. An old post from Mr. Arya's X (Twitter) handle, published at 12:19 AM on May 20, 2022, likewise confirms that the speech was delivered in the Canadian Parliament and has no link to any PM candidacy.
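For readers interested in the mechanics, the sketch below shows one common way key frames can be pulled from a clip for reverse image search. It is a minimal illustration using OpenCV; the library choice, sampling interval, and file names are our assumptions, not a record of the exact tooling used in this fact check.

```python
# Minimal sketch: sample key frames from a video for reverse image search.
# Assumes OpenCV (opencv-python) is installed; the paths and sampling
# interval are illustrative choices, not fixed requirements.
import cv2

def extract_key_frames(video_path: str, every_n_seconds: float = 5.0) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the file paths."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    interval = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if index % interval == 0:
            path = f"frame_{index}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    # Each saved frame can then be uploaded to a reverse image search
    # service (e.g. Google Images or TinEye) to find earlier uses of the footage.
    print(extract_key_frames("viral_clip.mp4"))
```

The reverse search itself is typically done manually through a search engine's image-upload feature, since most such services do not expose a free public API.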
Further, our research led us to a YouTube video posted on the verified channel of Hindustan Times, dated 20th May 2022, with the caption: “India-born Canadian MP Chandra Arya is winning hearts online after a video of his speech at the Canadian Parliament in Kannada went viral. Arya delivered a speech in his mother tongue - Kannada. Arya, who represents the electoral district of Nepean, Ontario, in the House of Commons, the lower house of Canada, tweeted a video of his address, saying Kannada is a beautiful language spoken by about five crore people. He said that this is the first time when Kannada is spoken in any Parliament outside India. Netizens including politicians have lauded Arya for the video.”
Conclusion:
The viral video claiming that Chandra Arya spoke in Kannada after filing his nomination for the Canadian Prime Minister position in 2025 is completely false. The video, dated May 2022, shows Chandra Arya delivering an address in Kannada in the Canadian Parliament, unrelated to any political nominations or events concerning the Prime Minister's post. This incident highlights the need for thorough fact-checking and verifying information from credible sources before sharing.
Prebunking and Debunking: Countering the Spread of Online Misinformation
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective counter-strategies has grown rapidly. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern true from false information, acting as a protective layer that comes into play before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation; it includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, and fact-checking reports by expert organisations or journalists. An integrated approach involving both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse, and avoid manipulative and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the idea is to help the mind develop resistance now to influence it may encounter in the future. Just as vaccines help the body build resistance to future infections by administering weakened doses of a pathogen, inoculation seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. By showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
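As a toy illustration of this idea, the sketch below mimics a gamified inoculation exercise: the player is shown headlines, some using emotionally charged manipulation tactics, and is asked to spot them. The headlines, labels, and scoring are invented for illustration and are far simpler than real-world inoculation interventions.

```python
# Toy sketch of a gamified inoculation exercise: spot the manipulative headline.
# All headlines and labels are invented examples for illustration only.
QUESTIONS = [
    ("Scientists publish study on regional rainfall trends", False),
    ("SHOCKING! They DON'T want you to know THIS about rainfall!!!", True),
    ("Council announces public consultation on new bus routes", False),
    ("OUTRAGE: bus route change will DESTROY your neighbourhood!", True),
]

def play() -> None:
    """Ask the player to flag emotionally charged, manipulative headlines."""
    score = 0
    for headline, is_manipulative in QUESTIONS:
        answer = input(f"Manipulative? (y/n): {headline}\n> ").strip().lower()
        guessed = answer.startswith("y")
        if guessed == is_manipulative:
            score += 1
            print("Correct!")
        else:
            print("Not quite: watch for all-caps, outrage words, and a vague 'they'.")
    print(f"You spotted {score}/{len(QUESTIONS)} correctly.")

if __name__ == "__main__":
    play()
```

The pedagogical point is the exposure to weakened examples of manipulation, with immediate feedback, rather than the game mechanics themselves.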
The core difference between Prebunking and Debunking is that the former is preventative, seeking to provide broad-spectrum cover against misinformation, while the latter is reactive, focusing on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of interventions, some of which increase the motivation to be vigilant against misinformation while others increase the ability to exercise that vigilance successfully.
There is much to be said in favour of the Prebunking approach, because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It can be difficult to scale up Prebunking efforts and ensure they reach a large audience. Sustainability is also critical: continuous reinforcement and reminders may be required so that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so Prebunking interventions must likewise be flexible and agile, responding promptly to emerging challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious when engaging with information, to acquire critical thinking abilities, and to resist the effects of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change, is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying false or misleading news items and informing people that they are incorrect. It seeks to lessen the impact of misinformation that has already spread. The most common form of Debunking occurs through collaboration between fact-checking organisations and social media companies: journalists or other fact-checkers identify inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation, but they face certain challenges. Debunking entails critically verifying facts and promoting corrected information, which is difficult owing to the rising sophistication of modern tools used to generate narratives that blend truth and untruth, opinion and fact. These advanced approaches, which include emotionally charged content, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated response at every level: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread widely, and this reactive method may be less effective than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep pace with the rapid dissemination of false or misleading information. Repeated exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold, implying that a single debunk may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may leave some misinformation unchecked, with unexpected consequences. Finally, misinformation on social media can spread and go viral faster than debunking articles, creating a situation in which falsehood spreads like a virus while the corrective struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across their services, empowering users to recognise manipulative messaging through Prebunking and to check the accuracy of content through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Gamified interventions can help immunise recipients against subsequent exposure to misinformation and empower people to build the competencies needed to detect it.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise have algorithms prioritise the visibility of Debunking content in order to counter the spread of false information and deliver timely corrections. Together, these mechanisms can help Prebunking and Debunking materials reach a larger or better-targeted audience (a minimal sketch of such a ranking boost follows this list).
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to help them improve their critical thinking abilities. They can also incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint efforts.
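To make the algorithmic recommendation above concrete, here is a minimal, illustrative sketch of how a feed-ranking step could boost prebunking and debunking content. The Post structure, labels, and boost weights are hypothetical assumptions invented for this sketch; real ranking systems combine many more signals.

```python
# Illustrative sketch: boosting prebunking/debunking content in a ranked feed.
# The Post structure, labels, and multipliers are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float          # relevance score from the platform's usual ranker
    label: str = "regular"     # "prebunking", "debunking", or "regular"

# Hypothetical multipliers that raise the visibility of educational content.
LABEL_BOOSTS = {"prebunking": 1.5, "debunking": 1.3, "regular": 1.0}

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by base relevance scaled by a trust-and-safety boost."""
    return sorted(
        posts,
        key=lambda p: p.base_score * LABEL_BOOSTS.get(p.label, 1.0),
        reverse=True,
    )

feed = [
    Post("p1", base_score=0.90),
    Post("p2", base_score=0.70, label="debunking"),
    Post("p3", base_score=0.65, label="prebunking"),
]
for post in rank_feed(feed):
    print(post.post_id, post.label)
```

In a real system, such boosts would be one signal among many and would need careful tuning and abuse-resistant labelling, for example with labels assigned only by vetted fact-checking partners.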
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential, and Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to build resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, together with joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.