#FactCheck: Viral image shows the Maldives mocking India with a "SURRENDER" sign on photo of Prime Minister Narendra Modi
Executive Summary:
A manipulated photo of a Maldivian building, appearing to show an oversized portrait of Indian Prime Minister Narendra Modi alongside the word "SURRENDER", went viral on social media, prompting fear, indignation and anxiety among viewers. Our research, however, showed that the image was digitally altered and not authentic.

Claim:
A viral image claims that the Maldives displayed a huge portrait of PM Narendra Modi on a building front, along with the phrase “SURRENDER,” implying an act of national humiliation or submission.

Fact Check:
A thorough examination of the viral post revealed that it had been altered. While the image shows a real building, the word "SURRENDER" that appears alongside Prime Minister Modi's portrait in the viral version is not present in the original photograph. We also checked the image with the Hive AI Detector, which flagged it as 99.9% likely to be fake, further confirming that the viral image had been digitally altered.
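To illustrate one common image-verification technique (not the specific tool used in this fact check), the sketch below compares a suspect image with a known-authentic photo using perceptual hashing via the Python `imagehash` library. The file names are hypothetical.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file names, for illustration only.
viral = Image.open("viral_post.jpg")          # image circulating on social media
reference = Image.open("official_photo.jpg")  # known-authentic photo of the scene

# Perceptual hashes summarise visual structure; near-duplicate images
# produce hashes with a small Hamming distance.
distance = imagehash.phash(viral) - imagehash.phash(reference)
print(f"Hamming distance: {distance}")

# Heuristic threshold: a small distance suggests the same scene, so any
# difference (e.g. added text) points to localised editing. Note this
# only works for near-identical framing, not photos from different angles.
if distance <= 10:
    print("Visually similar images; inspect the regions that differ.")
else:
    print("Substantially different images.")
```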

During our research, we also found several images from Prime Minister Modi’s visit, including one of the same building displaying his portrait, shared by the official X handle of the Maldives National Defence Force (MNDF). The post stated: “His Excellency Prime Minister Shri @narendramodi was warmly welcomed by His Excellency President Dr.@MMuizzu at Republic Square, where he was honored with a Guard of Honor by #MNDF on his state visit to Maldives.” This image, captured from a different angle, also does not feature the word “SURRENDER”.


Conclusion:
The claim that the Maldives displayed a picture of PM Modi with a surrender message is incorrect and misleading. The image has been digitally altered and is being circulated to mislead people and stir up controversy. Users should verify the authenticity of photos before sharing them.
- Claim: Viral image shows the Maldives mocking India with a surrender sign
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
As India moves full steam ahead towards a trillion-dollar digital economy, how user data is gathered, processed and safeguarded is under the spotlight. One of the most pervasive but least understood technologies used to gather user data is the cookie. Cookies are embedded in most websites and applications to improve functionality, measure usage and customise content. But they also present serious privacy risks, particularly when used without explicit user consent.
In 2023, India passed the Digital Personal Data Protection Act (DPDP Act) to give data privacy strong legal protection. Though the Act does not refer to cookies by name, its language clearly covers any technology that gathers or processes personal data, which places cookie regulation at the centre of digital compliance in India. This blog covers what cookies are, how international legislation such as the GDPR has addressed them, and how India's DPDP Act regulates their use.
What Are Cookies and Why Do They Matter?
Cookies are small pieces of data that a website stores in the browser. They were originally designed to help websites remember useful information about users, such as a login session or the contents of a shopping cart. Netscape first built them in 1994 to make web browsing more efficient.
Cookies come in several types. Session cookies are temporary and are deleted when the browser is closed, whereas persistent cookies remain on the device and can track users over time. First-party cookies are set by the site being visited, while third-party cookies come from other domains and are usually used for advertising or analytics. Special-purpose cookies, such as secure cookies, zombie cookies and tracking cookies, differ in intent and risk. Because cookies gather information such as IP addresses, device IDs and browsing history that can be linked to a person, they qualify as personal data under most data protection regulations. A short sketch of how these types are set in practice follows.
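To make these categories concrete, here is a minimal, hypothetical sketch using Python's Flask framework showing how a server sets a session cookie versus a persistent first-party cookie; the names, values and durations are illustrative only.

```python
# pip install flask
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello")

    # Session cookie: no max_age/expires, so the browser discards it
    # when it is closed.
    resp.set_cookie("session_id", "abc123", httponly=True, secure=True)

    # Persistent cookie: max_age keeps it on the device for 30 days,
    # letting the site recognise a returning user.
    resp.set_cookie(
        "ui_prefs", "theme=dark",
        max_age=30 * 24 * 3600,   # 30 days, in seconds
        secure=True,              # only sent over HTTPS (a "secure cookie")
        samesite="Lax",           # limits cross-site sending
    )
    return resp

# Third-party cookies, by contrast, are set by a different domain
# (e.g. an ad network's script embedded in the page), not by this server.
```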
A Brief Overview of the GDPR and Cookie Policy
The GDPR regulates how personal data may be processed in general. If a cookie collects personal data (such as an IP address or an identifier that can track a person), the GDPR applies, because it sets the rules on how that personal data may be processed, what lawful bases are required, and what rights the user has.
The ePrivacy Directive (also called the “Cookie Law”) specifically regulates how cookies and similar technologies can be used. Article 5(3) of the ePrivacy Directive says that storing or accessing information (such as cookies) on a user’s device requires prior, informed consent, unless the cookie is strictly necessary for providing the service requested by the user.
In the seminal Planet49 decision, the Court of Justice of the European Union held that pre-ticked boxes do not represent valid consent. Another prominent enforcement saw Amazon fined €35 million by France's CNIL for using tracking cookies without user consent.
Cookies and India’s Digital Personal Data Protection Act (DPDP), 2023
India's Digital Personal Data Protection Act, 2023 does not refer to cookies specifically, but its provisions necessarily come into play when cookies harvest personal data such as user activity, IP addresses or device data. Under the DPDP Act, personal data is to be processed only for lawful purposes with the individual's consent, and that consent must be free, specific, informed, unconditional and unambiguous. Individuals must be told what data is collected and how it will be processed. The Act also forbids behavioural monitoring of children and targeted advertising directed at them.
The Ministry of Electronics and IT released the Business Requirements Document for Consent Management Systems (BRDCMS) in June 2025. Although it is not legally binding, it provides operational guidance on cookie consent. It recommends that websites use cookie banners with "Accept," "Reject," and "Customize" choices, and that users be able to withdraw or change their consent at any moment. Multilingual support and automatic expiry of cookie preferences are also suggested to meet accessibility and privacy requirements. A minimal sketch of such a consent flow appears below.
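A minimal sketch of the consent flow the BRDCMS describes might look like the following, assuming a Flask backend; the endpoint names and cookie categories are illustrative, not prescribed by the document.

```python
# pip install flask
from flask import Flask, request, make_response

app = Flask(__name__)

# Illustrative categories; "necessary" cookies never require consent.
NON_ESSENTIAL = {"analytics", "ads"}

@app.route("/consent", methods=["POST"])
def record_consent():
    # The banner sends e.g. {"analytics": true, "ads": false} from its
    # Accept / Reject / Customize choices.
    choices = request.get_json()
    granted = ",".join(c for c in NON_ESSENTIAL if choices.get(c))
    resp = make_response({"status": "saved"})
    # Store the user's choices themselves in a first-party cookie,
    # with an automatic expiry as the BRDCMS suggests.
    resp.set_cookie("consent", granted, max_age=180 * 24 * 3600, secure=True)
    return resp

@app.route("/consent/withdraw", methods=["POST"])
def withdraw_consent():
    # Withdrawal must be as easy as giving consent: clear the record
    # and expire any non-essential cookies already set.
    resp = make_response({"status": "withdrawn"})
    resp.delete_cookie("consent")
    resp.delete_cookie("_analytics_id")
    return resp

def consent_given(category: str) -> bool:
    """Check the stored consent cookie before any non-essential processing."""
    return category in request.cookies.get("consent", "").split(",")
```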
The DPDP Act and the BRDCMS together create a robust user-rights model, even in the absence of a special cookie law.
What Should Indian Websites Do?
To stay compliant, Indian websites and online platforms need to act promptly to align their use of cookies with DPDP principles. This begins with a clear and simple cookie banner giving users the opportunity to accept or decline non-essential cookies. Consent must be meaningful; coercive tactics such as cookie walls must not be employed. Websites should classify cookies (e.g., necessary, analytics and ads) and describe each category's purpose in plain terms in the privacy policy. Users must be able to modify cookie settings at any time through a Consent Management Platform (CMP). Tracking children or processing their behavioural information must be strictly off-limits.
These measures are not only about legal compliance; they are about ethical data stewardship and building user trust.
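One lightweight way to implement the classification step described above is a single machine-readable registry from which the banner, the privacy policy and runtime consent checks can all be generated. The entries below are purely illustrative.

```python
# Illustrative cookie registry: one source of truth for the banner,
# the privacy policy and runtime consent checks.
COOKIE_REGISTRY = {
    "session_id": {
        "category": "necessary",
        "purpose": "Keeps you logged in during a visit.",
        "duration": "Browser session",
    },
    "_analytics_id": {
        "category": "analytics",
        "purpose": "Counts visits so we can improve the site.",
        "duration": "180 days",
    },
    "_ad_profile": {
        "category": "ads",
        "purpose": "Shows more relevant advertisements.",
        "duration": "30 days",
    },
}

def policy_rows() -> list[str]:
    """Render the registry as plain-language rows for the privacy policy."""
    return [
        f"{name} ({meta['category']}, {meta['duration']}): {meta['purpose']}"
        for name, meta in COOKIE_REGISTRY.items()
    ]

for row in policy_rows():
    print(row)
```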
What Should Users Do?
Users need to understand and control cookies to protect their personal privacy online. Begin by reading cookie notices carefully and declining unnecessary cookies, particularly those associated with tracking or advertising. Most browsers today can block third-party cookies altogether or delete them periodically.
It is also worth reviewing and adjusting privacy settings on websites and mobile applications. Browser add-ons such as ad blockers and privacy extensions can reduce tracking. Users should avoid reflexively clicking "accept all" on cookie notices and instead choose "customise" or "reject" where the cookies are not necessary for their use.
Finally, keeping abreast of data rights under Indian law, such as the right to withdraw consent or to have data deleted, will enable people to reclaim control over their online presence.
Conclusion
Cookies are a fundamental component of the modern web, but they raise significant concerns about individual privacy. India's DPDP Act, 2023, though it does not explicitly refer to cookies, provides an effective legal framework that regulates any activity involving the collection or processing of personal data, including collection carried out through cookies.
As India continues to make progress towards comprehensive rulemaking and regulation, companies need to implement privacy-first practices today, and users must take an active role in their own digital lives. Together, compliance, transparency and awareness can build a more secure and ethical internet ecosystem where privacy is prioritised by design.
References
- https://prsindia.org/billtrack/digital-personal-data-protection-bill-2023
- https://gdpr-info.eu/
- https://d38ibwa0xdgwxx.cloudfront.net/create-edition/7c2e2271-6ddd-4161-a46c-c53b8609c09d.pdf
- https://oag.ca.gov/privacy/ccpa
- https://www.barandbench.com/columns/cookie-management-under-the-digital-personal-data-protection-act-2023
- https://samistilegal.in/cookies-meaning-legal-regulations-and-implications/
- https://secureprivacy.ai/blog/india-digital-personal-data-protection-act-dpdpa-cookie-consent-requirements
- https://law.asia/cookie-use-india/
- https://www.cookielawinfo.com/major-gdpr-fines-2020-2021/
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. The degree of vulnerability to misinformation differs from person to person, depending on psychological factors such as personality traits, family background and digital literacy, combined with contextual factors like the information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, and research suggests that psychological inoculation campaigns on social media can improve misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in inoculation theory, helps people build generalised resilience so they can analyse and resist manipulation without prior knowledge of the specific misleading content. A parallel can be drawn with broad-spectrum antibiotics, which can fight infections and protect the body before the particular pathogen at play has been identified.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter actual false information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/Logic-Based Inoculation: Individuals can learn about the typical manipulative strategies used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting it. Through logical reasoning, individuals can recognise such tactics for what they are: attempts to distort the facts or spread misleading information. Individuals equipped to discern weak arguments and misleading methods can properly evaluate the reliability and validity of the information they encounter online (a toy sketch of technique-based cue detection follows this list).
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive tools to inoculate people against misinformation techniques. These games immerse users in a virtual world where they can learn how misinformation is created and spread, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, resilience-building features on social media platforms, and media literacy programmes, can improve the overall efficacy of efforts to combat misinformation. Expert organisations and individuals can build a stronger defence against the spread of misleading information by deploying several measures at the same time.
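To make technique-based inoculation concrete, the toy sketch below flags a few surface cues of manipulative framing. The cue lists are illustrative only; a real system would rely on validated lexicons and trained models rather than keyword matching.

```python
# Toy cue lists, for illustration only.
EMOTIONAL_CUES = ["outrageous", "terrifying", "they don't want you to know"]
CONSPIRACY_CUES = ["wake up", "mainstream media is hiding", "do your own research"]
WEAK_ARGUMENT_CUES = ["everyone knows", "real patriots", "if you disagree you are"]

def flag_manipulation_cues(text: str) -> dict[str, list[str]]:
    """Return the illustrative manipulation cues that appear in the text."""
    lowered = text.lower()
    found = {
        "emotional_language": [c for c in EMOTIONAL_CUES if c in lowered],
        "conspiratorial_framing": [c for c in CONSPIRACY_CUES if c in lowered],
        "weak_argument_patterns": [c for c in WEAK_ARGUMENT_CUES if c in lowered],
    }
    return {label: cues for label, cues in found.items() if cues}

sample = "Wake up! The mainstream media is hiding this terrifying truth."
print(flag_manipulation_cues(sample))
# {'emotional_language': ['terrifying'],
#  'conspiratorial_framing': ['wake up', 'mainstream media is hiding']}
```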
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementing inoculation theory on social media platforms can be an effective strategy for building resilience among users and combating misinformation. Tech and social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets and misinformation simulations. These techniques can be deployed through online games and through collaborations with influencers and trusted sources who help design and run targeted campaigns, while also educating netizens about the usefulness of inoculation theory so that they can practise critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs but also actively empowers the target audience. In other words, inoculation theory helps build people up and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and of attempts to exploit them, so that they can be on the lookout as they read, watch, share and believe the content they receive online.
Widespread adoption of inoculation theory may well inspire systemic and technological change that goes beyond individual empowerment: interventions on social media platforms can inform digital tools and algorithms so that the interventions and their impact are amplified. Additionally, social media platforms can explore personalised inoculation strategies and customised approaches for different audiences so as to serve more people better. One elegant solution for social media platforms is to develop a dedicated prebunking strategy that identifies and targets specific themes and topics likely to become vectors for misinformation and disinformation. This will come in handy especially during sensitive and special times such as elections, where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying inoculation theory in the modern context of misinformation can be an effective method of establishing resilience against misinformation, help develop critical thinking, and empower individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritize extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives associated with information. Inoculation strategies can help people build mental armour, or mental defences, against malicious content and malicious intent they may encounter in the future by learning about them in advance. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to remarkable advancements in artificial intelligence, the internet as we know it is rapidly changing into a treasure trove of hyper-optimised material over which vast bot armies battle to the death. All of that advancement, however, has a price, sometimes paid in human lives. It turns out that releasing highly personalised chatbots on a populace that is already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet isn’t exactly a formula for improved mental health. Reports suggest that roughly 75% of children and teenagers have chatted with chatbot-generated fictional characters. AI chatbots are becoming ever more integrated into our daily lives, assisting with customer service, entertainment, healthcare and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation into the internal policies of a major international tech firm last year exposed alarming gaps: its AI chatbots were allowed to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety amid the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (the entities that determine the purposes and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural monitoring and advertising targeted at children. According to court interpretations, a child's well-being includes not just physical health but also moral, ethical and emotional growth.
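The Act specifies a legal duty, not an implementation, but a Section 9-style gate in application code might look roughly like the following sketch; all names and the consent-verification flag are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UserRecord:
    user_id: str
    birth_date: date
    # Set only after a verifiable parent/guardian consent flow completes.
    guardian_consent_verified: bool = False

def age_of(user: UserRecord, today: date | None = None) -> int:
    today = today or date.today()
    years = today.year - user.birth_date.year
    # Subtract one if this year's birthday has not yet occurred.
    if (today.month, today.day) < (user.birth_date.month, user.birth_date.day):
        years -= 1
    return years

def may_process(user: UserRecord, purpose: str) -> bool:
    """Gate personal-data processing for under-18 users, DPDP s.9 style."""
    if age_of(user) >= 18:
        return True
    # Behavioural monitoring of children and targeted advertising
    # directed at them are prohibited outright, consent or not.
    if purpose in {"behavioural_monitoring", "targeted_advertising"}:
        return False
    # All other processing requires verified guardian consent.
    return user.guardian_consent_verified
```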
While the DPDP Act is a big step in the right direction, important lacunae remain in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which concentrates largely on consent and harm prevention in data protection. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological trauma and desensitisation to inappropriate behaviour. Child protection experts hold that illicit or sexualised conversations with children in cyberspace are unacceptable, and that permitting even "flirtatious" conversation could normalise risky boundaries.
- International Standards and Best Practices - The concept of "safety by design" is central to child online safety guidelines around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. Safety by design requires platforms and developers to proactively remove risks rather than respond to harms reactively; any AI guideline that leaves loopholes for child-directed roleplay falls short of this bare minimum standard.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed the AI to create false narratives as long as they carried disclaimers. For example, chatbots could write articles promoting false health claims or smears against public officials, provided they were labelled as "untrue." While disclaimers might give thin legal cover, they do little to stop the proliferation of misleading information: misinformation tends to spread widely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even on request. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation risks normalising damaging stereotypes. Researchers warn that such practice turns platforms from passive hosts of offensive speech into active generators of discriminatory content, a difference that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training data, policy decisions and system engineering. This fact demands a higher level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently fragmented. The EU AI Act, the OECD AI Principles and national policies all emphasise human rights, transparency and accountability, but few specify clear rules for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until they are challenged.
An active way forward would include the following measures (a minimal guardrail sketch appears after this list):
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to generate knowingly false material, with or without disclaimers.
- Bias Reduction: Developers must proactively train systems not to generate discriminatory narratives, rather than merely labelling such outputs when they appear.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
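As a rough illustration of how the first recommendation could be enforced at the application layer, the sketch below screens messages from accounts flagged as minors before any model call. The cue list, refusal text and function names are hypothetical, and a production system would use trained safety classifiers rather than keyword checks.

```python
# Hypothetical guardrail: screen chat turns from minor accounts
# before any model call is made.
ROMANTIC_ROLEPLAY_CUES = {
    "be my girlfriend", "be my boyfriend", "romantic roleplay", "flirt with me",
}

REFUSAL = ("I can't take part in romantic or flirtatious roleplay. "
           "Let's talk about something else.")

def screen_minor_message(message: str, user_is_minor: bool) -> str | None:
    """Return a refusal for disallowed requests from minors, else None."""
    if not user_is_minor:
        return None
    lowered = message.lower()
    if any(cue in lowered for cue in ROMANTIC_ROLEPLAY_CUES):
        return REFUSAL  # hard block: the model is never called
    return None

def respond(message: str, user_is_minor: bool) -> str:
    blocked = screen_minor_message(message, user_is_minor)
    if blocked is not None:
        return blocked
    return call_model(message)

def call_model(message: str) -> str:
    # Stand-in for the actual chatbot backend.
    return "model response"
```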
Conclusion
The guidelines in question are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education and social interaction. Racism, false information and inadequate child safety measures are severe issues that require swift resolution. The way forward involves not just corporate self-regulation but multi-stakeholder participation, stronger global frameworks and enforceable ethical standards. In the end, trust in artificial intelligence will rest on its ability to preserve the truth, protect the vulnerable and reflect universal human values, rather than merely corporate interests.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/