#FactCheck - AI-Generated Image Falsely Linked to Kotdwar Shop Controversy
Executive Summary
A dispute recently emerged in Kotdwar, Uttarakhand, over the name of a shop. During the controversy, a local youth, Deepak Kumar, came forward in support of the shopkeeper. The incident subsequently became a subject of discussion on social media, with users expressing varied reactions. Meanwhile, a photo began circulating on social media showing a burqa-clad woman presenting a bouquet to Deepak Kumar. The image is being shared with the claim that All India Majlis-e-Ittehadul Muslimeen (AIMIM)’s women’s president, Rubina, welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. However, research conducted by CyberPeace found the viral claim to be false: users are sharing an AI-generated image with a misleading claim.
Claim:
On the social media platform Instagram, a user shared the viral image, claiming that AIMIM’s women’s president Rubina welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. The link to the post, its archived version, and a screenshot are provided below.

Fact Check:
Upon closely examining the viral image, certain inconsistencies raised suspicion that it could be AI-generated. To verify its authenticity, the image was analysed using the AI detection tool Hive Moderation, which indicated a 96 percent probability that the image was AI-generated.

In the next stage of the research, the image was analysed using another AI detection tool, Wasit AI, which likewise identified it as AI-generated.
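The decision rule applied in this kind of check, treating a high model-reported probability as a strong signal of synthetic origin, can be sketched as follows. The function name and the 0.90 threshold are illustrative assumptions for this sketch, not part of Hive Moderation's or Wasit AI's actual APIs; real fact-checking combines tool scores with manual inspection of visual inconsistencies.

```python
def interpret_ai_score(probability: float, threshold: float = 0.90) -> str:
    """Map an AI-detection tool's reported probability to a verdict.

    The 0.90 threshold is an illustrative assumption; scores near the
    middle of the range should be treated as inconclusive.
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if probability >= threshold:
        return "likely AI-generated"
    if probability <= 1.0 - threshold:
        return "likely authentic"
    return "inconclusive"

# Hive Moderation reported a 96 percent probability for the viral image.
print(interpret_ai_score(0.96))  # likely AI-generated
```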

Conclusion
The research establishes that users are circulating an AI-generated image with a misleading claim linking it to the Kotdwar controversy.

Introduction
Sexual harassment of minors in cyberspace has become a matter of grave concern that needs to be addressed. Sextortion is the practice of extorting individuals into sharing explicit and sexual content under the threat of exposure. This grim activity has evolved into a pervasive issue on several social media platforms, particularly Instagram. To combat this illicit act, big corporate giants such as Meta have deployed a comprehensive ‘nudity protection’ feature, leveraging AI (Artificial Intelligence) algorithms to identify and address the rapid distribution of unsolicited explicit content.
The Meta initiative presents a multifaceted approach to improving user safety, especially for young people online, who are more vulnerable to predatory behaviour.
The Salient Feature
Instagram’s use of advanced AI algorithms to automatically identify and blur out explicit images shared within direct messages is the driving force behind this initiative. This new safety measure serves two essential purposes.
- Preventing dissemination of sensitive content - The feature, when enabled, obstructs the visibility of sensitive personal pictures and also limits dissemination of the same.
- Empowering minors to exercise more control over their social media - This cutting-edge feature can be disabled at the user’s will, allowing users, including minors, to regulate their exposure to age-inappropriate and harmful material online. Nudity protection is enabled by default for all users under 18 on Instagram globally, guaranteeing a baseline standard of security for the most vulnerable demographic of users. Adults retain more autonomy over the feature, receiving periodic prompts for its voluntary activation.

When the feature detects an explicit image, it automatically blurs the image with a cautionary overlay, enabling recipients to make an informed decision about whether or not they wish to view the flagged content. The decision to introduce this feature is an interesting and sensitive approach to balancing individual agency with institutionalising online protection.
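The policy described above, on by default for minors, adjustable by choice, blur-with-overlay on detection, can be sketched as simple decision logic. All names and structures here are illustrative assumptions for this sketch; Meta has not published its implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    age: int
    # None means the user never changed the default setting.
    nudity_protection_enabled: Optional[bool] = None


def protection_active(user: User) -> bool:
    """Nudity protection defaults to on for under-18s; users may change it."""
    if user.nudity_protection_enabled is not None:
        return user.nudity_protection_enabled
    return user.age < 18


def handle_incoming_image(user: User, flagged_explicit: bool) -> str:
    """Return how a received image is presented to the recipient."""
    if flagged_explicit and protection_active(user):
        # Blurred with a cautionary overlay; the recipient may choose to reveal it.
        return "blurred_with_overlay"
    return "shown_normally"
```

Under this sketch, a 15-year-old who has never touched the setting receives flagged images blurred, while an adult sees them normally unless they opt in.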
Comprehensive Safety Measures Beyond Nudity Detection
The cutting-edge nudity protection feature is a crucial element of Instagram’s new strategy and is supported by a comprehensive set of measures devised to tackle sextortion and ensure a safe cyber environment for its users:
- Awareness Drives and Safety Tips - Users sending and receiving sexually explicit content are directed to a screen with curated safety tips to ensure complete user awareness and inspire due diligence. These tips are critical in raising awareness about the risks of sharing sensitive content and inculcating responsible online behaviour.
- New Technology to Identify Sextortionists - Meta’s platforms are constantly evolving, and new sophisticated algorithms are being introduced to better detect malicious accounts engaged in possible sextortion. These proactive measures check for predatory behaviour so that threats can be neutralised before they escalate and do grave harm.
- Superior Reporting and Support Mechanisms - Instagram is implementing new technology to bolster its reporting mechanisms so that users reporting concerns pertaining to nudity, sexual exploitation and threats are instantaneously directed to local child safety authorities for support and assistance.
This sophisticated approach highlights Instagram’s commitment to forging a safer environment for users by addressing various aspects of this grim issue through the three-pronged strategy of detection, prevention and support.
User Safety and Accountability
The implementation of the nudity protection feature and various associated safety measures is Meta’s way of tackling the growing concern about user safety in a more proactive manner, especially when it concerns minors. Instagram’s experience with this feature will likely be the sandbox in which Meta tests its new user protection strategy and refines it before extending it to other platforms like Facebook and WhatsApp.
Critical Reception and Future Outlook
The nudity protection feature has been met with positive feedback from experts and online safety advocates, commending Instagram for taking a proactive stance against sextortion and exploitation. However, critics also emphasise the need for continued innovation, transparency, and accountability to effectively address evolving threats and ensure comprehensive protection for all users.
Conclusion
As digital spaces continue to evolve, Meta Platforms must demonstrate an ongoing commitment to adapting its safety measures and collaborating with relevant stakeholders to stay ahead of emerging challenges. Ongoing investment in advanced technology, user education, and robust support systems will be crucial in maintaining a secure and responsible online environment. Ultimately, Instagram's nudity protection feature represents a significant step forward in the fight against online sexual exploitation and abuse. By leveraging cutting-edge technology, fostering user awareness, and implementing comprehensive safety protocols, Meta Platforms is setting a positive example for other social media platforms to prioritise user safety and combat predatory behaviour in digital spaces.
References
- https://www.nbcnews.com/tech/tech-news/instagram-testing-blurring-nudity-messages-protect-teens-sextortion-rcna147402
- https://techcrunch.com/2024/04/11/meta-will-auto-blur-nudity-in-instagram-dms-in-latest-teen-safety-step/
- https://hypebeast.com/2024/4/instagram-dm-nudity-blurring-feature-teen-safety-info

What Is a VPN and Its Significance
A Virtual Private Network (VPN) creates a secure and reliable network connection between a device and the internet. It hides your IP address by rerouting your traffic through the VPN’s host servers. For example, if you connect to a US server, you appear to be browsing from the US, even if you’re in India. It also encrypts the data being transferred in real time so that it cannot be deciphered by third parties such as ad companies, the government, cybercriminals, or others.
All online activity leaves a digital footprint that is tracked for data collection and surveillance, increasingly jeopardizing user privacy. VPNs are thus a powerful tool for enhancing the privacy and security of users, businesses, governments and critical sectors. They also help protect users on public Wi-Fi networks (for example, at airports and hotels), journalists, activists and whistleblowers, remote workers and businesses, citizens in high-surveillance states, and researchers by affording them a degree of anonymity.
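The encryption step can be illustrated with a toy sketch: once traffic entering the tunnel is encrypted with a key shared only by the client and the VPN server, an on-path observer sees only unreadable bytes. This uses a deliberately simplistic XOR stream as a stand-in and is NOT secure; real VPNs use vetted protocols such as WireGuard or OpenVPN with authenticated encryption.

```python
import secrets


def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only -- not secure, not what VPNs use.

    Applying it twice with the same key recovers the original bytes.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


tunnel_key = secrets.token_bytes(32)          # shared by the client and the VPN server
request = b"GET /private-page HTTP/1.1"

ciphertext = xor_stream(request, tunnel_key)  # what an on-path observer sees
assert ciphertext != request                  # unreadable without the key
assert xor_stream(ciphertext, tunnel_key) == request  # the server recovers the request
```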
What VPNs Do and Don’t
- What VPNs Can Do:
- Mask your IP address to enhance privacy.
- Encrypt data to protect against hackers, especially on public Wi-Fi.
- Bypass geo-restrictions (e.g., access streaming content blocked in India).
- What VPNs Cannot Do:
- Make you completely anonymous and protect your identity (websites can still track you via cookies, browser fingerprinting, etc.).
- Protect against malware or phishing.
- Prevent law enforcement from tracing you if they have access to VPN logs.
- Guarantee that the provider itself keeps no records; free VPNs in particular often share logs with third parties.
VPNs in the Context of India’s Privacy Policy Landscape
In April 2022, the Indian Computer Emergency Response Team (CERT-In) released Directions under Section 70B(6) of the Information Technology (“IT”) Act, 2000, mandating VPN service providers to store customer data such as “validated names of subscribers/customers hiring the services, period of hire including dates, IPs allotted to / being used by the members, email address and IP address and time stamp used at the time of registration/onboarding, the purpose for hiring services, validated address and contact numbers, and the ownership pattern of the subscribers/customers hiring services” collected as part of their KYC (Know Your Customer) requirements, for a period of five years, even after the subscription has been cancelled. While this directive was issued to aid cybersecurity investigations, it undermines the core purpose of VPNs: anonymity and privacy. It also gave operators very little time to carry out compliance measures.
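The five-year retention requirement, which runs even past cancellation, can be sketched as a simple date check. The function and field names are illustrative assumptions; the Directions describe data categories, not an implementation.

```python
from datetime import date, timedelta

RETENTION_YEARS = 5  # per the April 2022 CERT-In Directions


def must_retain(cancelled_on: date, today: date) -> bool:
    """Whether a provider must still hold a subscriber's KYC record.

    The Directions require retention for five years even after the
    subscription has been cancelled (sketch: 365-day years for simplicity).
    """
    retention_end = cancelled_on + timedelta(days=365 * RETENTION_YEARS)
    return today <= retention_end


# A subscription cancelled in mid-2022 must still be retained in mid-2026.
print(must_retain(date(2022, 6, 1), date(2026, 6, 1)))  # True
```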
Following this, operators such as NordVPN, ExpressVPN, ProtonVPN, and others pulled their physical servers out of India, and now use virtual servers hosted abroad (e.g., Singapore) with Indian IP addresses. While the CERT-In Directions have extra-territorial applicability, virtual servers are able to bypass them since they physically operate from a foreign jurisdiction. This means that they are effectively not liable to provide user information to Indian investigative agencies, defeating the whole purpose of the directive. To counter this, the Indian government could potentially block non-compliant VPN services in the future. Further, there are concerns about overreach, since the Directions are unclear about how long CERT-In can retain the data it acquires from VPN operators, how that data will be used and safeguarded, and the procedure for holding VPN operators responsible for compliance.
Conclusion: The Need for a Privacy-Conscious Framework
The CERT-In Directions reflect a governance model which, by prioritizing security over privacy, compromises on safeguards like independent oversight or judicial review to balance the two. The policy design creates a lose-lose situation: virtual VPN services are still available, while the government loses oversight. If anything, this can make it harder for the government to track suspicious activity. It also violates the principle of proportionality established in the landmark privacy judgment, Puttaswamy v. Union of India (II), by giving government agencies the power to collect excessive VPN data on any user. These issues underscore the need for a national-level, privacy-conscious cybersecurity framework that informs other policies on data protection and cybercrime investigations. In the meantime, VPN users are advised to choose reputable providers, ensure strong encryption, and follow best practices to maintain online privacy and security.
References
- https://www.kaspersky.com/resource-center/definitions/what-is-a-vpn
- https://internetfreedom.in/top-secret-one-year-on-cert-in-refuses-to-reveal-information-about-compliance-notices-issued-under-its-2022-directions-on-cybersecurity/#:~:text=tl;dr,under%20this%20new%20regulatory%20mandate.
- https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/#:~:text=Starting%20today%2C%20the%20Indian%20Computer,years%2C%20even%20after%20they%20have

Introduction
India is undergoing a major transformation as Artificial Intelligence (AI) is introduced across government, business, and the digital economy, in areas such as governance, healthcare, finance, and infrastructure. The scale and pace of AI adoption are expected to deliver efficiency gains, innovation in products and services, and economic growth; however, the growth of AI also raises serious ethical, legal, and societal concerns. Issues such as algorithmic bias, a lack of transparency in decision-making algorithms, data protection risks arising from AI deployments, and unclear frameworks for assigning accountability for AI-related actions bring the question of how to govern AI responsibly to the forefront of public policy discourse.
India aspires to become an AI superpower and a global technology leader. As such, it has a dual responsibility: to fuel innovation without discounting democratic ideals, human rights, and public trust. UNESCO’s AI Readiness Assessment Methodology (RAM) is a global tool for AI governance, created to provide concrete policy guidance on making ethical AI a reality. The India AI RAM Report is set to be formally released by UNESCO during the India AI Impact Summit 2026 in New Delhi, marking a major milestone in India’s developing AI governance journey.
What is UNESCO’s AI Readiness Assessment Methodology (RAM)?
UNESCO has created the AI Readiness Assessment Methodology (RAM), a simple yet effective tool that helps governments determine how well they are prepared to develop, deploy and manage Artificial Intelligence in an ethical, responsible and trustworthy manner. RAM provides a framework for diagnosing and self-assessing a country’s ability to govern AI on the basis of evidence-based decision-making, rather than serving as a regulatory framework or ranking system.
The most important goal of RAM is to assess a country’s overall readiness to govern AI across five dimensions: legal and regulatory, social and cultural, economic, scientific and educational, and technological. In doing so, RAM examines how institutions function, their level of maturity, and the extent to which various policies align with one another, thereby giving governments an overview of strengths, weaknesses and priorities for reform.
Unlike other frameworks, RAM does not prescribe one-size-fits-all solutions; instead, it takes a context-sensitive approach to AI governance, accounting for differing national realities, developmental priorities, and social and economic conditions. Building on the ethical principles established by UNESCO, RAM converts those principles into practical actions that can guide countries from abstract commitments to concrete strategies for governing AI.
Key Dimensions Assessed Under RAM
UNESCO’s AI Readiness Assessment Methodology (RAM) assesses a country’s readiness to implement ethical Artificial Intelligence through five interconnected dimensions:
- Legal and regulatory - the laws, rules, and safeguards currently in place related to AI.
- Social and cultural - public awareness of AI, trust in AI, the inclusiveness of the AI experience for all who use it, and AI’s effects on society.
- Economic - innovation, industry participation, and market readiness for AI.
- Scientific and educational - a country’s capacity to conduct serious scientific research, including activities that prepare people for employment in AI jobs.
- Technological and infrastructure - the availability of data, digital infrastructure, and computing capability for AI projects.
Together, these five dimensions cover the full scope of an AI readiness evaluation, ensuring that AI governance is treated not merely as a technical issue but as a function of a country’s capacity to make laws, create policy and maintain social equality in relation to all forms of Artificial Intelligence.
Methodology and Nationwide Consultative Process
RAM combines qualitative and quantitative methods to build an overall picture of a nation’s readiness for AI. It is designed to be flexible, so nations can shape their assessments around their own institutional capabilities and development agendas.
RAM is normally implemented by an independent expert assisted by a national team of diverse stakeholders. In India, the process was conducted as a national consultation in which representatives from across all sectors of society (government, the private sector, academia, civil society, and young people) participated in shaping the assessment. This ensured that many different viewpoints were represented, increasing the legitimacy of the results and their relevance to the Indian context. The consultation process also yields policy recommendations grounded in real governance challenges specific to different sectors.
Institutional Partnerships Behind India’s RAM
The India RAM initiative was developed by the UNESCO South Asia Regional Office in partnership with the IndiaAI Mission and the Indian Ministry of Electronics and Information Technology, and implemented by Ikigai Law with the support of The Patrick J. McGovern Foundation. This demonstrates the importance of partnership in developing governance frameworks for Artificial Intelligence (AI). The RAM process is thus a collaborative effort, combining evidence-based international norm-setting expertise, government policy under national political leadership, independent legal-technical implementation, and civil society input, all with the goal of strengthening India’s ability to establish and implement a coherent and inclusive AI governance framework.
Significance of the India AI RAM Report and Its Launch
The India AI RAM Report provides a complete initial assessment of India’s AI ecosystem and includes key insights into AI readiness, governance strengths/weaknesses, and potential opportunities across multiple sectors. It identifies priority areas to promote a responsible and trustworthy AI ecosystem in India.
The report will be officially released during the India AI Impact Summit on February 16, 2026 at Bharat Mandapam, New Delhi. Mr. Abhishek Venkateswaran (National Project Officer, Social and Human Sciences at UNESCO South Asia) has offered additional insight into the consultative process and the significance of this launch for India’s future AI policy path.
Policy Relevance and the Road Ahead
The RAM framework gives the government a structure and roadmap for developing and implementing AI governance, reinforcing the IndiaAI Mission, which counts safety and trust in AI among its pillars. However, the results of the assessment will not automatically translate into institutional reform, sector-specific guidelines, or a mechanism for continued evaluation. Implementation will require strong and sustained commitment from political leaders, as well as from the institutions involved in the reforms that RAM makes possible.
Conclusion
UNESCO’s AI Readiness Assessment Methodology (AI-RAM) can greatly advance the way India approaches the governance of artificial intelligence (AI). By focusing on readiness, responsibility and inclusivity, the AI-RAM will enable India to become an active participant in global discussions on the ethical use of AI. By adopting this methodology, which provides a platform for establishing global standards for AI development, India is positioned to take on a leadership role in the world. The real benefit of the AI-RAM will come from policy measures that ensure future AI development in India is human-centred, trustworthy and aligned with democratic values.
References
- https://icaire.org/files/UNESCORam-en.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2134492&reg=3&lang=2
- https://www.facebook.com/unesconewdelhi/videos/unesco-is-set-to-launch-the-india-ai-readiness-assessment-methodology-ram-report/25955631820699516/
- https://www.unesco.org/ethics-ai/en/ram
- https://www.hindustantimes.com/india-news/unesco-meity-launch-exercise-to-assess-india-s-ai-readiness-101749188341803.html#
- https://www.manoramayearbook.in/current-affairs/india/2025/06/09/unesco-ai-readiness-assessment-methodology-ram.html