#FactCheck: Debunking the Edited Image Claim of PM Modi with Hafiz Saeed
Executive Summary:
A photoshopped image circulating online suggests Prime Minister Narendra Modi met with militant leader Hafiz Saeed. The actual photograph features PM Modi greeting former Pakistani Prime Minister Nawaz Sharif during a surprise diplomatic stopover in Lahore on December 25, 2015.
The Claim:
A widely shared image on social media purportedly shows PM Modi meeting Hafiz Saeed, a declared terrorist. The claim implies Modi is hostile towards India or aligned with terrorists.

Fact Check:
Through our research and a reverse image search, we found that the Press Information Bureau (PIB) had tweeted about the visit on 25 December 2015, noting that PM Narendra Modi was warmly welcomed by then-Pakistani PM Nawaz Sharif in Lahore. The tweet included several images of the original meeting between Modi and Sharif, taken from various angles. On the same day, PM Modi also posted a tweet stating he had spoken with Nawaz Sharif and extended birthday wishes. Additionally, there are no credible reports of any meeting between Modi and Hafiz Saeed, further confirming that the viral image is digitally altered.


In our further research, we found an identical photo with former Pakistani Prime Minister Nawaz Sharif in place of Hafiz Saeed. This post was shared by Hindustan Times on X on 26 December 2015, confirming that the viral image has been manipulated.
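For readers curious about how such image comparisons can be checked in practice, below is a minimal, illustrative sketch (not our exact workflow) using the open-source Python libraries Pillow and imagehash; the file names are hypothetical. Perceptual hashes of two photos that share the same composition are nearly identical, while a heavily edited region shifts the hash.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file names: the circulating image and the candidate original photo
viral_hash = imagehash.phash(Image.open("viral_image.jpg"))
original_hash = imagehash.phash(Image.open("pib_original.jpg"))

# Subtracting two hashes gives a Hamming distance:
# 0 means visually identical; a small value means the same underlying photo
# (e.g. recompressed or resized); a large value suggests heavy editing or a different scene.
distance = viral_hash - original_hash
print(f"Perceptual hash distance: {distance}")
```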
Conclusion:
The viral image claiming to show PM Modi with Hafiz Saeed is digitally manipulated. A reverse image search and official posts from the PIB and PM Modi confirm the original photo was taken during Modi’s visit to Lahore in December 2015, where he met Nawaz Sharif. No credible source supports any meeting between Modi and Hafiz Saeed, clearly proving the image is fake.
- Claim: A viral image shows PM Modi meeting militant leader Hafiz Saeed
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary
A recent viral message on social media platforms such as X and Facebook claims that the Indian Government will start charging 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message includes a newspaper clipping and a video that actually date back to 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, we found that a video from ABP News, originally aired on March 20, 2018, was part of a fact-checking segment that debunked the rumour of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying an 18% GST on all "Good Morning" texts sent through mobile phones from 1 April this year, with the tax added to monthly mobile bills.




Fact Check:
When we received the news, we first ran some relevant keyword searches. We found a Facebook video by ABP News titled 'Viral Sach: Govt to impose 18% GST on sending good morning messages on WhatsApp?'


We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article with the tagline "Bura na mano, Holi hai." The recently viral image is a cutout from that ABP News broadcast dating back to 2018.
Hence, the image that is spreading widely is fake and misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; the clipping originates from a comic article published by Navbharat Times on 2 March 2018

Introduction
In September 2025, social media feeds were flooded with striking vintage-style saree portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to the "Nano Banana" AI tool of Google Gemini, uploading their ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding risks of privacy infringement, unauthorised data sharing, and threats related to deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the description specified. The transformed AI portraits are then shared by users on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
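To make the mechanics concrete, here is a minimal sketch of what such an image-editing call can look like using Google's google-genai Python SDK. The model identifier, file name, and prompt are illustrative assumptions, not the exact configuration behind the trend, and the API surface may change over time.

```python
# pip install google-genai pillow
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # assumes a valid Gemini API key

selfie = Image.open("selfie.jpg")  # hypothetical input photo
prompt = (
    "Transform this photo into a 1990s Bollywood-style portrait: "
    "flowing chiffon saree, golden-hour light, grainy film texture."
)

# Send the photo plus a text prompt to an image-capable Gemini model
# (the model name below is an assumption; check the current model list before use).
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[selfie, prompt],
)

# Save any image parts returned in the response
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("saree_portrait.png", "wb") as out:
            out.write(part.inline_data.data)
```

The very simplicity of this flow is what drives the privacy concern: a full-resolution facial photograph leaves the device and is processed on third-party infrastructure.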
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participation in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms about how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several serious consequences for individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by attackers to generate synthetic identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another major threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications masquerading as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services in order to trick users into divulging sensitive information. Long-term privacy infringement follows from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A Mozilla Foundation study in 2023 found that 80% of popular AI apps either had non-transparent data policies or obscured users' ability to opt out of data gathering. This opens the door for personal photographs to be shared with unknown entities for commercial use. Exploitation of training data involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google provides options to turn off data sharing within privacy settings, most users are unaware of these controls. Cross-platform data integration heightens privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. The inadequacy of informed consent remains a major problem, with users joining trends without understanding the full extent of the information they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:-
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built around facial recognition. Instead, users can experiment with stock images or non-identifiable pictures, which satisfy the tool's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit who can access personal data and content.
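As a simple illustration of the first point, a photo can be downscaled and stripped of metadata before it is shared with any third-party tool. The sketch below uses the Pillow library and hypothetical file names; it reduces resolution and avoids copying the EXIF block (location, device, timestamps), though it obviously cannot remove the face itself.

```python
# pip install pillow
from PIL import Image

def sanitise_photo(src_path: str, dst_path: str, max_side: int = 1024) -> None:
    """Downscale a photo and drop its EXIF metadata before it leaves the device."""
    img = Image.open(src_path)
    img.thumbnail((max_side, max_side))  # shrink in place, preserving aspect ratio
    # Pillow does not carry EXIF over on save unless it is passed explicitly,
    # so the copy written below contains pixels only, not camera/location metadata.
    img.save(dst_path)

sanitise_photo("selfie.jpg", "selfie_for_upload.jpg")  # hypothetical file names
```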
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly with respect to uploading personal data. Companies should carry out due diligence before adopting any commercially available AI product to ensure that its privacy and security standards meet the organisation's requirements. Training should educate employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, allow real-time detection with an accuracy higher than 95%. Blockchain-based content verification can be used to create tamper-proof records of original digital assets, making it far harder to pass off deepfake content as original.
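At its core, the tamper-proofing described above rests on recording a cryptographic fingerprint of the original asset (on a blockchain or any append-only log) and re-checking it later. A minimal sketch with Python's standard hashlib, using hypothetical file names, shows the fingerprint-and-verify step; the on-chain anchoring itself is out of scope here.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file: a tamper-evident fingerprint of the asset."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

# At publication time, record the digest of the original (e.g. on a ledger).
registered = fingerprint("original_clip.mp4")

# Later, anyone can verify a circulating copy against the registered digest:
# even a single altered frame or byte produces a completely different digest.
if fingerprint("circulating_copy.mp4") == registered:
    print("Copy matches the registered original.")
else:
    print("Copy has been altered or is not the registered original.")
```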
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Digital literacy programmes should be implemented to equip users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement by reporting suspected AI-enabled crimes, thereby helping to combat malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory frameworks should require transparent data policies in AI applications and give users clear rights and choices over their biometric and other personal data. Indigenous AI development that addresses India-centric privacy concerns should be promoted, ensuring that AI models are built in a secure, transparent, and accountable manner. On cross-border AI security concerns, international cooperation is needed to set common standards for the ethical design, production, and use of AI.
Viral AI phenomena such as the saree-editing trend illustrate both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers must take seriously. By putting comprehensive protection mechanisms in place and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

Introduction
Embark on a groundbreaking exploration of the Darkweb Metaverse, a revolutionary fusion of the enigmatic dark web with the immersive realm of the metaverse. Unveiling a decentralised platform championing freedom of speech, the Darkverse promises unparalleled diversity of expression. However, as we delve into this digital frontier, we must tread cautiously, acknowledging the security risks and societal challenges that accompany the metaverse's emergence.
The Dark Metaverse is a unique combination of the mysterious dark web and the immersive digital world known as the metaverse. Imagine a place where users may participate in decentralised social networking, communicate anonymously, and freely express a range of viewpoints. It aims to provide an alternative to traditional online platforms, emphasizing privacy and freedom of speech. Nevertheless, it also brings new kinds of criminality and security issues, so it's important to approach this digital frontier cautiously.
In the vast expanse of the digital cosmos, there exists a realm that remains shrouded in mystery to the casual netizen: the dark web. The surface web, the familiar territory of Google searches and social media feeds, constitutes a mere 5 per cent of the information iceberg floating in an ocean of data. Beneath this surface lie the deep web and the dark web, comprising the remaining 95 per cent, a staggering figure that beckons the brave and curious to explore its abyssal depths.
Imagine a platform that not only ventures into these depths but intertwines them with the emerging concept of the metaverse, a digital realm that transcends the limitations of the physical world. This is the vision of the Darkweb Metaverse, the world's first endeavour to harness the enigmatic depths of the dark web and fuse them into the immersive experience of the metaverse.
As per Internet User Statistics 2024, there are over 5.3 billion Internet users in the world, meaning over 65% of the world's population has access to the Internet. The Internet is used for a variety of services: news, entertainment, and communication, to name a few. Citizens of developed countries depend on the World Wide Web for a multitude of daily tasks such as academic research, online shopping, e-banking, accessing news, and even ordering food online; the Internet has thus become an integral part of our daily lives.
Surface Web
This layer of the internet is used by the general public on a daily basis. Its contents are accessed through standard web browsers such as Google Chrome and Mozilla Firefox, and are indexed by conventional search engines.
Deep Web
This is the second layer of the internet; its contents are not indexed by search engines. Content that is unavailable on the surface web is considered part of the deep web, which comprises a collection of various types of confidential information. Schools, universities, institutes, government offices and departments, multinational companies (MNCs), and private companies store their databases and website-oriented server information, such as account usernames or IDs, passwords and login credentials, premium subscription data, and monetary transaction records, on intranets that form part of the deep web.
Dark Web
It is the least explored part of the internet and is considered a hub of illicit activity. The contents of the dark web are not indexed by search engines, and specific software is required to access this layer, namely the TOR (The Onion Router) browser, which cloaks the identity of its users and makes them anonymous. Dark web websites are identified by the .onion TLD (Top Level Domain). Owing to the anonymity this layer provides, various criminal activities take place there, ranging from drug trading, arms trading, and the sale of stolen PayPal account details to websites offering child sexual abuse material.
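For the technically curious, the "specific software" requirement is essentially a proxying arrangement: a local Tor client exposes a SOCKS proxy, and traffic sent through it is routed (and DNS-resolved) inside the Tor network. The sketch below, using Python's requests library with SOCKS support, assumes a Tor client is already running on its default port and simply checks that traffic is being routed through Tor; it is illustrative only.

```python
# pip install "requests[socks]"   (also requires a local Tor client on 127.0.0.1:9050)
import requests

# The "socks5h" scheme (note the trailing h) makes DNS resolution happen inside Tor,
# which is what allows .onion hostnames to be resolved at all.
tor_proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The Tor Project's check endpoint reports whether a request really arrived via Tor.
resp = requests.get("https://check.torproject.org/api/ip", proxies=tor_proxies, timeout=30)
print(resp.json())  # e.g. {"IsTor": true, "IP": "..."} when routing works
```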
The Darkverse
The Darkweb Metaverse is not a mere novelty; it is a revolutionary step forward, a decentralised social networking platform that stands in stark contrast to centralised counterparts like YouTube or Twitter. Here, the spectre of censorship is banished, and the freedom of speech reigns supreme.
The architectonic prowess behind the Darkweb Metaverse is formidable. The development team is a coalition of former infrastructure maestros from Theta Network and virtuosos of metaverse design, bolstered by backend engineers from Gensokishi Metaverse. At the helm is a CEO whose tenure at the apex of large Japanese companies has endowed him with a profound understanding of the landscape, setting a solid foundation for the platform's future triumphs.
Financially, the dark web has been a flourishing underworld, with revenues ranging from $1.5 billion to $3.1 billion between 2020 and 2022. Darkverse, with its emphasis on user-friendliness and safety, is poised to capture a significant portion of this user base. The platform serves as a truly decentralised amalgamation of the Dark Web, Metaverse, and Social Networking Services (SNS), with a mission to provide an unassailable bastion for freedom of speech and expression.
The Darkweb Metaverse is not merely a sanctuary for anonymity and privacy; it is a crucible for the diversity of expression. In a world where centralised platforms can muzzle voices, Darkverse stands as a bulwark against such suppression, fostering a community where a kaleidoscope of opinions and information thrives. The ease of use is unparalleled—a one-time portal that obviates the need for third-party software to access the dark web, protecting users from the myriad risks that typically accompany such ventures.
Moreover, the platform's ability to verify the authenticity of information is a game-changer. In an era rife with misinformation, especially surrounding contentious issues like war, Darkverse offers a measure of assurance, as the source of information can be scrutinised for its accuracy.
Integrating Technologies
The metaverse will be an immersive iteration of the internet, decked with interactive features of emerging technologies such as artificial intelligence, virtual and augmented reality, 3D graphics, 5G, holograms, NFTs, blockchain and haptic sensors. Each building block, while innovative, carries its own set of risks—vulnerabilities and design flaws that could pose a serious threat to the integrated meta world.
The dark web's very nature of interaction through avatars makes it a perfect candidate for a metaverse iteration. Here, in this anonymous world, commercial and personal engagements occur without the desire to unveil real identities. The metaverse's DNA is well-suited to the dark web, presenting a formidable security challenge as it is likely to evolve more rapidly than its real-world counterpart.
While Meta (formerly Facebook) is a prominent entity developing the metaverse, other key players include NVIDIA, Epic Games, Microsoft, Apple, Decentraland, Roblox Corporation, Unity Software, Snapchat, and Amazon. These companies are integral to constructing the vast network of real-time 3D virtual worlds where users maintain their identities and payment histories.
Yet, with innovation comes risk. The metaverse will necessitate police stations, not as a dystopian oversight but as a means to address the inherent challenges of a new digital society. In India, for instance, the integration of law enforcement within the metaverse could revolutionize the public's interaction with the police, potentially increasing the reporting of crimes.
The Perils within the Darkverse
The metaverse will also be a fertile ground for crimes of a new dimension—identity theft, digital asset hijacking, and the influence of metaverse interactions on real-world decisions. With a significant portion of social media profiles potentially being fraudulent, the metaverse amplifies these challenges, necessitating robust identity access management.
The integration of NFTs into the metaverse ecosystem is not without its security concerns, as token breaches and hacks remain a persistent threat. The metaverse's parallel economy will test the developers' ability to engender trust, a Herculean task that will challenge the boundaries of national economies.
Moreover, the metaverse will be a crucible for social engineering-based attacks, where the real-time and immersive nature of interactions could make individuals particularly vulnerable to deception and manipulation. The potential for early-stage fraud, such as the hyping and selling of virtual assets at unrealistic prices, is a stark reality.
The metaverse also presents numerous risks, particularly for children and adolescents who may struggle to distinguish between virtual and real worlds. The implications of such immersive experiences are intense, with the potential to influence behaviour in hazardous ways.
Security risks extend to the technologies supporting the metaverse, such as virtual and augmented reality. The exploitation of biometric data, the bridging of virtual and real worlds, and the tendency for polarisation and societal isolation are all issues requiring immediate attention.
A Way Forward
As we stand on the cusp of this new digital frontier, it is evident that the metaverse, despite its reliance on blockchain, is not immune to the privacy and security breaches that have plagued conventional IT infrastructure. Data security, identity theft, network security, and ransomware attacks are just a few of the challenges along the way.
In this quest into the unknown, the Darkweb Metaverse radiates with the promise of freedom and the thrill of discovery. Yet, as we navigate these shadowy depths, we must remain vigilant, for the very technologies that empower us also carry the seeds of our vulnerabilities. The metaverse is not just a new chapter in the story of the internet; it is an entirely new narrative, one that we must write with caution and care.
References
- https://spores.medium.com/the-worlds-first-platform-to-deploy-the-dark-web-in-the-metaverse-releap-ido-on-spores-launchpad-a36387b184de
- https://www.makeuseof.com/how-hackers-sell-trade-data-in-metaverse/
- https://www.demandsage.com/internet-user-statistics/#:~:text=There%20are%20over%205.3%20billion,has%20access%20to%20the%20Internet.