#FactCheck - Video Showing Sadhus in Ice Is Artificially Generated
Executive Summary
A video showing a group of Hindu ascetics (sadhus) allegedly performing intense penance while their bodies appear to be covered in ice is being widely shared on social media. Users are circulating the video as real and claiming that it represents an ancient tradition of Sanatan Dharma. CyberPeace research found the viral claim to be false: the video circulating on social media is not real but was generated using artificial intelligence (AI).
Claim
On social media platform Facebook, a user shared the viral video on January 16, 2026. The video shows several ascetics engaged in penance, with their bodies seemingly covered in ice. Users shared the video while claiming that it depicts an authentic spiritual practice rooted in Sanatan Dharma.
Links to the post, archive link, and screenshots can be seen below.

Fact Check:
To verify the authenticity of the viral claim, CyberPeace ran searches on Google using relevant keywords. However, no credible or reliable media reports supporting the claim were found. A close examination of the viral video raised suspicion that it may have been AI-generated. To verify this, the video was analysed using the AI detection tool Hive Moderation, which rated it 99 percent likely to be AI-generated.

In the next step of the research, the same video was analysed using another AI detection tool, Sightengine. The results again indicated that the video was 99 percent AI-generated.
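The detection step described above can be sketched programmatically. The snippet below is a minimal illustration, not CyberPeace's actual tooling: it queries Sightengine's public `check.json` endpoint, and the `genai` model name and `type.ai_generated` response field are assumptions to verify against the current Sightengine API documentation (Hive Moderation offers a comparable API).

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint and parameters, based on Sightengine's public API docs.
SIGHTENGINE_URL = "https://api.sightengine.com/1.0/check.json"

def ai_generated_score(response_json: dict) -> float:
    """Pull the AI-generated likelihood (0.0 to 1.0) out of a
    Sightengine-style response. The nested field names here are
    assumptions; check the provider's documentation."""
    return float(response_json.get("type", {}).get("ai_generated", 0.0))

def check_media_url(media_url: str, api_user: str, api_secret: str) -> float:
    """Submit a publicly reachable media URL to the detector and
    return its AI-generated likelihood."""
    params = urllib.parse.urlencode({
        "url": media_url,
        "models": "genai",  # assumed model name for AI-generation detection
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{SIGHTENGINE_URL}?{params}", timeout=30) as resp:
        return ai_generated_score(json.load(resp))

# Interpreting a stored response like the one behind this fact-check:
sample = {"status": "success", "type": {"ai_generated": 0.99}}
print(f"{ai_generated_score(sample) * 100:.0f} percent AI-generated")
```

Note that the score is a probability estimate produced by a classifier, not proof; that is why cross-checking with a second, independent detector matters.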

Conclusion
CyberPeace concludes that the video circulating on social media is not real. The viral video showing ascetics covered in ice was generated using artificial intelligence and does not depict an actual religious or spiritual practice.
Related Blogs
Introduction
With the advent of the internet, the world revealed the promise of boundless connection and the ability to bridge vast distances with a single click. However, as we wade through the complex layers of the digital age, we find ourselves facing a paradoxical realm where anonymity offers both liberation and a potential for unforeseen dangers. Omegle, a chat and video messaging platform, epitomizes this modern conundrum. Launched over a decade ago in 2009, it burgeoned into a popular avenue for digital interaction, especially amidst the heightened need for human connection spurred by the COVID-19 pandemic's social distancing requirements. Yet this seemingly benign tool of camaraderie tragically doubled as a contemporary incarnation of Pandora's box, unleashing untold risks upon the online privacy and security landscape. Omegle permanently shut down its operations in November 2023, after 14 years of service.
The Rise of Omegle
The foundations of this nebulous virtual dominion can be traced back to the very architecture of Omegle. Introduced to the world as a simple, anonymous chat service, Omegle has since evolved, encapsulating the essence of unpredictable human interaction. Users enter this digital arena, often with the innocent desire to alleviate the pangs of isolation or simply to satiate curiosity; yet they remain blissfully unaware of the potential cybersecurity maelstrom that awaits them.
As we commence a thorough inquiry into the psyche of Omegle's vast user base, we observe a digital diaspora with staggering figures. The platform, in May 2022, counted 51.7 million unique visitors, a testament to its sprawling reach across the globe. Delve a bit deeper, and you will uncover that approximately 29.89% of these digital nomads originate from the United States. Others, in varying percentages, flock from India, the Philippines, the United Kingdom, and Germany, revealing a vast, intricate mosaic of international engagement.
Such statistics beguile the uninformed observer with the illusion of demographic diversity. Yet we must proceed with caution, for while the platform boasts an impressive 63.91% male patronage, we cannot overlook the notable surge in female participation, which climbed to 36.09% during the pandemic era. More alarming still is the revelation, borne out of a BBC investigation in February 2021, that children as young as seven had trespassed into Omegle's adult sections, areas purportedly guarded by a minimum age limit of thirteen. How, we must ask, has underage presence burgeoned on this platform? A sobering answer points towards the platform's inadvertent marketing on TikTok, where youthful influencers freely promote their Omegle exploits under the #omegle hashtag.
The Omegle Allure
Omegle's allure is further compounded by its array of chat opportunities. It flaunts an adult section awash with explicit content, a moderated chat section that, despite the platform's own admissions, remains imperfectly patrolled, and an unmoderated section, its entry pasted with forewarnings of an 18+ audience. Beyond these lies the college chat option, a seemingly exclusive territory that only admits individuals armed with a verified '.edu' email address.
The effervescent charm of Omegle's interface, however, belies its underlying treacheries. Herein lies a digital wilderness where online predators and nefarious entities prowl, emboldened by the absence of requisite registration protocols. No email address, no unique identifier: nothing to support any notion of accountability or safeguarding. Within this unchecked reality, the young and unwary stand vulnerable, hapless prey for exploitation.
Threat to Users
Venture even further into Omegle's data fiefdom, and the spectre of compromise looms larger. Users, particularly the youth, risk exposure to unsuitable content, and their naivety might lead to the inadvertent divulgence of personal information. Skulking behind the facade of connection, opportunities abound for coercion, blackmail, and stalking, perils rendered more potent as every video exchange and text can be captured and recorded by an unseen adversary. The platform acts as a quasi-familiar confidante, all the while harvesting chat logs, cookies, IP addresses, and even sensory data, which, instead of being ephemeral, endure within Omegle's databases, readily handed to law enforcement and partnered entities under the guise of due diligence.
How to Combat the Threat
In mitigating these online gorgons, a multi-faceted approach is necessary. To thwart incursions into their digital footprint, adults seeking the thrills of Omegle's roulette would do well to cloak their activities with a Virtual Private Network (VPN), diligently pore over the privacy policy, deploy robust cybersecurity tools, and maintain an iron-clad reticence on personal disclosures. For children, the recommendation gravitates towards outright avoidance. Here, a constellation of parental control mechanisms awaits the vigilant guardian, ready to shield their progeny from the internet's darker alcoves.
Conclusion
In the final analysis, Omegle emerges as a microcosm of the greater web—a vast, paradoxical construct proffering solace and sociability, yet riddled with malevolent traps for the uninformed. As digital denizens, our traverse through this interconnected cosmos necessitates a relentless guarding of our private spheres and the sober acknowledgement that amidst the keystrokes and clicks, we must tread with caution lest we unseal the perils of this digital Pandora's box.
References:

A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
Cyber Peace Foundation’s research found the viral claim to be false. Our research revealed that the image of Abhishek Bachchan and Aishwarya Rai is not real, but AI-generated, and is being misleadingly shared as a genuine photograph.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated on social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, several visual inconsistencies raised suspicion that it had been artificially generated. To confirm this, we scanned the image using the AI detection tool Sightengine. According to the tool's analysis, the image was rated 84 percent likely to be AI-generated.

Additionally, we scanned the same image using another AI detection tool, HIVE Moderation. The results showed an even stronger indication, classifying the image as 99 percent AI-generated.
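The cross-checking step used here, with two independent detectors reporting 84 percent and 99 percent respectively, can be expressed as a simple consensus rule: treat content as likely AI-generated only when every detector agrees above a threshold, so a single tool's false positive cannot drive the conclusion. The helper and threshold below are hypothetical and purely illustrative.

```python
def consensus_verdict(scores, threshold=0.8):
    """Given AI-likelihood scores (0.0 to 1.0) from independent
    detectors, return a verdict string. Hypothetical helper; the
    0.8 threshold is illustrative, not an industry standard."""
    if not scores:
        return "insufficient evidence"
    if all(s >= threshold for s in scores):
        return "likely AI-generated"
    if all(s < threshold for s in scores):
        return "no strong AI signal"
    return "inconclusive: detectors disagree"

# The two results reported for the viral image:
print(consensus_verdict([0.84, 0.99]))  # → likely AI-generated
```

Requiring agreement rather than averaging is a deliberately conservative design choice: disagreement between detectors is itself useful information and is surfaced as "inconclusive" rather than hidden in a blended score.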

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies can create manipulated audio and video content, propagate political propaganda, defame individuals, and incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the election process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, giving way to the exploitation of content that already exists on the internet. One prominent example of misinformation flooding the internet is AI-powered bots inundating social media platforms with fake news at a scale and speed that makes it impossible for humans to track, let alone verify, what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation relate to elections, public health, and communal issues. These issues share one common factor: they evoke strong emotions in people, and as such can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability, and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI has entered the realm of autonomous content production and language creation, a development closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with media platforms and content, and has highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive amount of data generated, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect the freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Therefore, developing strategies that cater to this multilingual population is necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that is specific in its application to AI-generated content. It should include stricter penalties for the origination and dissemination of fake content, proportional to its consequences. The framework should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaborations between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs can empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
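One building block of the real-time detection tooling proposed above is claim matching: comparing a new post against a database of previously debunked claims so that recycled misinformation can be flagged immediately. The sketch below is a deliberately simple, stdlib-only illustration using bag-of-words cosine similarity; production systems use far stronger multilingual language models, and the sample claims and threshold are hypothetical.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_if_known(post: str, debunked: list, threshold: float = 0.5):
    """Return the closest previously debunked claim if the post is
    similar enough to it, else None. A toy stand-in for the claim-
    matching step in real fact-checking pipelines; the 0.5
    threshold is illustrative."""
    if not debunked:
        return None
    best = max(debunked, key=lambda c: cosine(vectorize(post), vectorize(c)))
    if cosine(vectorize(post), vectorize(best)) >= threshold:
        return best
    return None

# Hypothetical database of claims already debunked (e.g. by the
# fact-checks in this series):
debunked = [
    "video of sadhus covered in ice performing ancient penance",
    "abhishek bachchan and aishwarya rai photographed at kedarnath temple",
]
post = "Viral video shows sadhus covered in ice performing penance"
print(flag_if_known(post, debunked))  # → matches the first debunked claim
```

Even this crude lexical match illustrates the design point: once a claim has been debunked, its reappearances can be caught cheaply and at scale, leaving the expensive human verification effort for genuinely new claims.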
Conclusion
AI-generated misinformation presents a significant threat to India, and it is safe to say that the risks posed are at scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge of AI-generated misinformation and counteract it by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. AI technologies are misused by bad actors to create hyper-realistic fake content including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defense frameworks and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62