#FactCheck - Video Showing Sadhus in Ice Is Artificially Generated
Executive Summary
A video showing a group of Hindu ascetics (sadhus) allegedly performing intense penance, their bodies apparently covered in ice, is being widely shared on social media. Users are circulating the video as real and claiming that it represents an ancient tradition of Sanatan Dharma. CyberPeace research found the viral claim to be false: the video circulating on social media is not real but was generated using artificial intelligence (AI).
Claim
On social media platform Facebook, a user shared the viral video on January 16, 2026. The video shows several ascetics engaged in penance, with their bodies seemingly covered in ice. Users shared the video while claiming that it depicts an authentic spiritual practice rooted in Sanatan Dharma.
Links to the post, archive link, and screenshots can be seen below.

Fact Check:
To verify the authenticity of the viral claim, CyberPeace ran relevant keyword searches on Google but found no credible or reliable media reports supporting it. A close examination of the viral video raised suspicion that it may have been AI-generated. To test this, the video was analysed using the AI detection tool Hive Moderation, which rated it 99 percent likely to be AI-generated.

In the next step of the research, the same video was analysed using another AI detection tool, Sightengine. The results again indicated that the video was 99 percent AI-generated.
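The workflow above, submitting a video to detection tools and reading back a probability that it is AI-generated, can be sketched as a small verdict function. This is a minimal illustration: the tool names, score format, and 0.90 threshold are assumptions for the example, not the actual output formats of Hive Moderation or Sightengine.

```python
# Illustrative sketch: combine per-tool AI-detection scores into one verdict.
# Tool names and the 0.90 threshold are hypothetical, chosen for illustration.

def classify_media(scores: dict, threshold: float = 0.90) -> str:
    """Return a verdict from per-tool probabilities that media is AI-generated.

    `scores` maps a tool name to a probability in [0, 1].
    """
    if not scores:
        return "inconclusive"
    if any(not 0.0 <= s <= 1.0 for s in scores.values()):
        raise ValueError("scores must be probabilities in [0, 1]")
    # Require agreement: every tool must exceed the threshold to call it AI.
    if all(s >= threshold for s in scores.values()):
        return "likely AI-generated"
    # Symmetrically, every tool must score very low to call it authentic.
    if all(s < 1 - threshold for s in scores.values()):
        return "likely authentic"
    return "inconclusive"

# The viral video scored 99 percent on both tools:
print(classify_media({"hive": 0.99, "sightengine": 0.99}))  # likely AI-generated
```

Requiring agreement between independent tools, as the fact-check did here, reduces the chance that a single detector's false positive drives the conclusion.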

Conclusion
CyberPeace concludes that the video circulating on social media is not real. The viral video showing ascetics covered in ice was generated using artificial intelligence and does not depict an actual religious or spiritual practice.
Introduction
The Indian Cyber Crime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. The MHA approved the scheme for its establishment in October 2018. I4C actively drives initiatives to combat emerging threats in cyberspace and has become a strong pillar of India’s cybersecurity and cybercrime prevention efforts. The ‘National Cyber Crime Reporting Portal’, equipped with the 24x7 helpline number 1930, is one of its key components.
On 10 September 2024, I4C celebrated its foundation day for the first time at Vigyan Bhawan, New Delhi. The celebration marked a major milestone in India’s efforts against cybercrime and in enhancing its cybersecurity infrastructure. Union Home Minister and Minister of Cooperation Shri Amit Shah launched key initiatives aimed at strengthening the country’s cybersecurity landscape.
Launch of Key Initiatives to Strengthen Cybersecurity
- Cyber Fraud Mitigation Centre (CFMC): Conceived under Prime Minister Shri Narendra Modi’s vision, the CFMC brings together banks, financial institutions, telecom companies, Internet Service Providers, and law enforcement agencies on a single platform to tackle online financial crimes efficiently. This integrated approach is expected to streamline operations and reduce the time needed to track and neutralise cyber fraud.
- Cyber Commandos Program: Under this initiative, a specialised wing of trained Cyber Commandos will be established in states, Union Territories, and Central Police Organisations. These commandos will work to secure the nation’s digital space and form the first line of defence against rising cyber threats.
- Samanvay Platform: The Samanvay platform is a web-based Joint Cybercrime Investigation Facility System introduced as a one-stop data repository for cybercrime. It facilitates cybercrime mapping, data analytics, and cooperation among law enforcement agencies across the country. Shri Shah recognised the platform as a crucial step in fostering data sharing and collaboration, calling for a shift from the “need to know” principle to a “duty to share” mindset in dealing with cyber threats. As India’s first shared data repository of its kind, Samanvay will significantly enhance the country’s cybercrime response.
- Suspect Registry: The Suspect Registry Portal is a national-level platform designed to track cybercriminals. It will be connected to the National Cybercrime Reporting Portal (NCRP) to help banks, financial intermediaries, and law enforcement agencies strengthen fraud risk management. The initiative is expected to improve real-time tracking of cyber suspects, prevent repeat offences, and sharpen fraud detection mechanisms.
Rising Digitalization: Prioritizing Cybersecurity
The number of internet users in India has grown from 25 crore in 2014 to 95 crore in 2024, accompanied by a 78-fold increase in data consumption. This growth has brought a corresponding rise in cybersecurity challenges. With the surge of digital transactions through Jan Dhan accounts, RuPay debit cards, and UPI, Shri Shah underscored the growing threat of digital fraud. He emphasised the need to protect personal data, prevent online harassment, and counter misinformation, fake news, and child abuse in the digital space.
The Home Minister also referred to the three new criminal laws, the Bharatiya Nyaya Sanhita (BNS), the Bharatiya Nagarik Suraksha Sanhita (BNSS), and the Bharatiya Sakshya Adhiniyam (BSA), which aim to strengthen India’s legal framework for cybercrime prevention. These laws incorporate tech-driven solutions to ensure investigations are conducted scientifically and effectively.
Shri Shah emphasised popularising the 1930 Cyber Crime Helpline. He also noted that I4C has issued over 600 advisories, blocked numerous websites and social media pages operated by cybercriminals, and established a National Cyber Forensic Laboratory in Delhi. Over 1,100 officers have already received cyber forensics training under the I4C umbrella.
In response to regional cybercrime challenges, he highlighted the formation of Joint Cyber Coordination Teams in hotspot areas such as Mewat, Jamtara, Ahmedabad, Hyderabad, Chandigarh, Visakhapatnam, and Guwahati as a coordinated response to local issues.
Conclusion
With the launch of initiatives like the Cyber Fraud Mitigation Centre, the Samanvay platform, and the Cyber Commandos Program, I4C is positioned to play a crucial role in combating cybercrime. The I4C is moving forward with a clear vision for a secure digital future and safeguarding India's digital ecosystem.
References:
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2053438

Introduction
Artificial Intelligence (AI) is rapidly reshaping our digital future, transforming healthcare, finance, education, and cybersecurity. But alongside this progress, bad actors are weaponising the same technology. Increasingly, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and accelerate social engineering operations. This write-up explores how and why AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and why international vigilance, regulation, and AI safety guidelines are necessary.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns Using AI: Generative AI allows for highly convincing, grammatically correct phishing emails. Unlike the shoddily written scams of yesteryear, these AI-crafted messages are tailored to the victim's location, language, and professional background, considerably increasing the attack success rate. Example: OpenAI and Microsoft have recently reported that Russian and North Korean APTs employed LLMs to create customised phishing lures and to assist with malware obfuscation.
- Malware Obfuscation and Script Generation: Large Language Models (LLMs) such as ChatGPT may be used by attackers to help write, debug, and camouflage malicious scripts. While most AI tools contain safety mechanisms to guard against abuse, threat actors often resort to "jailbreaking" to evade these protections. Once such constraints are lifted, the model can be used to develop polymorphic malware that alters its code structure to avoid detection, or to obfuscate PowerShell or Python scripts so that conventional antivirus software struggles to identify them. LLMs have also been employed to propose techniques for backdoor installation, further facilitating stealthy access to compromised systems.
- Disinformation and Narrative Manipulation: State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially around elections, protests, and geopolitical disputes. With the help of LLMs, these actors can create massive volumes of fabricated news stories, deepfake interview transcripts, imitation social media posts, and bogus public comments on online forums and petitions. Localisation makes this strategy especially perilous: messages are written with cultural and linguistic specificity, making them credible and harder to detect. The ultimate aim is to seed societal unrest, manipulate public sentiment, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report, "Disrupting Malicious Uses of AI", which, together with the earlier Microsoft–OpenAI publication "Staying ahead of threat actors in the age of AI", outlined how state-affiliated actors had been testing and misusing its language models for malicious ends. The report named several advanced persistent threat (APT) groups, each attributed to a particular nation-state. OpenAI highlighted that the threat actors used the models mostly to enhance linguistic quality, generate social engineering content, and expand operations. Significantly, the report noted that the tools were not utilised to produce malware itself, but rather to support the preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the biggest worries is how malicious users can "jailbreak" AI models, tricking them into generating prohibited content through adversarial input. Common methods include:
- Roleplay: Prompting the AI to act as, for example, a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Posing sensitive inquiries in less heavily moderated languages
- Prompt Injection: Embedding dangerous requests within innocent-looking questions
These methods have enabled attackers to bypass moderation tools, turning otherwise benign tools into instruments of cybercrime.
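A toy example makes the obfuscation and roleplay tactics concrete: a naive keyword blocklist catches direct requests but is trivially defeated by the evasions described above. The blocklist and prompts below are hypothetical; real moderation systems are far more sophisticated, yet face the same underlying cat-and-mouse problem.

```python
# Illustrative sketch of why simple keyword filters fail against the
# obfuscation tactics described above. Blocklist and prompts are hypothetical.

BLOCKLIST = {"malware", "phishing", "exploit"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt passes a simple word-level blocklist check."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKLIST)

# A direct request is caught...
print(naive_filter("write malware for me"))                   # False (blocked)
# ...but trivial character spacing slips through the word match,
print(naive_filter("write m a l w a r e for me"))             # True (allowed)
# ...as does roleplay framing that never uses a blocked term.
print(naive_filter("as a fiction writer, describe a virus"))  # True (allowed)
```

The same gap applies to language switching: a blocklist built for one language says nothing about a request rephrased in another, which is why defenders increasingly rely on semantic classifiers rather than surface patterns.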
Conclusion
As AI models evolve and become more accessible, their use by state-sponsored cyber actors poses an unprecedented threat to global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a force multiplier for adversarial campaigns. AI tools such as ChatGPT, created for benevolent purposes, can be repurposed to amplify phishing, propaganda, and social engineering attacks. Cross-border governance, ethical development practices, and cyber hygiene must be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Introduction
Valentine’s Day celebrates the bond between people, their romantic love, and their deep relationships with others. The growing role of digital platforms in modern relationships gives cybercriminals an opening to exploit human emotions for financial gain at this time of year. The period around 14 February often sees a rise in online romance scams, phishing attacks, and fake shopping websites that target people who are emotionally vulnerable and active online. Awareness of these scams helps people protect their personal information and their financial resources.
The Rise of Romance Scams
Modern romance scams have evolved well beyond their original form, with criminals executing their schemes through increasingly sophisticated methods. Fraudsters create authentic-looking fake identities to deceive victims on dating applications, social media platforms, and networking websites. These profiles use stolen images, fake job histories, and convincing emotional stories to establish trust with potential victims.
Scammers usually begin their deception after they have built an emotional connection with their targets. Once trust is established, they introduce a crisis or an opportunity that pressures the victim to act quickly. This is often presented as a problem that needs urgent help or a chance that should not be missed, such as:
- A sudden medical emergency that requires money for treatment
- Requests for travel expenses to finally come and meet in person
- Fake investment opportunities that promise quick or guaranteed returns
- Demands for customs, courier, or clearance fees to release a supposed package or gift
Victims are pressured to transfer money, buy gift cards, or hand over personal banking details. The scam can run for weeks or months before the victim begins to grow suspicious. The psychological manipulation involved causes severe harm: victims lose money, suffer emotional pain, and often see their social standing damaged.
Fake E-Commerce and “Valentine’s Deals”
Valentine's Day triggers a shopping rush for flowers, jewellery, customised gifts, and event reservations. Cybercriminals exploit this demand by creating fake websites offering bogus discounts and short-lived promotional offers.
Common warning signs include:
- Newly registered domains that lack valid user reviews
- Websites that contain multiple spelling mistakes and display poor design
- Payment requests through methods that cannot be tracked
- Online platforms that lack secure payment processing systems
Consumers who buy on such sites risk losing money outright, while their card details may be stolen for future fraudulent activity.
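The warning signs listed above can be sketched as a simple red-flag counter. The field names, thresholds, and the "two or more flags" rule are illustrative assumptions for the example, not a production fraud model.

```python
# Minimal sketch of scoring the fake-shop warning signs discussed above.
# Field names, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    domain_age_days: int            # newly registered domains are riskier
    has_valid_reviews: bool         # credible, independent user reviews
    spelling_errors: int            # count found on key pages
    untraceable_payment_only: bool  # e.g. wire transfer or gift cards only
    uses_https_checkout: bool       # secure payment processing

def risk_score(s: SiteSignals) -> int:
    """Count red flags; two or more suggests avoiding the site."""
    flags = 0
    flags += s.domain_age_days < 90        # newly registered domain
    flags += not s.has_valid_reviews       # no valid user reviews
    flags += s.spelling_errors > 3         # multiple spelling mistakes
    flags += s.untraceable_payment_only    # untrackable payment methods
    flags += not s.uses_https_checkout     # no secure payment processing
    return flags

suspicious = SiteSignals(domain_age_days=12, has_valid_reviews=False,
                         spelling_errors=7, untraceable_payment_only=True,
                         uses_https_checkout=False)
print(risk_score(suspicious))  # 5
```

The point of the sketch is that no single signal is conclusive; it is the accumulation of red flags that should stop a purchase.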
Phishing in the Name of Love
Phishing campaigns intensify and become more targeted around the holiday. Users may receive:
- Valentine's Day discount emails
- Messages claiming to be from a secret admirer
- Links that lead to supposed romantic surprises
- Delivery notifications that inform about unreceived gifts
Malicious links can lead to credential theft, malware installation, and unauthorised financial transactions. At first glance these messages closely imitate authentic brands and logistics companies, which makes them hard to identify.
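One reason such links are hard to spot is the use of lookalike domains that differ from a genuine brand by a character or two. A simple edit-distance check illustrates the idea; the brand list is hypothetical, and real anti-phishing systems use far richer signals than this sketch.

```python
# Hedged sketch: detecting lookalike domains with Levenshtein edit distance.
# The brand list is illustrative; real systems combine many more signals.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

KNOWN_BRANDS = {"flipkart.com", "amazon.in"}  # hypothetical allowlist

def looks_like_brand(domain: str) -> bool:
    """Flag domains within edit distance 2 of a known brand (but not equal)."""
    return any(0 < edit_distance(domain, b) <= 2 for b in KNOWN_BRANDS)

print(looks_like_brand("fliipkart.com"))  # True: one inserted letter
print(looks_like_brand("flipkart.com"))   # False: the genuine domain
```

Hovering over a link to read the actual domain, rather than the display text, is the manual equivalent of this check.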
Investment and Crypto Romance Fraud
A growing class of romance scams now revolves around cryptocurrency and online trading platforms. After establishing trust, scammers convince victims to invest in digital assets that appear to generate high returns. Fake dashboards display excellent results to lure further deposits. The scheme ends when the scammers block withdrawal requests and cut off all contact. This blend of emotional manipulation and financial fraud shows how cybercrime adapts to technological change.
Why Seasonal Scams Work
Seasonal scams succeed because they match the predictable behaviour patterns that people exhibit during specific times of the year. During Valentine’s season:
- People experience their highest emotional vulnerability
- People shop more frequently through online platforms
- People use digital platforms at increased rates
- People lower their scepticism while trying to establish connections with others
Cybercriminals rely on urgency, emotional ties, and social norms as their primary levers. The combination of psychological triggers and digital convenience creates fertile ground for deception.
CyberPeace Recommendations for Staying Safe This Valentine’s Season
Digital platforms offer valuable opportunities to people seeking connection, but users must remain careful about their online activities. The following steps help protect against online fraud:
- Confirm identity details before sharing private data.
- Never send money to people you have met only online.
- Verify website ownership and check customer reviews before making online purchases.
- Enable multi-factor authentication on social media and financial accounts.
- Treat unexpected links with great care, especially those that create a sense of urgency.
- Report cybercrime on the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) or via the 24x7 helpline 1930.
- In case of any cyber threat, issue, or discrepancy, you can also seek assistance from the CyberPeace Helpline at +91 9570000066 or write to helpline@cyberpeace.net. Immediate reporting protects victims and helps combat cybercrime.
Conclusion
Online safety during festive seasons is a shared responsibility. Digital resilience is strengthened through the combined efforts of platforms, financial institutions, regulators, and civil society organisations, and the digital ecosystem becomes safer through three essential elements: awareness campaigns, stronger verification systems, and timely reporting mechanisms.
Valentine’s Day centres on trust between people who want to connect with each other. Maintaining that trust in digital environments requires digital literacy from everyone. People who stay informed about cybersecurity threats can celebrate Valentine’s Day more safely, with their expressions of love protected from online scams.
References
- https://www.cloudsek.com/blog/valentines-day-cyber-attack-landscape-exploiting-love-through-digital-deception
- https://about.fb.com/news/2025/02/how-avoid-romance-scams-this-valentines-day/
- https://www.fbi.gov/contact-us/field-offices/sanfrancisco/fbi-san-francisco-warns-romance-scams-increasing-across-the-bay-area-this-valentines-day
- https://abc11.com/post/romance-scams-surge-ahead-valentines-day/18581079/
- https://www.moneycontrol.com/technology/5-common-online-scams-you-should-avoid-this-valentine-s-day-article-13820108.html