#FactCheck - False Claim about Video of Sadhu Lying in Fire at Mahakumbh 2025
Executive Summary:
Recently, our team came across a widely viewed video on social media that appears to show a saint lying in a fire, with captions claiming it is part of a ritual during the ongoing Mahakumbh 2025. After thorough research, we found these claims to be false. The video is unrelated to Mahakumbh 2025 and comes from a different time and place, a typical example of old footage being recirculated out of its original context.

Claim:
A video has gone viral on social media claiming to show a saint lying in fire during Mahakumbh 2025 and suggesting that the act is one of the traditional rituals of the ongoing festival. The claim falsely implies that such acts are a standard part of the sacred ceremonies held during the Mahakumbh.

Fact Check:
Upon receiving the post, we conducted a reverse image search on key frames extracted from the video and traced it to an old article. Further research revealed that the original footage dates from 2009, when Ramababu Swamiji, aged 80, lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such acts are not part of the Mahakumbh rituals. Reputable sources were also consulted to cross-verify this information, effectively debunking the claim and underscoring the importance of verifying content before believing it.
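As an illustration of this workflow, the snippet below is a minimal sketch of how key frames can be pulled from a clip and fingerprinted for a reverse image search. It assumes the OpenCV (cv2), Pillow, and imagehash Python packages are installed; the file name viral_video.mp4 is a hypothetical placeholder, and real fact-checking workflows vary.

```python
# Minimal sketch: extract key frames from a video and compute perceptual
# hashes that can feed a reverse image search workflow.
# Assumes OpenCV (cv2), Pillow, and imagehash are installed;
# "viral_video.mp4" is a hypothetical file name.
import cv2
import imagehash
from PIL import Image

def extract_key_frames(path, every_n_seconds=2):
    """Sample one frame every `every_n_seconds` seconds from the video."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS is unavailable
    step = max(1, int(fps * every_n_seconds))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            # OpenCV returns BGR arrays; convert to RGB for Pillow.
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        index += 1
    cap.release()
    return frames

for i, frame in enumerate(extract_key_frames("viral_video.mp4")):
    frame.save(f"key_frame_{i}.png")      # upload these to a reverse image search
    print(i, imagehash.phash(frame))      # perceptual hash to match near-duplicates
```

The perceptual hashes make it easy to spot near-duplicate frames in archived copies of the same footage, which is how recycled videos are often traced back to their original upload date.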


For more clarity, the YouTube video attached below addresses the claim directly and is a reminder to verify such content before accepting it as true.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. As AI-generated voice clones and videos proliferate on the Internet and social media, it has become important to differentiate genuine online content from material that has been manipulated or synthetically generated. Sophisticated AI algorithms can manipulate or generate synthetic multimedia such as audio, video, and images, making it increasingly difficult to tell genuine, altered, and fake content apart. McAfee Corp., a global leader in online protection, recently launched an AI-powered deepfake audio detection technology under Project “Mockingbird”, intending to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation. McAfee announced the technology at the Consumer Electronics Show (CES) 2024.
What is voice cloning?
Voice cloning uses deepfake technology to generate synthetic audio that closely resembles a real person's voice but is, in fact, entirely fabricated.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for all kinds of things, from smart tech to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning for cyber fraud. AI can manipulate an individual's lips so it looks like they are saying something different; it can enable identity fraud by impersonating someone during remote verification with a bank; and it makes traditional hacking more convenient. This misuse of advanced technologies has increased both the speed and the volume of cyber attacks in recent times.
Technical Analysis
To combat fraudulent audio-cloning activities, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere.
- Context-Based Recognition: The model assesses audio components within the overall setting in which they appear. Evaluating this surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human behaviour. Analysing speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human speech, comparing them against an extensive library of genuine human speech features to differentiate real voices from AI-synthesised ones (a minimal illustrative sketch of this idea follows this list).
- Accuracy Outcomes: McAfee Labs' deepfake voice recognition solution, which reportedly boasts a ninety per cent success rate, combines the behavioural, context-based, and classification approaches above. By examining audio in its larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system can identify discrepancies that may signal AI-produced audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technology protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
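The sketch below is a toy illustration of the classification-model idea described in the list above; it is not McAfee's Project Mockingbird, whose internals are proprietary. It summarises audio clips as MFCC statistics with librosa and trains an off-the-shelf scikit-learn classifier on a hypothetical labelled set of real and cloned clips (all file names are placeholders).

```python
# Illustrative sketch only: a toy real-vs-synthetic voice classifier in the
# spirit of the "classification models" idea above. NOT McAfee's actual
# system; file names and labels are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path):
    """Summarise a clip as the mean/std of its MFCCs: a crude speech fingerprint."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: paths to clips with known labels.
real_clips = ["real_01.wav", "real_02.wav"]     # genuine human speech
fake_clips = ["clone_01.wav", "clone_02.wav"]   # AI-synthesised speech

X = np.array([mfcc_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([mfcc_features("suspect.wav")]))  # 1 => likely synthetic
```

Production systems reportedly layer contextual and behavioural signals on top of such feature-based classification, which is why a single-feature toy like this should be read as a teaching aid rather than a working detector.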
Conclusion
It is important to foster the ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries of AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to safeguard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724

Introduction
We inhabit an era where digital connectivity, while empowering, has unleashed a relentless tide of cyber vulnerabilities in which personal privacy is constantly threatened; crimes like sextortion exemplify the sinister side of our hyperconnected world. Social media platforms, instant messaging apps, and digital content-sharing tools have all grown rapidly, changing how people communicate and blurring the line between the private and public domains. The price of this unparalleled convenience is the rise of sophisticated cybercrimes that exploit the very tools meant to connect us. Sextortion, a portmanteau of “sex” and “extortion”, stands out as a particularly pernicious kind of internet exploitation: under the threat of disclosure of their private information, photos, or videos, people are coerced into sexual behaviours or into providing intimate content. Sextortion's psychological component makes it especially harmful; it feeds on social stigma, shame, and fear, which discourage victims from reporting the crime and perpetuate the cycle of victimisation and silence. This cybercrime targets vulnerable people across socioeconomic backgrounds and is not limited by age, gender, or location.
The Economy of Shame: Sextortion as a Cybercrime Industry
A news report from June 03, 2025, reveals a sextortion racket busted in Delhi, where Crime Branch teams identified a money trail of over Rs. 5 crore. Investigators found a sophisticated cybercrime chain running from synthetic financial identities to sextortion and other cyber frauds. To believe this is an aberration is to overlook the reality that it is symptomatic of a much wider and largely uncharted criminal framework. According to the FBI's 2024 IC3 report, “extortion (including sextortion)” has skyrocketed to 86,415 complaints, with losses of $143 million reported in the United States (US) alone, indicating that coercive image-based threats have become an everyday occurrence. Sextortion is no longer an isolated cybercrime; it has metamorphosed into a systematic, industrialised criminal enterprise. In another news report dated 19th July, 2025, Delhi Police detained four people suspected of participating in a sextortion scheme targeting a resident of the Bhagwanpur Khera neighbourhood of Shahdara; according to the complaint, the victim was manipulated after falling prey to a dating site.
The threat is amplified by the usage of deepfake technology, which allows offenders to create obscene content that looks believable. The approach, which relies on the stigma attached to sexual imagery in conservative societies like India, is that victims frequently give in to requests out of fear of damaging their reputations. The combination of cybercrime and cutting-edge technology highlights the lopsided power that criminals possess, leaving victims defenceless and law enforcement unable to keep up.
Legal Remedies and the Evolving Battle Against Sextortion
Given the complexity of these crimes, India has recognised sextortion and similar cyber-enabled financial crimes under a number of legal frameworks. The introduction of specific provisions like Section 111 of the Bharatiya Nyaya Sanhita (BNS), 2023, which classifies organised cybercrimes, including extortion and frauds falling under its expansive interpretation, as a serious offence, signals a shift towards treating cyber-enabled sexual exploitation as an organised criminal business. Similarly, Section 318(2) criminalises cheating with a maximum sentence of three years in prison or a fine, whereas Section 336(2) makes digital forgery a crime with a maximum sentence of two years in prison or a fine. In addition, cheating by personation through computer resources is punishable under Section 66D of the Information Technology Act, 2000, which carries a maximum sentence of three years in prison and a maximum fine of Rs. 1 lakh. Due to issues with attribution, cross-border jurisdiction, and the discreet nature of digital evidence, enforcement remains inconsistent despite these statutory provisions.
The government and its agencies recognise that laws achieve real impact only when backed by awareness initiatives and accessible, localised mechanisms for redressal. Several Indian states and the Department of Telecommunications have launched campaigns to educate the public and safeguard mobile communication assets against identity theft, financial fraud, and cyberscams. Initiatives like the Cyber Saathi Initiative and Cyber Dost by the MHA aim to improve forensic capabilities and victim reporting.
Conclusion
At CyberPeace, we understand that the best defence against online abuse is prevention. Our goal is to provide people with the information and resources to identify, avoid, and report sextortion attempts, through channels such as the CyberPeace Helpline, and to organise awareness campaigns on safe digital habits. To keep pace with this constantly looming danger, our research and policy advocacy also focus on developing more robust legal and technological safeguards.
To every reader: think before you share, secure your accounts, and never let shame silence you. If you or someone you know becomes a victim, report it immediately; help is available, and justice is possible. Together we can reclaim the internet as a space of trust, not terror.
References
- https://www.hindustantimes.com/india-news/delhi-police-busts-sextortion-cyberfraud-rackets-6-held-101748959601825.html
- https://timesofindia.indiatimes.com/city/delhi/delhi-police-arrests-four-for-sextortion-and-blackmail-in-shahdara/articleshow/122767656.cms
- https://cdn.ncw.gov.in/wp-content/uploads/2025/05/CyberSaheli.pdf

Introduction
The constantly changing technological world has brought an age of unprecedented problems, and the misuse of deepfake technology has become a cause for concern that the Indian judiciary has also discussed. The Supreme Court has expressed concerns about the consequences of this quickly developing technology, citing issues ranging from security hazards and privacy violations to the spread of disinformation. Misuse of deepfake technology is particularly dangerous because deepfakes are almost identical to the real thing and may fool even the sharpest eye.
SC Judge Expresses Concerns: A Complex Issue
During a recent speech, Supreme Court Justice Hima Kohli emphasized the various issues that deepfakes present. She conveyed grave concerns about the possibility of invasions of privacy, the dissemination of false information, and the emergence of security threats. The ability of deepfakes to be created so convincingly that they seem to come from reliable sources is especially concerning as it increases the potential harm that may be done by misleading information.
Gender-Based Harassment Intensified
Justice Kohli noted a concerning chance that gender-based harassment will become more severe in this internet era. She pointed out that internet platforms may develop into epicentres for the rapid spread of false information by anonymous offenders who act freely and with worrying impunity. The invisibility of virtual harassment may make it difficult to lessen the negative effects of toxic online postings. In response, it is advocated that a comprehensive policy framework be developed that modifies current legal frameworks, such as laws prohibiting online sexual harassment, to adequately handle the issues brought on by technological breakthroughs.
Judicial Stance on Regulating Deepfake Content
In a separate move, the Delhi High Court voiced concerns about the misuse of deepfakes and exercised judicial intervention to limit the use of artificial intelligence (AI)-generated deepfake content. A division bench highlighted the intricacy of the matter and proposed that the government, with its wider outlook, could be better qualified to handle the situation and arrive at a fair resolution. This position reflects the court's acknowledgement of the technology's global and borderless character and the consequent need for an all-encompassing strategy.
PIL on Deepfake
In light of these worries, a Delhi advocate has taken it upon himself to address the unchecked use of AI, with a particular emphasis on deepfake material. His Public Interest Litigation (PIL), filed at the Delhi High Court, emphasises the necessity of strict limits on AI, or an outright prohibition if regulatory measures are not taken. At the centre of this case is the need to discern between real and fake information: the advocate suggests using distinguishable indicators, such as watermarks, to identify AI-generated work, reiterating the demand for openness and responsibility in the digital sphere.
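As a rough illustration of the kind of distinguishable indicator the petition describes, the sketch below stamps a visible provenance label onto an image using the Pillow imaging library. The file names and label text are hypothetical placeholders, and real-world provenance schemes (for example, robust or cryptographic watermarks embedded at generation time) are considerably harder to remove than a visible overlay.

```python
# Minimal sketch: stamp a visible "AI-GENERATED" label onto an image as a
# simple distinguishable indicator. Illustrative only; file names and the
# label text are hypothetical placeholders.
from PIL import Image, ImageDraw

def label_ai_image(src, dst, text="AI-GENERATED"):
    img = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Translucent backdrop in the bottom-left corner; width is a rough
    # estimate based on the default bitmap font.
    draw.rectangle([(10, img.height - 40), (10 + 8 * len(text), img.height - 10)],
                   fill=(0, 0, 0, 160))
    draw.text((16, img.height - 34), text, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(dst)

label_ai_image("generated.png", "generated_labelled.jpg")
```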
The Way Ahead:
Finding a Balance
- The authorities must strike a careful balance between protecting privacy, promoting innovation, and safeguarding individual rights as they negotiate the complex world of deepfakes. The Delhi High Court's cautious stance and Justice Kohli's concerns highlight the necessity for a nuanced response that takes into account the complexity of deepfake technology.
- Because of the increased sophistication with which information may be manipulated in this digital era, the court plays a critical role in preserving the integrity of the truth and shielding people from the possible dangers of misleading technology. These legal actions will surely influence how the Indian judiciary and legislature respond to deepfakes and will establish guidelines for the regulation of AI in the nation. The legal environment needs to change as technology does, so that innovation and accountability can coexist.
Collaborative Frameworks:
- The misuse of deepfake technology poses an international problem that cuts across national boundaries. International collaborative frameworks could make it easier to share technical innovations, legal insights, and best practices, and starting a worldwide conversation on deepfake regulation may help ensure a coordinated response to this digital threat.
Legislative Flexibility:
- Given the speed at which technology is advancing, the legislative system must continue to adapt. It will be necessary to introduce new legislation expressly addressing developing technology and to regularly evaluate and update current laws. This ensures that the judicial system can respond to the changing difficulties brought forth by the misuse of deepfakes.
AI Development Ethics:
- Promoting ethical behaviour in AI development is crucial. Tech businesses should abide by ethical standards that place a premium on user privacy, responsibility, and openness. As a preventive strategy, ethical AI practices can lessen the possibility that AI technology will be misused for malevolent purposes.
Government-Industry Cooperation:
- It is essential that the public and commercial sectors work closely together. Governments and IT corporations should collaborate to develop and implement legislation. A thorough and equitable approach to the regulation of deepfakes may be ensured by establishing regulatory organizations with representation from both sectors.
Conclusion
A comprehensive strategy integrating technical, legal, and social interventions is necessary to navigate the path ahead. Governments, IT corporations, the courts, and the general public must all actively participate in the collective effort to combat the misuse of deepfakes, which goes beyond legal measures alone. By encouraging a shared commitment to tackling the issues raised by deepfakes, we can create a future where the digital ecosystem is both safe and inventive. The Government is on its way to bringing in dedicated legislation to tackle deepfakes, following its recently issued advisory on misinformation and deepfakes.