#FactCheck - False Claim about Video of Sadhu Lying in Fire at Mahakumbh 2025
Executive Summary:
Recently, our team came across a video on social media that appears to show a saint lying in a fire during Mahakumbh 2025. The video has been widely viewed and carries captions claiming it depicts a ritual from the ongoing Mahakumbh 2025. After thorough research, we found these claims to be false. The video is unrelated to Mahakumbh 2025 and comes from a different time, context, and location: it is old footage being recirculated outside its original context.

Claim:
A video has gone viral on social media claiming to show a saint lying in fire during Mahakumbh 2025, falsely implying that the act is a standard part of the traditional rituals and sacred ceremonies held during the festival.

Fact Check:
Upon receiving the post, we conducted a reverse image search on key frames extracted from the video and traced it to an old article. Further research revealed that the original footage dates from 2009, when Ramababu Swamiji, then aged 80, lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals confirms that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such acts are not part of the Mahakumbh rituals. We cross-verified this information against reputable sources, effectively debunking the claim and underscoring the importance of verifying facts before believing them.


For further clarity, the YouTube video attached below addresses the claim and reinforces the need to verify such content before accepting it as true.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
‘Barbie’ fever is running high in India, and cybercriminals are exploiting the hype to launch online scams. According to a recent report by the security firm McAfee, India ranks third among the countries facing the most ‘Barbie’-related malware attacks. After the film’s theatrical release, scams spread across India through links offering free downloads of the movie and through other malicious lures. Scammers trick victims with offers of free ‘Barbie’ tickets and, capitalising on the film’s success, seed fake free-download links on websites that lead to malware.
What is the ‘Barbie’ malware?
After the release of the ‘Barbie’ movie, fans trying to keep up with the trend began searching for free movie-download links from anonymous sources, and the downloaded ZIP files contained malware. The scam includes fake dubbed downloads of the movie that install malware, Barbie-related viruses, fake videos promising free tickets, and unverified links that compromise users who click them. It is important not to chase such trends blindly, as doing so can land you in trouble.
Case: As per McAfee’s report, several malware campaigns trick victims into downloading the ‘Barbie’ movie in different languages. Clicking the link prompts the user to download a ZIP file packed with malware.
Country-wise malware distribution
Cyber scams witnessed a significant surge in just a few weeks, with hundreds of new malware incidents. The USA topped the list, accounting for 37% of ‘Barbie’ malware attacks, while Australia, the UK, and India each suffered 6%, and countries such as Japan, Ireland, and France each faced 3%.
What are the precautions?
Cyber scams are evolving everywhere, so users must remain vigilant and take the necessary precautions to protect their personal information. Avoid clicking on suspicious links, especially those related to unauthorised movie downloads or fake ticket offers. Use legitimate, official platforms to access movie-related content. Keeping anti-malware and antivirus software up to date adds an extra layer of protection.
Here are some following precautions against Malware:
- Use security software.
- Use strong passwords and authentication.
- Enforce safe browsing and email.
- Data backup.
- Implement Anti-lateral Movement.
Conclusion
Cyberspace is evolving, and scams are evolving with it. With the new wave of ‘Barbie’ scams on the rise everywhere, India ranks third among affected countries. In India, McAfee reported several malicious campaigns that attempted to trick victims into downloading a free dubbed version of the ‘Barbie’ movie, resulting in scams. People often chase trends in ways that land them in trouble. Users should beware of these kinds of cyber-attacks, which can result in huge losses, and use technology with proper precautions in light of the incidents happening around them.

Introduction:
This report examines ongoing phishing scams that target customers of the State Bank of India (SBI), India's biggest public-sector bank, using fake self-KYC APKs to trick people. A circulating image plays a part in the phishing scheme, pushing users to download bogus APK files by claiming they need to update or confirm their "Know Your Customer (KYC)" information.
Fake Claim:
A picture making the rounds on social media comes with an APK file. It shows a phishing message claiming that the user's SBI YONO account will stop working because of their "old PAN card," and instructs the user to install the "WBI" APK (Android Application Package) to verify documents and keep the account open. The message is fake and aims to get people to download a harmful app.
Key Characteristics of the Scam:
- Urgency and Fear Tactics: The messages "URGENTLY REQUIRED" and "Your account will be blocked today" show how scammers try to scare people into acting fast without thinking.
- PAN Card Reference: Crooks often use PAN card verification and KYC updates as a trick because these are normal for Indian bank customers.
- Risky APK Downloads: The message pushes people to get APK files, which can be dangerous. APKs from places other than the Google Play Store often have harmful software.
- Copying the Brand: The message looks a lot like SBI's real words and logos to seem legit.
- Shady Source: You can't find the APK they mention on Google Play or SBI's website, which means you should ignore the app right away.
Modus Operandi:
- Delivery Mechanism: Typically, users of messaging services like "WhatsApp," "SMS," or "email" receive identical messages with an APK link, which is how the scam is distributed.
- APK Installation: The phony APK frequently asks for extensive permissions once installed, including access to "SMS," "contacts," "calls," and "banking apps."
- Data Theft: Once installed, the program may have the ability to steal card numbers, personal information, OTPs, and banking credentials.
- Remote Access: These APKs may occasionally allow cybercriminals to remotely take control of the victim's device in order to carry out fraudulent financial activities.
When the user installs the application on their device, the following interface opens:




It asks the user to allow the following:
- SMS access, used to send and receive information from the bank.
- Entry of user details such as username, password, mobile number, and captcha.
Technical Findings of the Application:
Static Analysis:
- File Name: SBI SELF KYC_015850.apk
- Package Name: com.mark.dot.comsbione.krishn
- Scan Date: Sept. 25, 2024, 6:45 a.m.
- App Security Score: 52/100 (MEDIUM RISK)
- Grade: B
File Information:
- File Name: SBI SELF KYC_015850.apk
- Size: 2.88MB
- MD5: 55fdb5ff999656ddbfa0284d0707d9ef
- SHA1: 8821ee6475576beb86d271bc15882247f1e83630
- SHA256: 54bab6a7a0b111763c726e161aa8a6eb43d10b76bb1c19728ace50e5afa40448
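The file fingerprints above can be reproduced independently. As a minimal sketch, the following Python snippet computes the MD5, SHA1, and SHA256 digests of any suspicious file in one pass using the standard library; the file path in the comment is a placeholder for wherever the sample is stored.

```python
import hashlib

def file_hashes(path, chunk_size=8192):
    """Compute MD5, SHA1, and SHA256 digests of a file in one pass."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}

# Placeholder path: compare the output against the hashes reported above
# before trusting, sharing, or executing any APK sample.
# print(file_hashes("SBI SELF KYC_015850.apk"))
```

Matching all three digests against a published indicator of compromise is a quick way to confirm that a file in hand is the same sample analysed here.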
App Information:
- App Name: SBl Bank
- Package Name: com.mark.dot.comsbione.krishn
- Main Activity: com.mark.dot.comsbione.krishn.MainActivity
- Target SDK: 34
- Min SDK: 24
- Max SDK:
- Android Version Name: 1.0
- Android Version Code: 1
App Components:
- Activities: 8
- Services: 2
- Receivers: 2
- Providers: 1
- Exported Activities: 0
- Exported Services: 1
- Exported Receivers: 2
- Exported Providers: 0
Certificate Information:
- Binary is signed
- v1 signature: False
- v2 signature: True
- v3 signature: False
- v4 signature: False
- X.509 Subject: CN=PANDEY, OU=PANDEY, O=PANDEY, L=NK, ST=NK, C=91
- Signature Algorithm: rsassa_pkcs1v15
- Valid From: 20240904 07:38:35+00:00
- Valid To: 20490829 07:38:35+00:00
- Issuer: CN=PANDEY, OU=PANDEY, O=PANDEY, L=NK, ST=NK, C=91
- Serial Number: 0x1
- Hash Algorithm: sha256
- md5: 4536ca31b69fb68a34c6440072fca8b5
- sha1: 6f8825341186f39cfb864ba0044c034efb7cb8f4
- sha256: 6bc865a3f1371978e512fa4545850826bc29fa1d79cdedf69723b1e44bf3e23f
- sha512: 05254668e1c12a2455c3224ef49a585b599d00796fab91b6f94d0b85ab48ae4b14868dabf16aa609c3b6a4b7ac14c7c8f753111b4291c4f3efa49f4edf41123d
- PublicKey Algorithm: RSA
- Bit Size: 2048
- Fingerprint: a84f890d7dfbf1514fc69313bf99aa8a826bade3927236f447af63fbb18a8ea6
- Found 1 unique certificate
App Permissions

1. Normal Permissions:
- ACCESS_NETWORK_STATE: Allows the app to view the network status of all networks.
- FOREGROUND_SERVICE: Enables regular apps to use foreground services.
- FOREGROUND_SERVICE_DATA_SYNC: Allows data synchronization with foreground services.
- INTERNET: Grants full internet access.
2. Signature Permissions:
- BROADCAST_SMS: Sends SMS-received broadcasts. It can be abused by malicious apps to forge incoming SMS messages.
3. Dangerous Permissions:
- READ_PHONE_NUMBERS: Grants access to the device's phone number(s).
- READ_PHONE_STATE: Reads the phone's state and identity, including phone features and data.
- READ_SMS: Allows the app to read SMS or MMS messages stored on the device or SIM card. Malicious apps could use this to read confidential messages.
- RECEIVE_SMS: Enables the app to receive and process SMS messages. Malicious apps could monitor or delete messages without showing them to the user.
- SEND_SMS: Allows the app to send SMS messages. Malicious apps could send messages without the user's confirmation, potentially leading to financial costs.
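Requested permissions can be triaged programmatically once the APK's AndroidManifest.xml has been extracted (for example, by a decompiler). The sketch below is illustrative: the manifest fragment is a hypothetical sample mimicking the permissions listed above, and the dangerous-permission set is a hand-picked subset relevant to SMS-stealing malware, not Android's full list.

```python
import xml.etree.ElementTree as ET

# Android attributes are namespaced; android:name parses to this key prefix.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset of Android's "dangerous" permissions relevant here.
DANGEROUS = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.SEND_SMS",
    "android.permission.READ_PHONE_STATE",
    "android.permission.READ_PHONE_NUMBERS",
}

def flag_dangerous_permissions(manifest_xml: str) -> list:
    """Return the dangerous permissions requested in a manifest string."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(ANDROID_NS + "name")
        for elem in root.iter("uses-permission")
    }
    return sorted(requested & DANGEROUS)

# Hypothetical manifest fragment mimicking the permissions listed above.
sample = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
  <uses-permission android:name="android.permission.SEND_SMS"/>
</manifest>"""
print(flag_dangerous_permissions(sample))
# → ['android.permission.READ_SMS', 'android.permission.SEND_SMS']
```

An app that requests this cluster of SMS permissions alongside full internet access matches the credential- and OTP-theft pattern described in this report.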
Further analysis on the VirusTotal platform using the file's MD5 hash showed that 24 of 68 security vendors flagged this APK file as malicious; the accompanying graph represents the distribution of the malicious file in the wild.
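Such a lookup can also be done programmatically. The sketch below queries VirusTotal's public API v3 for a file report by hash (a personal API key is required, and the network call is not executed here); note that summing `last_analysis_stats` gives an approximate engine total, since it includes categories such as timeouts alongside clean and malicious verdicts.

```python
import json
import urllib.request

VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def fetch_vt_report(file_hash: str, api_key: str) -> dict:
    """Fetch a VirusTotal v3 file report by MD5/SHA1/SHA256 hash."""
    req = urllib.request.Request(
        VT_FILE_REPORT.format(file_hash),
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def detection_ratio(report: dict):
    """Return (malicious_count, approximate_total_engines) from a report."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"], sum(stats.values())

# Usage (requires a real API key; "YOUR_API_KEY" is a placeholder):
# report = fetch_vt_report("55fdb5ff999656ddbfa0284d0707d9ef", "YOUR_API_KEY")
# print(detection_ratio(report))
```

Detection counts change over time as engines update their signatures, so a ratio retrieved later may differ from the 24/68 recorded during this analysis.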


Key Takeaways:
- Normal permissions: Generally safe, granting access to basic functionality (network state, internet).
- Signature permissions: May pose risks when misused, especially those related to SMS broadcasts.
- Dangerous permissions: Provide access to sensitive data, such as phone numbers and device identity, which can be exploited by malicious apps.
- The dangerous permissions around reading, receiving, and sending SMS can lead to privacy breaches or financial consequences.
How to Identify the Scam:
- Official Statement: SBI never asks clients to download unauthorized APKs for KYC updates or other services. All official correspondence takes place via the SBI YONO app, which is available on official app stores.
- No Immediate Threats: Bank correspondence never employs menacing language or issues harsh deadlines, such as "your account will be blocked today."
- Email Domain and SMS Number: Verified email addresses or phone numbers are used for official SBI correspondence. Generic, unauthorized numbers or addresses are frequently used in scams.
- Links and APK Files: Steer clear of downloading APK files from unreliable sources at all times. For app downloads, visit the Apple App Store or Google Play Store instead.
CyberPeace Advisory:
- The Research team recommends that people should avoid opening such messages sent via social platforms. One must always think before clicking on such links, or downloading any attachments from unauthorised sources.
- Downloading any application from any third party sources instead of the official app store should be avoided. This will greatly reduce the risk of downloading a malicious app, as official app stores have strict guidelines for app developers and review each app before it gets published on the store.
- Even if you download the application from an authorised source, check the app's permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it's best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications and could lead users to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing messages.
- Never share or forward fake messages containing links on any social platform without proper verification.
Conclusion:
Fake APK phishing scams increasingly target financial institutions. This report outlines safety steps for SBI customers and ways to spot and avoid these cons. Keep in mind that legitimate banks never ask you to install an APK from an untrusted website or threaten to close your account immediately. To stay safe, use SBI's official YONO app, download apps only from trusted sources such as Google Play or the Apple App Store, verify information before acting on it, turn on 2FA for all your banking and financial accounts, and report any scams you see to SBI or your local cyber police.

Introduction
With the development of technology, voice-cloning schemes are one issue that has recently come to light. Scammers are moving forward with AI, and their methods and plans for deceiving and scamming people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice-cloning frauds, how scammers operate, and how one might protect oneself from them.
What is Deepfake?
“Deepfake” refers to artificial intelligence (AI) techniques that can produce fake or altered audio, video, and film that pass for the real thing; the name combines “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine-learning algorithms. Con artists employ the technology to portray someone saying or doing something that never occurred in audio or visual form; widely circulated voice deepfakes of the American President are a well-known example. Deep-voice impersonation technology can be used maliciously, such as in voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Artificial intelligence (AI) is utilised in deepfake voice frauds to create synthetic audio recordings that sound like real people. Con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The goal is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
To mislead users into providing private information, money, or system access, cybercriminals use AI voice-deepfake technology to impersonate people or entities. With it, they can create audio recordings that mimic real people or organisations, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: sending money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. AI voice-deepfake technology gives con artists a potent tool for tricking and controlling victims; every organisation and the general public must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deep fake technology has made it simpler for con artists to edit audio recordings and create phoney voices that exactly mimic real people. As a result, a brand-new scam called the “deep fake voice scam” has surfaced. In order to trick the victim into handing over money or private information, the con artist assumes another person’s identity and uses a fake voice. What are some ways to protect oneself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
- Steer clear of unsolicited calls: One of the most common tactics of deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: When anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that doesn't seem right could be a deepfake voice fraud.
- Verify the caller's identity: It's crucial to verify the caller's identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it's a warning sign of a scammer.
- Report any suspicious activity: If you think you've fallen victim to a deepfake voice fraud, inform the appropriate authorities, including your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, the field of AI voice-deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech-recognition systems or making voice assistants sound more natural, it can also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions to protect themselves as the technology develops and deepfake schemes become harder to detect and prevent. Ongoing research is needed to develop efficient techniques for identifying and controlling the risks this technology poses. We must deploy AI appropriately and ethically to ensure that voice-deepfake technology benefits society rather than harming or deceiving it.