#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni, popular on social media and appearing to promote an IPL betting Telegram channel, has been found to be fake. The audio in the morphed video was produced through AI voice cloning, as identified by AI detection and deepfake analysis tools. In the original footage, Mr. Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
A Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is being promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies of the kind typically seen in AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analyzed the video using a deepfake detection tool by True Media, which found the voice in the video to be 100% AI-generated.

We then extracted the audio and checked it with an audio deepfake detection tool named Hive Moderation, which found the audio to be 99.9% AI-generated.
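For readers who want to replicate this step, the audio track can be pulled out of a downloaded copy of a video with a short script. Below is a minimal sketch, assuming Python 3 and the ffmpeg command-line tool are installed; the file names are illustrative, not the actual files used in this fact check.

```python
import subprocess

def extract_audio(video_path: str, wav_path: str) -> None:
    """Extract the audio track as mono 16 kHz WAV, a common input for speech analysis."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,        # input video
            "-vn",                   # drop the video stream
            "-acodec", "pcm_s16le",  # uncompressed 16-bit PCM
            "-ac", "1",              # mono
            "-ar", "16000",          # 16 kHz sample rate
            wav_path,
        ],
        check=True,
    )

# Illustrative file names
extract_audio("viral_clip.mp4", "viral_clip.wav")
```

The resulting WAV file can then be uploaded to an audio deepfake detector such as Hive Moderation.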

We then divided the video into keyframes, reverse-searched one of them, and found the original video uploaded by the YouTube channel named LIV Crime.
On analysis, we found that the video had been edited at the 3:18 mark and altered with an AI-generated voice.
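The keyframe step can be reproduced in a similar way. The sketch below, assuming Python 3 with the opencv-python package installed, saves one frame per second of video; any of the saved stills can then be run through a reverse image search to locate the original footage. The one-second interval and file names are illustrative choices.

```python
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame per interval as JPEG; returns the saved file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0    # fall back if FPS is unreported
    step = max(1, int(fps * every_n_seconds))  # frames to skip between saves
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                             # end of video
            break
        if index % step == 0:
            name = f"frame_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

frames = extract_keyframes("viral_clip.mp4")
print(f"Saved {len(frames)} frames for reverse image search")
```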

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video showing actor Anup Soni promoting an IPL betting Telegram channel is fake. The video was manipulated using AI voice cloning technology, as confirmed by both the Hive Moderation AI detector and the True Media deepfake detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Introduction
As technological developments enable our phones to take on a greater role in our lives, these devices, along with the applications they host, also become exposed to greater risks. Recently, Zimperium, a tech company that provides security services protecting mobiles and applications from threats like malware and phishing, announced that it had identified a malware campaign targeted at stealing information from customers of Indian banks. The Indian Express reports that data from over 25 million devices has been exfiltrated, making the threat increasingly dangerous just going by the number of devices it has affected so far.
Understanding the Threat: The Case of FatBoyPanel
Malware is malicious software: a file or program intentionally harmful to a network, server, computer, or other device. It comes in various types; in the present case, it is a Trojan horse, i.e., a file or program designed to trick the victim into believing it is a legitimate piece of software. Once installed and activated, Trojans can execute malicious functions on a device.
FatBoyPanel, as it is called, is a malware management system behind a massive cyberattack targeting Indian mobile users and their bank details. The modus operandi relied on social engineering: attackers posed as bank officials, called their targets, and warned that their accounts would be suspended immediately unless their bank details were updated. When the panicked victims asked for instructions, they were told to download a banking application from a link sent as an Android Package Kit (APK) file (which requires enabling “Install from Unknown Sources”) and install it. Other attackers used variations of this script, all designed to trick the target into downloading the file. The apps sent through these links are fake, and once installed, they immediately ask for critical permissions such as access to contacts, device storage, overlay permissions (to show fake login pages over real apps), and access to SMS messages (to steal OTPs and banking alerts). This allows them to capture text messages (especially bank-related OTPs), read stored files, monitor app usage, and more. The stolen data is sent to the FatBoyPanel backend, where hackers can view it in real time on a dashboard and then download and sell it. FatBoyPanel is a command-and-control (C&C) server that acts as a centralised control room.
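Much of this attack chain depends on the victim sideloading an unvetted APK file. As an illustration of how such a file could be triaged before installation, here is a minimal Python sketch using the open-source androguard library (import path as in androguard 3.x) to list the permissions an APK declares. The file name and the set of flagged permissions are illustrative assumptions, and this is a rough aid rather than a substitute for professional analysis.

```python
from androguard.core.bytecodes.apk import APK  # androguard 3.x import path

# Permissions commonly abused by banking trojans like the one described above
SUSPICIOUS = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.SYSTEM_ALERT_WINDOW",  # overlays fake login pages
    "android.permission.READ_CONTACTS",
}

apk = APK("suspicious_banking_app.apk")  # illustrative file name
declared = set(apk.get_permissions())

print(f"Package: {apk.get_package()}")
for perm in sorted(declared):
    flag = "  <-- suspicious" if perm in SUSPICIOUS else ""
    print(f"  {perm}{flag}")
```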
Protecting Yourself: Essential Precautions in the Digital Realm
Although there are various other types of malware, the way one must deal with them remains largely the same. The following are a few practices one can adopt to stay safe:
- Be cautious with app downloads: Only download apps from official app stores (Google Play Store, Apple App Store). Even then, check the developer's reputation, app permissions, and user reviews before installing.
- Keep your operating system and apps updated: Updates often include security patches that protect against known vulnerabilities.
- Be wary of suspicious links and attachments: Avoid clicking on links or opening attachments in unsolicited emails, SMS messages, or social media posts. Verify the sender's authenticity before interacting.
- Enable multi-factor authentication (MFA) wherever possible: While malware like FatBoyPanel can sometimes bypass OTP-based MFA, it still adds an extra layer of security against many other threats.
- Use strong and unique passwords: Employ a combination of uppercase and lowercase letters, numbers, and symbols for all your online accounts. Avoid reusing passwords across different platforms.
- Install and maintain a reputable mobile security app: These apps (e.g., Bitdefender) can help detect and remove malware, as well as warn you about malicious websites and links.
- Regularly review app permissions and give access judiciously: Check what permissions your installed apps have and revoke any that seem unnecessary or excessive (see the audit sketch after this list).
- Educate yourself and stay informed: Keep up-to-date with the latest cybersecurity threats and best practices.
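As referenced in the permission-review item above, the following rough sketch flags user-installed apps holding permissions commonly abused by banking trojans. It assumes a workstation with adb installed, a phone with USB debugging enabled, and Python 3.9 or newer; the set of flagged permissions is an illustrative choice, and this is a triage aid, not a replacement for a mobile security app.

```python
import subprocess

# Permissions worth a second look on any third-party app (illustrative set)
DANGEROUS = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.SYSTEM_ALERT_WINDOW",
    "android.permission.READ_CONTACTS",
}

def third_party_packages() -> list[str]:
    """List user-installed packages via `adb shell pm list packages -3`."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-3"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.removeprefix("package:").strip() for line in out.splitlines() if line]

def granted_permissions(package: str) -> set[str]:
    """Parse granted permissions from `adb shell dumpsys package <pkg>` output."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        line.split(":")[0].strip()
        for line in out.splitlines()
        if "granted=true" in line and "permission" in line
    }

for pkg in third_party_packages():
    risky = granted_permissions(pkg) & DANGEROUS
    if risky:
        print(pkg, "->", sorted(risky))
```

Few legitimate third-party apps need SMS or overlay access, so any hit from a script like this deserves a closer look.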
Conclusion
The emergence of malware management systems shows just how sophisticated attackers have become over the years. Vigilance on the part of the general public is recommended, but so are greater efforts to raise awareness of such methods of crime, as people remain vulnerable in matters of cybersecurity. With sensitive information at stake, we must take steps to sensitise the public and better prepare them to deal with the evolving digital landscape.
References
- https://zimperium.com/blog/mobile-indian-cyber-heist-fatboypanel-and-his-massive-data-breach
- https://indianexpress.com/article/technology/tech-news-technology/fatboypanel-new-malware-targeting-indian-users-what-is-it-9965305/
- https://www.techtarget.com/searchsecurity/definition/malware

Introduction
The Ministry of Communications, Department of Telecommunications notified the Telecommunications (Telecom Cyber Security) Rules, 2024 on 22nd November 2024. The rules were notified to address the vulnerabilities posed by rapid technological advancements and the evolving nature of cyber threats, and to strengthen and enhance telecom cyber security. They empower the central government to seek traffic data and any other data (other than the content of messages) from service providers.
Background Context
The Telecommunications Act, 2023 was passed by Parliament in December 2023, received the President's assent, and was published in the official Gazette on December 24, 2023. The Act is divided into 11 chapters, 62 sections, and 3 schedules. It repeals the older legislation, viz. the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The government has enforced the Act in phases: Sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61, and 62 came into force on June 26, 2024, while Sections 6-8, 48, and 59(b) took effect from July 05, 2024.
These rules have been notified under the powers granted by Section 22(1) and Section 56(2)(v) of the Telecommunications Act, 2023.
Key Provisions of the Rules
These rules collectively aim to reinforce telecom cyber security and ensure the reliability of telecommunication networks and services. They are as follows:
● Power to Seek Traffic and Other Data:
The Central Government, or an agency authorized by it, may request traffic data or any other data from a telecommunication entity through the Central Government portal to safeguard and ensure telecom cyber security. In addition, the Central Govt. can direct telecommunication entities to establish the necessary infrastructure and equipment for the collection, processing, and storage of such data from designated points.
● Obligations Relating To Telecom Cybersecurity:
Telecom entities must adhere to various obligations to prevent cyber security risks. Telecommunication cyber security must not be endangered, and no one is allowed to send messages that could harm it. Misuse of telecommunication identifiers, equipment, networks, or services is prohibited. Telecommunication entities are also required to comply with directions and standards issued by the Central Govt. and to furnish detailed reports of the actions taken on the government portal.
● Compulsory Measures To Be Taken By Every Telecommunication Entity:
Telecom entities must adopt a telecom cyber security policy, notify the Central Govt. of it, and enhance cybersecurity accordingly. They must identify and mitigate the risks of security incidents, ensure timely responses, and take appropriate measures to address such incidents and minimize their impact. They must conduct periodic telecom cyber security audits to assess network resilience against potential threats, report security incidents promptly to the Central Govt., and establish facilities such as a Security Operations Centre.
● Reporting of Security Incidents:
- Telecommunication entities must report the detection of security incidents affecting their network or services within six hours.
- Detailed information about the incident must be submitted within 24 hours, including the number of affected users, the duration and geographical scope of the incident, its impact on services, and the remedial measures implemented.
The Central Govt. may also require the affected entity to provide further information, such as its cyber security policy, or to undergo a security audit.
CyberPeace Policy Analysis
The notified rules reflect critical updates from their draft version, including the obligation to report incidents immediately upon becoming aware of them. This enables robust cybersecurity oversight while preserving greater privacy for consumers. Importantly, individuals whose telecom identifiers are suspended or disconnected over security concerns must be given a copy of the order and a chance to appeal, ensuring procedural fairness. However, the notified rules have removed the definitions of "traffic data" and "message content", which may lead to operational ambiguities. While the rules establish a solid foundation for protecting telecom networks, they pose significant compliance challenges, particularly for smaller operators who may struggle with the costs of audits, infrastructure, and reporting requirements.
Conclusion
The Telecom Cyber Security Rules, 2024 represent a comprehensive approach to securing India’s communication networks against cyber threats. By mandating robust cybersecurity policies, rapid incident reporting, and procedural safeguards, the rules balance national security with privacy and fairness. However, addressing implementation challenges through stakeholder collaboration and detailed guidelines will be key to ensuring compliance without overburdening telecom operators. With adaptive execution, these rules can enhance the resilience of India’s telecom sector and position the country as a global leader in digital security standards.
References
● Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:767484b8-4d05-40b3-9c3d-30c5642c3bac
● CyberPeace First Read of the Telecommunications Act, 2023 https://www.cyberpeace.org/resources/blogs/the-government-enforces-key-sections-of-the-telecommunication-act-2023
● Telecommunications (Telecom Cyber Security) Rules, 2024

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit videos for sextortion, and there has been an alarming increase in the use of Artificial Intelligence to create such fake explicit images and videos.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law enforcement agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, to develop detection and prevention tools, and to strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
- Prevention and Awareness: Raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
- Early Detection and Reporting: Employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
- Legal Frameworks and Regulations: Strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
- Technological Solutions: Develop tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
- International Cooperation: Foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
- Support for Victims: Provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises several serious concerns:
- Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
- Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing the manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake content increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
- Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, increasing the humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to reputation and well-being.
- Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion because of their heavy use of social media for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
- Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly difficult to distinguish between real and manipulated videos or images.
- Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals, perpetuating a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.