#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni circulating on social media, in which he appears to promote an IPL betting Telegram channel, has been found to be fake. The audio in the morphed video was produced through AI voice cloning, as identified by AI detection and deepfake analysis tools. In the original footage, Mr. Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. Anup Soni is therefore in no way associated with the betting channel.

Claims:
The Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies typical of AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analyzed the video with a deepfake detection tool by True Media, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and checked it with an audio deepfake detection tool named Hive Moderation, which found the audio to be 99.9% AI-generated.

We then divided the video into keyframes and reverse-searched one of them, finding the original video uploaded by the YouTube channel LIV Crime.
Upon analysis, we found that at the 3:18 timestamp the video had been edited and overlaid with an AI-cloned voice.
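The reverse-search step above relies on matching compact image fingerprints rather than raw pixels. Below is a toy average-hash (aHash) comparison in Python illustrating that idea; the 4x4 grayscale "frames" are hypothetical values, and real services such as Google Lens use far more robust features.

```python
# Toy average-hash (aHash) matching, the core idea behind reverse image
# search: reduce each image to a short bit fingerprint, then compare
# fingerprints by Hamming distance. Values here are hypothetical 4x4
# grayscale grids, not real video frames.

def average_hash(pixels):
    """Map a flat list of grayscale values to a bit-string fingerprint."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def hamming(h1, h2):
    """Count differing bits between two equal-length fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

keyframe  = [200, 190, 30, 40, 210, 185, 25, 35, 50, 60, 220, 230, 45, 55, 215, 225]
original  = [198, 192, 28, 42, 208, 183, 27, 33, 52, 58, 218, 232, 47, 53, 213, 227]
unrelated = [10, 240, 15, 235, 20, 230, 25, 225, 30, 220, 35, 215, 40, 210, 45, 205]

kf = average_hash(keyframe)
# A near-duplicate frame hashes identically; an unrelated frame does not.
assert hamming(kf, average_hash(original)) < hamming(kf, average_hash(unrelated))
```

Production systems use perceptual hashes or learned embeddings that tolerate re-encoding, cropping, and overlays, but the match-by-fingerprint principle is the same.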

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promoted an IPL betting Telegram channel is false. The video was manipulated using AI voice cloning technology, as confirmed by both the Hive Moderation AI detector and the True Media detection tool. The claim is therefore baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar was promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Executive Summary
A video of senior Congress leader Shashi Tharoor is circulating widely on social media, allegedly showing him praising Pakistan’s diplomatic stance over the ICC T20 World Cup issue. Many users are sharing the clip believing it to be genuine. However, research by CyberPeace found the claim to be false: the viral video of Tharoor is a deepfake, and the Congress leader himself has described it as fabricated and fake.
Claim
A Facebook page named “Vok Sports” shared the video on February 11, 2026, claiming that Tharoor praised Pakistan. In the viral clip, he is purportedly heard saying in English that Pakistan’s diplomatic handling of the matter was “brilliant” and that it had outmanoeuvred the Indian cricket board, adding that good diplomacy could make a weak nation appear powerful.
The video was widely shared by social media users as authentic. (Archive links and post details provided.)
Fact Check
To verify the claim, we first scanned Tharoor’s official X (formerly Twitter) handle. We found a post dated February 12 in which he responded to a Pakistani journalist who had shared the video. Tharoor stated that the clip was AI-generated “fake news,” adding that neither the language nor the voice in the video was his.

A reverse image search using Google Lens led the team to a video uploaded on February 10, 2026, by India Today on its official YouTube channel. The visuals in this original video exactly matched those in the viral clip showing Tharoor speaking to the media. In the original footage, however, Tharoor was speaking in Hindi about the controversy surrounding the T20 World Cup. He said that politics should not be mixed with cricket or sports, that politicians should conduct politics, diplomats should handle diplomacy, and cricket players should focus on the game, and he expressed hope that cricket would move forward with the match. He did not praise Pakistan or the Pakistan Cricket Board at any point, indicating that the audio in the viral clip had been manipulated and replaced.
- https://www.youtube.com/watch?v=GkA1mLlAT8Q&t=3s

To further verify the authenticity of the video, several AI detection tools were used. Analysis through Aurigin.ai suggested a 78 percent probability that the audio in the viral clip was AI-generated.

Conclusion
CyberPeace confirmed that the viral video is a deepfake. Tharoor did not praise Pakistan’s diplomatic stance during the T20 World Cup controversy, and the circulating clip has been digitally manipulated.

Introduction
As technology develops, voice cloning scams are one issue that has recently come to light. Scammers have moved forward with AI, and their methods and plans for deceiving people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate a person for illegal purposes. This article looks at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is a Deepfake?
A “deepfake” is audio, video, or film altered or generated by artificial intelligence (AI) that passes for the real thing. The name combines “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never did; widely circulated deepfakes of the American President are a well-known example. Deep voice impersonation can be used maliciously, such as in voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a cloned voice, aiming to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead victims into providing private information, money, or system access. With it, they can create audio recordings that mimic real people such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be fabricated to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and both organisations and the general public must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam called the “deepfake voice scam” has surfaced: the con artist assumes another person’s identity and uses a cloned voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of unsolicited calls: One of the most common tactics of deepfake voice con artists, who often pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: Pay special attention to the voice of anyone who phones you claiming to be someone you know. Are there peculiar pauses or inflections in their speech? Anything that doesn’t seem right can indicate a deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, verify who is calling. When in doubt, ask for their name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information this way; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities. This may include your bank, credit card company, local police station, or the nearest cyber cell. Reporting the fraud could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used for harm, such as voice fraud and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

Overview:
The National Payments Corporation of India (NPCI) officially revealed on 31 July 2024 that its client C-Edge Technologies had been hit by a ransomware attack. As a precaution, C-Edge was isolated from the retail payment systems to eliminate further threats to the national payment infrastructure. More than 200 cooperative and regional rural banks were affected, disrupting services including ATM withdrawals and UPI transactions.
About C-Edge Technologies:
C-Edge Technologies was founded in 2010 to meet the specific requirements of the Indian banking sector, with particular focus on cooperative and regional rural banks. The company offers a range of services:
- Core Banking Solutions: the operational core of a bank, where customer records are managed and transactions are accounted for.
- Payment Solutions: implementation of payment gateways and mobile banking facilities.
- Cybersecurity: threat detection and incident response to protect banking organisations.
- Data Analytics and AI: analysis of large volumes of banking data to reduce risk and detect fraud.
Details of Ransomware attack:
Reports attribute the ransomware attack to the RansomEXX group, which gained unauthorized access through a misconfigured Jenkins server at Brontoo Technology Solutions, a key collaborator of C-Edge.
The RansomEXX group, also known as Defray777 or Ransom X, used a sophisticated variant known as RansomEXX v2.0 to execute the attack. The group typically targets large organizations and demands substantial ransoms, using malware tools such as IcedID, Vatet Loader, and PyXie RAT. It usually infiltrates systems through phishing emails and by exploiting vulnerabilities in applications and services, including Remote Desktop Protocol (RDP). The ransomware encrypts files using the Advanced Encryption Standard (AES), with the encryption key itself secured using RSA encryption; this dual-layer encryption complicates recovery efforts for victims. RansomEXX operates on a ransomware-as-a-service model, allowing affiliates to conduct attacks using its infrastructure. In 2021, the group attacked StarHub’s and Gigabyte’s servers for ransom.
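The dual-layer scheme described above follows the standard hybrid-encryption pattern: a fresh symmetric key encrypts each file quickly, and the attacker’s RSA public key wraps that symmetric key so only the attacker can unwrap it. The sketch below illustrates the structure only; it deliberately substitutes a SHA-256 keystream for AES and tiny textbook RSA for real RSA with OAEP padding, and all names and values are hypothetical.

```python
# Illustration of the dual-layer (hybrid) encryption pattern used by
# ransomware such as RansomEXX. Stand-ins: a SHA-256 XOR keystream replaces
# AES, and tiny textbook RSA replaces real RSA -- insecure by design,
# shown for structure only.
import hashlib
import secrets

def stream_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (stand-in for AES-CTR)."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

# Toy textbook RSA key pair (real attacks use 2048-bit keys with OAEP).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # attacker-held private exponent

file_bytes = b"hypothetical file contents"
session_key = secrets.token_bytes(16)    # fresh symmetric key per file

# Layer 1: bulk-encrypt the file with the symmetric key.
ciphertext = stream_encrypt(file_bytes, session_key)
# Layer 2: wrap the symmetric key with the RSA public key (byte by byte, toy).
wrapped_key = [pow(b, e, n) for b in session_key]

# Recovery needs the private key first, then the symmetric layer.
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
plaintext = stream_encrypt(ciphertext, recovered_key)
assert plaintext == file_bytes
```

Because the session key exists on disk only in wrapped form, victims cannot decrypt the bulk layer without the attacker’s private key, which is what makes well-implemented hybrid schemes so hard to recover from.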
Impact due to the attack:
The immediate consequences of the ransomware attack include:
- Service Disruption: Consumers who rely on the affected banks for day-to-day banking, such as withdrawals and online transactions, faced outages. Some complaints relate to cases where the sender’s account was debited without the corresponding credit reaching the receiver’s account.
- Isolation Measures: NPCI disconnected C-Edge from its networks to contain the spread of the ransomware. This precautionary decision was made to safeguard all functional aspects of the larger financial system.
Operations resumed:
The National Payments Corporation of India (NPCI) said it has restored connectivity with C-Edge Technologies Ltd after an external forensic audit of the security concerns that had prompted the disconnection. The audit affirmed that the impact was contained to C-Edge’s data centre and that the infrastructure of the cooperative and regional rural banks involved was unaffected. Both NPCI and C-Edge Technologies have since resumed normal operations, keeping the banking and financial services offered by these banks safe and secure.
Major Implications for Banking Sector:
The attack on C-Edge Technologies raises several critical concerns for the Indian banking sector:
- Cybersecurity Vulnerabilities: The attack exposes weak links in the technology systems that serve smaller banks. Even though C-Edge itself offers cybersecurity solutions, the incident shows that security needs to improve across all types of banks and banking applications.
- Financial Inclusion Risks: Cooperative and regional rural banks play an important role in financial inclusion, especially in rural and semi-urban areas. Prolonged interruptions to their services risk undermining the progress made in bringing excluded groups into the formal financial system.
- Regulatory Scrutiny: After this event, regulators such as the Reserve Bank of India (RBI) may intensify examination of the banking sector’s cybersecurity mechanisms, and directives may require institutions to meet higher compliance standards for defence against cyber threats.
Way Forward: Mitigation
- Strengthening Cybersecurity: Enhancing cybersecurity is essential to prevent such attacks in the future. This may include better threat detection systems, penetration testing to find vulnerabilities, system hardening, and regular network monitoring.
- Transition to Cloud-Based Solutions: Adopting cloud solutions can improve operational efficiency and optimise resource utilisation, provided cloud security features are properly implemented to protect smaller banks against cyber threats.
- Leveraging AI and Data Analytics: AI-based solutions for fraud detection and risk control give banking organisations the chance to address threats early and regain customers’ trust.
Conclusion:
The ransomware attack on C-Edge Technologies serves as a warning for the entire banking infrastructure. Early containment and quarantining proved effective in this case. Continuous monitoring of the infrastructure’s cybersecurity posture and awareness among employees help avoid such attacks. Building up cybersecurity capabilities will also safeguard institutions against future cyber risks and fortify the confidence and reliability of the financial system, especially the regional rural banks.
Reference:
- https://www.businesstoday.in/technology/news/story/c-edge-technologies-a-deep-dive-into-the-indian-fintech-powerhouse-hit-by-major-cyberattack-439657-2024-08-01
- https://www.thehindu.com/sci-tech/technology/customers-at-several-small-sized-banks-affected-as-tech-provider-c-edge-suffers-ransomware-attack/article68470198.ece
- https://www.cnbctv18.com/technology/ransomware-attack-disrupts-over-200-co-operative-banks-regional-rural-banks-19452521.htm
- https://timesofindia.indiatimes.com/city/ahmedabad/ransomware-breach-at-c-edge-impacts-transactions-for-cooperative-banks/articleshow/112180914.cms
- https://www.emsisoft.com/en/blog/41027/ransomware-profile-ransomexx/