#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A video of actor Anup Soni circulating on social media, in which he appears to promote an IPL betting Telegram channel, has been found to be fake. The audio in the manipulated clip was produced through AI voice cloning, and the manipulation was identified using AI detection and deepfake analysis tools. In the original footage, Mr. Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
A Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is being promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found discrepancies typical of AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we ran the clip through True Media's deepfake detection tool, which found the voice in the video to be 100% AI-generated.

We then extracted the audio and ran it through the audio deepfake detection tool Hive Moderation, which found the audio to be 99.9% AI-generated.
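For readers who wish to replicate this step, the sketch below shows one common way to pull the audio track out of a downloaded clip with ffmpeg so it can be submitted to a detector such as Hive Moderation. The file names are hypothetical, and this is an illustrative workflow rather than the team's exact tooling.

```python
# Minimal sketch: extract the audio track from a suspect video with ffmpeg,
# so the clip can be submitted to an audio deepfake detector.
# Assumes ffmpeg is installed and on PATH; file names are illustrative.
import subprocess
from pathlib import Path

def extract_audio(video_path: str, audio_path: str = "suspect_audio.wav") -> Path:
    """Write the video's audio as a 16 kHz mono WAV (a common detector input)."""
    subprocess.run(
        [
            "ffmpeg", "-y",       # overwrite output if it exists
            "-i", video_path,     # input video
            "-vn",                # drop the video stream
            "-ac", "1",           # mono
            "-ar", "16000",       # 16 kHz sample rate
            audio_path,
        ],
        check=True,
    )
    return Path(audio_path)

if __name__ == "__main__":
    extract_audio("viral_clip.mp4")
```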

We then split the video into keyframes and reverse-searched one of them, which led us to the original video uploaded by the YouTube channel LIV Crime.
On comparison, we found that the footage around the 3:18 mark had been edited and overlaid with an AI-cloned voice.
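The keyframe step can be reproduced with a few lines of OpenCV, as in the illustrative sketch below; the input file name is hypothetical, and each saved frame can then be reverse-image-searched manually using a service of your choice.

```python
# Minimal sketch: sample frames from a suspect video so individual
# keyframes can be reverse-image-searched manually.
# Requires opencv-python; the input file name is illustrative.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"keyframe_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"), "keyframes written")
```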

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni is promoting an IPL betting Telegram channel is false. The video has been manipulated using AI voice-cloning technology, as confirmed by both the Hive Moderation AI detector and the True Media deepfake detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Related Blogs
Introduction
As the 2024 Diwali festive season approaches, netizens eagerly embrace the spirit of celebration with online shopping, gifting, and searching for the best festive deals on online platforms. Historical web data from India shows that netizens' online activity spikes at this time as people shop online to upgrade their homes, buy unique presents for loved ones and look for services and products to make their celebrations more joyful.
However, with the increase in online transactions and digital interactions, cybercriminals take advantage of the festive rush by enticing users with fake schemes, fake coupons offering freebies, fake offers of discounted jewellery, counterfeit product sales, festival lotteries, fake lucky draws and charity appeals, malicious websites and more. Cybercrimes, especially phishing attempts, also spike in proportion to user activity and shopping trends at this time.
Hence, it becomes important for all netizens to stay alert, keep their personal information and financial data protected, and exercise due care and caution before clicking on any suspicious links or offers. Additionally, brands and platforms must also make strong cybersecurity a top priority to safeguard their customers and build trust.
Diwali Season and Phishing Attempts
Last year's report from CloudSEK's research team noted an uptick in cyber threats during the Diwali period, with cybercriminals leveraging the festive mood to launch phishing, betting and crypto scams. The report revealed that phishing attempts target the e-commerce industry and seek to damage the image of reputable brands. An astounding 828 distinct domains devoted to phishing activities were found in the Facebook Ads Library by CloudSEK's investigators. The report also highlighted the use of typosquatting techniques to create phony-but-plausible domains that trick users into believing they are legitimate websites by exploiting common typing errors or misspellings of popular domain names. As fraudsters increasingly misuse AI and deepfake technologies to their advantage, we expect even more of these dangers to surface over this year's festive season.
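Typosquatting of this kind can often be flagged programmatically. The sketch below is a minimal illustration rather than a production tool: it compares a suspicious domain against a small allowlist of genuine brand domains using string similarity, and the brand list and threshold are assumptions chosen purely for demonstration.

```python
# Minimal sketch: flag lookalike (typosquatted) domains by comparing them
# to a known-good list with a string-similarity ratio.
# The brand list and threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["flipkart.com", "amazon.in", "myntra.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Return True if the domain closely resembles, but is not, a known brand."""
    domain = domain.lower().strip()
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and similarity >= threshold:
            return True
    return False

print(looks_like_typosquat("flipkrat.com"))   # True  - transposed letters
print(looks_like_typosquat("flipkart.com"))   # False - exact match is legitimate
```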
CyberPeace Advisory
It is important that netizens exercise caution, especially during the festive period and follow cyber safety practices to avoid cybercrimes and phishing attempts. Some of the cyber hygiene best practices suggested by CyberPeace are as follows:
- Netizens must verify the sender’s email address and domain against the official website of the brand or entity the sender claims to be affiliated with.
- Netizens must avoid clicking links received through email, messages or shared on social media and consider visiting the official website directly (a simple link-checking sketch follows this list).
- Beware of urgent, time-sensitive offers pressuring immediate action.
- Spot phishing signs like spelling errors and suspicious URLs to avoid typosquatting tactics used by cybercriminals.
- Netizens must enable two-factor authentication (2FA) for an additional layer of security.
- Keep reputable antivirus and malware detection software installed and updated on your devices.
- Be wary of unsolicited festive deals, gifts and offers.
- Stay informed on common tactics used by cybercriminals to launch phishing attacks and recognise the red flags of any phishing attempts.
- To report cybercrimes, file a complaint at cybercrime.gov.in or call the helpline at 1930. You can also seek assistance from the CyberPeace helpline at +91 9570000066.
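For the link-checking advice above, here is a minimal, purely illustrative sketch of verifying that a shared "offer" link actually points to a brand's official domain. The official-domain list is an assumption for demonstration; when in doubt, type the official address into the browser yourself.

```python
# Minimal sketch: check whether a link shared in a festive "offer" actually
# points to the brand's official domain before opening it.
# The official-domain set is an illustrative assumption.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"amazon.in", "flipkart.com", "myntra.com"}

def is_official_link(url: str) -> bool:
    """True only if the link's hostname is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official_link("https://www.amazon.in/deals"))           # True
print(is_official_link("http://amazon.in.diwali-offer.xyz/win"))  # False - lookalike subdomain trick
```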
References
- https://www.outlookmoney.com/plan/financial-plan/this-diwali-beware-of-these-financial-scams
- https://www.businesstoday.in/technology/news/story/diwali-and-pooja-domains-being-exploited-by-online-scams-see-tips-to-help-you-stay-safe-405323-2023-11-10
- https://www.abplive.com/states/bihar/bihar-crime-news-15-cyber-fraud-arrested-in-nawada-before-diwali-2024-ann-2805088
- https://economictimes.indiatimes.com/tech/technology/phishing-you-a-happy-diwali-ai-advancements-pave-way-for-cybercriminals/articleshow/113966675.cms?from=mdr
Introduction
The fast-paced development of technology and the widespread use of social media platforms have enabled misinformation to spread rapidly, with wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it nearly impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, reinforcing the misinformation cycle and making it harder to control its spread across vast, interconnected networks. Algorithms judge content primarily on engagement metrics, since maximising engagement is what they are built to do; accordingly, algorithms and search engines surface the items a user is most likely to enjoy. This process was originally designed to cut through the clutter and deliver the most relevant information, but it can also end up spreading misinformation widely because of the viral nature of information and user interactions.
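To make these mechanics concrete, the snippet below is a deliberately simplified, purely illustrative ranking sketch, not any platform's real formula: posts are scored on engagement signals alone, so emotionally charged items that attract more reactions rise to the top regardless of accuracy. The post fields and weights are assumptions.

```python
# Purely illustrative sketch of engagement-only ranking: accuracy plays no
# part in the score, so high-engagement misinformation can outrank corrections.
# The post fields and weights are assumptions, not any platform's real formula.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to a fact-checker, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted higher because they spread content further.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

feed = [
    Post("Outrage-bait rumour", likes=900, shares=400, comments=350, is_accurate=False),
    Post("Careful fact-check of the rumour", likes=120, shares=30, comments=40, is_accurate=True),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>7.1f}  accurate={post.is_accurate}  {post.text}")
```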
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally charged content tends to trigger the strongest reactions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, which leads to the amplification of emotionally charged misinformation. They also favour content with viral potential, so false or misleading content can spread faster than corrections or factual reporting.
Additionally, platforms amplify popular content, presenting it to more users and spreading it faster. Fact-checking efforts struggle to keep up: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish real people from organized networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and spreads erroneous information further through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. This process can lead to "echo chambers", where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. Such a curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it difficult for users to discern credible information from misinformation. Algorithms also feed into a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
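The feedback loop described above can be illustrated with a toy simulation. The click probabilities and adjustment step below are assumptions, intended only to show how engagement-driven tuning gradually narrows a feed towards belief-confirming content.

```python
# Toy simulation (an assumption-laden sketch, not a real recommender): each time
# the user clicks belief-confirming content, the share of such content shown
# next round increases, illustrating the echo-chamber feedback loop.
import random

random.seed(42)
confirming_share = 0.5          # start with a balanced feed
CLICK_PROB_CONFIRMING = 0.8     # users click agreeable content more often
CLICK_PROB_OPPOSING = 0.2

for round_no in range(1, 11):
    shown_confirming = random.random() < confirming_share
    clicked = random.random() < (CLICK_PROB_CONFIRMING if shown_confirming
                                 else CLICK_PROB_OPPOSING)
    if clicked:
        # Engagement nudges the feed toward more of whatever was clicked.
        confirming_share += 0.05 if shown_confirming else -0.05
        confirming_share = min(max(confirming_share, 0.0), 1.0)
    print(f"round {round_no:2d}: confirming share = {confirming_share:.2f}")
```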
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, such as by inspecting messages or URLs for false information, can be computationally challenging and inefficient. The enormous volume of content shared daily means that misinformation can propagate far faster than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important to tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders all combine to create an environment in which misinformation thrives, which highlights the importance of countering it through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can be used to establish mechanisms for users to appeal algorithmic decisions and to ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps that can be taken to regulate misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. ‘Intermediaries’ are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but also for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
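As a rough illustration of how AI moderation scores and human oversight might be combined, the sketch below routes flagged posts according to a model's misinformation score. The thresholds and actions are assumptions, not any platform's actual policy.

```python
# Illustrative sketch of "AI score + human oversight" routing for flagged posts.
# The thresholds and labels are assumptions; real moderation pipelines differ.
def route_content(misinfo_score: float) -> str:
    """Map a model's misinformation score (0-1) to a moderation action."""
    if misinfo_score >= 0.90:
        return "auto-label and downrank, queue for human confirmation"
    if misinfo_score >= 0.60:
        return "send to human fact-checkers for review"
    return "no action"

for score in (0.95, 0.72, 0.30):
    print(score, "->", route_content(score))
```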
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)

Introduction
In the era of digitalisation, social media has become an essential part of our lives, with people spending significant time documenting every moment on these platforms. Social media networks such as WhatsApp, Facebook, and YouTube have emerged as significant sources of information. However, the proliferation of misinformation is alarming, since misinformation can have grave consequences for individuals, organisations, and society as a whole. Misinformation can spread rapidly via social media, leaving a greater impact on larger audiences. Bad actors exploit platform algorithms to serve their own agendas, using tactics such as clickbait headlines, emotionally charged language, and gaming of recommendation systems to amplify false information.
Impact
The impact of misinformation on our lives is devastating, affecting individuals, communities, and society as a whole. False or misleading health information can have serious consequences: belief in unproven remedies or vaccine misinformation can cause serious illness, disability, or even death. Misinformation about financial schemes or investments can lead to poor financial decisions, bankruptcy, and the loss of long-term savings.
In a democratic nation, misinformation can distort the formation of political opinion, and misinformation spread on social media during elections can affect voter behaviour, damage trust, and cause political instability.
Mitigating strategies
Minimising or stopping the spread of misinformation requires a multi-faceted approach. These strategies include promoting media literacy and critical thinking, verifying information before sharing, holding social media platforms accountable, regulating misinformation, supporting critical research, and fostering healthy means of communication to build a resilient society.
To put an end to the cycle of misinformation and move towards a better future, we must create plans to combat the spread of false information. This will require coordinated action from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.
The widespread dissemination of false information on social media platforms presents serious problems for people, groups, and society as a whole. As we delve deeper into understanding the nuances of this problem, it becomes clear that battling false information necessitates a thorough and multifaceted strategy.
Encouraging consumers to develop media literacy and critical thinking abilities is essential to preventing the spread of false information. Education is key to equipping people to distinguish between reliable sources and false information, and giving individuals the skills to assess information critically enables them to make informed choices about the content they share and consume. Initiatives to improve media literacy should be included in school curricula and promoted through public awareness campaigns.
Ways to Stop Misinformation
As we have seen, misinformation can have serious implications. Minimising or stopping its spread requires a multifaceted approach; here are some strategies to combat misinformation.
- Promote Media Literacy with Critical Thinking: Educate individuals on how to critically evaluate information, fact-check, and recognise common tactics used to spread misinformation. Users must apply critical thinking before forming an opinion or sharing content.
- Verify Information: Encourage people to verify information before sharing it, especially if it seems sensational or controversial, and to consume news from reputable sources that follow ethical journalistic standards.
- Accountability: Advocate for transparency and accountability from social media networks in the fight against misinformation. Encourage platforms to put procedures in place to detect and remove fraudulent content while boosting credible sources.
- Regulate Misinformation: Given the current situation, it is important to advocate for policies and regulations that address the spread of misinformation while safeguarding freedom of expression, and to promote transparency in online communication by identifying the source of information and disclosing any conflicts of interest.
- Support Critical Research: Invest in research on the sources, impacts, and remedies of misinformation. Support collaborative initiatives by social scientists, psychologists, journalists, and technologists to create evidence-based techniques for countering misinformation.
Conclusion
To prevent the cycle of misinformation and move towards responsible use of the Internet, we must create strategies to combat the spread of false information. This will require coordinated actions from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.