#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni, popular on social media, promoting an IPL betting Telegram channel has been found to be fake. The audio in the morphed video was produced through AI voice cloning, a manipulation identified by AI detection and deepfake analysis tools. In the original footage, Mr. Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
A Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies typical of AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analyzed the video using the deepfake detection tool by True Media, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and checked it with the audio deepfake detection tool Hive Moderation, which found the audio to be 99.9% AI-generated.
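The audio-extraction step above can be sketched in a few lines. This is a minimal illustration assuming the ffmpeg command-line tool is installed; "viral_video.mp4" and "viral_audio.wav" are placeholder filenames, not the actual files analyzed.

```python
# Sketch: strip the audio track out of a suspect video so it can be
# submitted to an audio deepfake detector separately.
import subprocess

def ffmpeg_audio_cmd(video_path: str, audio_path: str) -> list:
    """Build an ffmpeg command that drops the video stream and writes
    16 kHz mono WAV audio, a common input format for speech analysis."""
    return [
        "ffmpeg", "-y",
        "-i", video_path,
        "-vn",            # drop the video stream
        "-ac", "1",       # downmix to mono
        "-ar", "16000",   # resample to 16 kHz
        audio_path,
    ]

def extract_audio(video_path: str, audio_path: str) -> None:
    subprocess.run(ffmpeg_audio_cmd(video_path, audio_path), check=True)

# Usage (not run here): extract_audio("viral_video.mp4", "viral_audio.wav")
```

The extracted WAV file can then be uploaded to a detection service of choice.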

We then divided the video into keyframes and reverse-searched one of them, which led us to the original video uploaded by the YouTube channel LIV Crime.
Upon analyzing it, we found that at the 3:18 timestamp the video had been edited and overlaid with an AI-generated voice.
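The keyframe step described above can be sketched as follows: sample frames at a fixed interval so that individual stills can be reverse-image-searched. This assumes OpenCV is installed (`pip install opencv-python`); "viral_video.mp4" is a placeholder filename.

```python
# Sketch: sample one frame every few seconds from a suspect video.

def sample_indices(total_frames: int, fps: float, every_secs: float = 2.0) -> list:
    """Frame indices spaced roughly `every_secs` seconds apart."""
    step = max(1, int(fps * every_secs))
    return list(range(0, total_frames, step))

def extract_keyframes(video_path: str, out_prefix: str = "frame") -> int:
    import cv2  # imported lazily so the helper above needs no dependency
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = 0
    for idx in sample_indices(total, fps):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"{out_prefix}_{idx:06d}.jpg", frame)  # one still per sample
        saved += 1
    cap.release()
    return saved

# Usage (not run here): extract_keyframes("viral_video.mp4")
```

Each saved still can then be fed to a reverse image search to locate the original footage.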

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promoted an IPL betting Telegram channel is false. The video has been manipulated using AI voice cloning, as confirmed by both the Hive Moderation AI detector and the True Media detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Executive Summary:
A video is making the rounds on the internet allegedly featuring Ranveer Singh criticizing Prime Minister Narendra Modi and his government. Close examination reveals that the video has been tampered with to alter the audio. The original videos, posted by several media outlets, actually show Ranveer Singh praising Varanasi, professing his love for Lord Shiva, and acknowledging PM Modi's role in enhancing the city's cultural charm and infrastructural development. Differences in lip synchronization, and the absence of any criticism of PM Modi in the original footage, show that the clip has been manipulated to spread misinformation.

Claims:
A viral video shows Bollywood actor Ranveer Singh criticizing Prime Minister Narendra Modi.

Fact Check:
Upon receiving the video, we divided it into keyframes and reverse-searched one of the images. This led us to another video of Ranveer Singh, posted by an Instagram account named “The Indian Opinion News”, in which he talks about his experience of visiting the Kashi Vishwanath Temple with Bollywood actress Kriti Sanon. Watching the full video, we found no indication of any criticism of PM Modi.

Taking a cue from this, we ran keyword searches to find the full video of the interview. We found many versions uploaded by media outlets, but none of them shows the criticism of PM Modi claimed in the viral clip.

Ranveer Singh shared his feelings about Lord Shiva, his opinions on the city, and the efforts undertaken by Prime Minister Modi to keep the history and heritage of Varanasi alive, as well as the city's ongoing development projects. The discrepancy in the viral clip is clear on close inspection: the lips are not in sync with the audio we hear, whereas in the original video the lip movements match the words perfectly. The lack of evidence for the claim and the discrepancies in the video prove that it was edited to misrepresent the original interview of Bollywood actor Ranveer Singh. Hence, the claim is misleading and false.
Conclusion:
The video claiming that Ranveer Singh criticized PM Narendra Modi is not genuine. Investigation shows that it was edited by replacing the audio; the original footage shows Singh speaking positively about Varanasi and PM Modi's work. The mismatched lip-syncing and the lack of supporting evidence highlight how easily misinformation can be created through simple editing. Ultimately, the claim is false and misleading.
- Claim: A viral video features Ranveer Singh criticizing Prime Minister Narendra Modi and his government.
- Claimed on: X (formerly known as Twitter)
- Fact Check: Fake & Misleading
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies are capable of creating manipulative audio/video content, propagating political propaganda, defaming individuals, or inciting societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's capabilities to generate content have grown exponentially in recent years. Legitimate uses of AI often take a backseat, resulting in the exploitation of content that already exists on the internet. One major example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track the content and determine whether it is true or false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly damaging consequences. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation relate to elections, public health, and communal issues. These topics share one common factor: they evoke strong emotions in people, so such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability, and even violence. These developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language creation, which is closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and has highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content makes it difficult to hold perpetrators accountable, given the massive amount of data generated. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Developing strategies that cater to this multilingual population is therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework specific in its application to AI-generated content. It should include stricter penalties for the originators and disseminators of fake content, in proportion to its consequences. The framework should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
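Real AI-driven detection systems of the kind mentioned above rely on large trained models, but the underlying idea can be illustrated in miniature. The sketch below is a toy word-count Naive Bayes classifier in pure Python; the training snippets and the "suspect"/"credible" labels are invented purely for illustration and are nowhere near a real misinformation detector.

```python
# Toy illustration of statistical content flagging: a tiny Naive Bayes
# classifier over word counts. Illustrative only, not production-grade.
import math
from collections import Counter

class TinyNaiveBayes:
    def __init__(self):
        self.word_counts = {"suspect": Counter(), "credible": Counter()}
        self.doc_counts = {"suspect": 0, "credible": 0}

    def train(self, text: str, label: str) -> None:
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text: str) -> str:
        words = text.lower().split()
        vocab = set().union(*self.word_counts.values())
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.word_counts:
            total = sum(self.word_counts[label].values())
            score = math.log(self.doc_counts[label] / total_docs)  # class prior
            for w in words:
                # Laplace smoothing so unseen words don't zero out a class
                score += math.log(
                    (self.word_counts[label][w] + 1) / (total + len(vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented training snippets, purely for illustration.
clf = TinyNaiveBayes()
clf.train("shocking secret video leaked celebrity", "suspect")
clf.train("miracle cure doctors refuse reveal", "suspect")
clf.train("election commission announces polling schedule", "credible")
clf.train("government publishes official budget report", "credible")
print(clf.predict("shocking leaked video"))  # → suspect
```

Production systems combine far richer signals (media forensics, account behaviour, network propagation patterns) with human review, but the principle of scoring content against learned patterns is the same.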
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks scale with the rapid pace of the nation's technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge and counteract AI-generated misinformation with comprehensive strategies that focus not only on regulation and technological innovation but also on public education. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, and digital defence frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
The 2025 Delhi Legislative Assembly election is just around the corner, scheduled for February 5, 2025, with all 70 constituencies heading to the polls. The eagerly awaited results will be announced on February 8, bringing excitement as the people of Delhi prepare to see their chosen leader take the helm as Chief Minister. As the election season unfolds, social media becomes a buzzing hub of activity, with information spreading rapidly across platforms. However, this period also sees a surge in online mis/disinformation, making elections a hotspot for misleading content. It is crucial for citizens to exercise caution and remain vigilant against false or deceptive online posts, videos, or content. Empowering voters to distinguish facts from fiction and recognize the warning signs of misinformation is essential to ensure informed decision-making. By staying alert and well-informed, we can collectively safeguard the integrity of the democratic process.
Risks of Mis/Disinformation
According to the 2024 survey report titled ‘Truth Be Told’ by ‘The 23 Watts’, 90% of Delhi’s youth (Gen Z) report witnessing a spike in fake news during elections, and 91% believe it influences voting patterns. Furthermore, the research highlights that 14% of Delhi’s youth tend to share sensational news without fact-checking, relying solely on conjecture.
Recent Measures by ECI
Recently, the Election Commission of India (ECI) issued a fresh advisory directing political parties to ensure the responsible use of AI-generated content in their campaigns. The guidelines aim to curb the use of "deepfakes" and AI-generated distorted content by political parties and their representatives to disturb the level playing field. The ECI has mandated the labelling of all AI-generated content used in election campaigns to enhance transparency, combat misinformation, and ensure a fair electoral process in the face of rapidly advancing AI technologies.
Best Practices to Avoid Electoral Mis/Disinformation
- Seek Information from Official Sources: Voters should rely on authenticated sources for information. These include reading official manifestos, following verified advisory notifications from the Election Commission, and avoiding unverified claims or rumours.
- Consume News Responsibly: Voters must familiarize themselves with dependable news channels and make use of reputable fact-checking organizations that uphold the integrity of news content. It is crucial to refrain from randomly sharing or forwarding any news post, video, or message without verifying its authenticity. Consume responsibly, fact-check thoroughly, and share cautiously.
- Role of Fact-Checking: Cross-checking and verifying information from credible sources are indispensable practices. Reliable and trustworthy fact-checking tools are vital for assessing the authenticity of information in the digital space. Voters are encouraged to use these tools to validate information from authenticated sources and adopt a habit of verification on their own. This approach fosters a culture of critical thinking, empowering citizens to counter deceptive deepfakes and malicious misinformation effectively. It also helps create a more informed and resilient electorate.
- Be Aware of Electoral Deepfakes: In the era of artificial intelligence, synthetic media presents significant challenges. Just as videos can be manipulated, voices can also be cloned. It is essential to remain vigilant against the misuse of deepfake audio and video content by malicious actors. Recognize the warning signs, such as inconsistencies or unnatural details, and stay alert to misleading multimedia content. Proactively question and verify such material to avoid falling prey to deception.
References
- https://www.financialexpress.com/business/brandwagon-90-ofnbsp-delhi-youth-witness-spike-in-fake-news-during-elections-91-believe-it-influences-voting-patterns-revealed-the-23-watts-report-3483166/
- https://timesofindia.indiatimes.com/india/election-commission-urges-parties-to-disclose-ai-generated-campaign-content-in-interest-of-transparency/articleshow/117306865.cms
- https://www.thehindu.com/news/national/election-commission-issues-advisory-on-use-of-ai-in-poll-campaigning/article69103888.ece
- https://indiaai.gov.in/article/election-commission-of-india-embraces-ai-ethics-in-campaigning-advisory-on-labelling-ai-generated-content