#FactCheck - Debunked: Viral Video Falsely Claims Allu Arjun Joins Congress Campaign
Executive Summary:
The viral video, in which South Indian actor Allu Arjun appears to support the Congress party's campaign for the upcoming Lok Sabha election, suggests that he has joined the party. Over the course of its investigation, the CyberPeace Research Team found that the video is a close-up of Allu Arjun marching as the Grand Marshal of the 2022 India Day Parade in New York, held to celebrate India's 75th Independence Day. Reverse image searches, Allu Arjun's official YouTube channel, news coverage, and stock image websites all corroborate this. The claim that Allu Arjun is part of a Congress party campaign is therefore fabricated and misleading.

Claims:
The viral video alleges that South Indian actor Allu Arjun is lending his popularity and star status to the Congress party's campaign for the upcoming 2024 Lok Sabha elections.



Fact Check:
Initially, we conducted a keyword search for news of actor Allu Arjun joining the Congress party but found nothing to support the claim. We did, however, find a video posted by SoSouth on February 20, 2022, showing Allu Arjun's father-in-law, Kancharla Chandrasekhar Reddy, joining the Congress after quitting former chief minister K. Chandrasekhar Rao's party.

Next, we segmented the video into keyframes and reverse-searched one of the images, which led us to the Federation of Indian Association website. The site identifies the picture as being from the 2022 India Day Parade, and it closely resembles the viral video, allowing a side-by-side comparison to determine whether the two are from the same event.
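For readers who want to reproduce the keyframe step, the sketch below samples frames from a clip with OpenCV so that each frame can be fed to a reverse image search. The filename, sampling interval, and output naming are illustrative assumptions, not the exact tooling used in this investigation.

```python
# Minimal sketch of the keyframe step, assuming opencv-python is installed.
# The input filename and the 2-second sampling interval are illustrative.
import cv2

def extract_keyframes(video_path, out_prefix="keyframe", every_n_seconds=2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25          # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))      # frames to skip between samples
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                                 # end of video (or read error)
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

print(extract_keyframes("viral_clip.mp4"), "frames saved for reverse image search")
```

Each saved JPEG can then be uploaded to a reverse image search engine, which is how the Federation of Indian Association page surfaced in this case.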

Taking a cue from this, we performed another keyword search for "India Day Parade 2022" and found a video uploaded on Allu Arjun's official YouTube channel. It is the same video that has recently been shared on social media with a different context. The caption of the original video reads, "Icon Star Allu Arjun as Grand Marshal @ 40th India Day Parade in New York | Highlights | #IndiaAt75".

The reverse image search also surfaced further evidence: we found the image on Shutterstock, where the description of the photo reads, “NYC India Day Parade, New York, NY, United States - 21 Aug 2022 Parade Grand Marshall Actor Allu Arjun is seen on a float during the annual Indian Day Parade on Madison Avenue in New York City on August 21, 2022.”

With this, we concluded that the claim made in the viral video, that Allu Arjun is supporting a 2024 Lok Sabha election campaign, is baseless and false.
Conclusion:
The viral video circulating on social media has been taken out of context. The clip, which shows Allu Arjun's participation in the 2022 India Day Parade, is not related to the ongoing election campaign of any political party.
Hence, the assertion that Allu Arjun is campaigning for the Congress party is false and misleading.
- Claim: A viral video shows actor Allu Arjun rallying for the Congress party.
- Claimed on: X (Formerly known as Twitter) and YouTube
- Fact Check: Fake & Misleading
Related Blogs

Executive Summary:
A viral online video claims that Elon Musk, billionaire and founder of Tesla and SpaceX, is promoting a cryptocurrency giveaway. Using reputable and well-verified AI detection tools, the CyberPeace Research Team has confirmed that the video is a deepfake created by manipulating Mr. Musk's facial expressions and voice. The original footage has no connection to any cryptocurrency, nor to any distribution of BTC or ETH to crypto-trading followers. The claim that Mr. Musk endorses such a giveaway is therefore false and misleading.

Claims:
A viral video falsely claims that Elon Musk, billionaire and founder of Tesla, is endorsing a crypto giveaway project for his crypto-enthusiast followers by giving away a portion of his Bitcoin and Ethereum holdings.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Mr. Elon Musk but none of them included any promotion of any cryptocurrency giveaway. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis confirmed with 99.0% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.
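As a reproducible complement to such detection tools, the sketch below compares keyframes extracted from a suspect clip against frames from known legitimate footage using perceptual hashing. The imagehash library and the folder names are assumptions made for illustration; they are not the tools used in this analysis.

```python
# Minimal sketch, assuming keyframes from the viral clip and from known
# legitimate footage have already been saved as JPEGs in two folders.
# Requires Pillow and imagehash; folder names are illustrative.
from pathlib import Path
from PIL import Image
import imagehash

def folder_hashes(folder):
    """Perceptual hashes for every JPEG in a folder."""
    return [imagehash.phash(Image.open(p)) for p in sorted(Path(folder).glob("*.jpg"))]

suspect = folder_hashes("keyframes_viral")
reference = folder_hashes("keyframes_reference")

# Subtracting two ImageHash objects gives their Hamming distance; a small
# distance suggests a suspect frame was lifted from the reference footage,
# since a face-swapped deepfake still matches closely on background and framing.
closest = min(s - r for s in suspect for r in reference)
print("Closest perceptual-hash distance:", closest)
```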



Additionally, an extensive review of official statements and interviews with Mr. Musk revealed no mention of any such giveaway. No credible reports were found linking Elon Musk to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Elon Musk is promoting a crypto giveaway is a deepfake. Analysis using tools such as Google Lens and AI detection services confirms that the video was manipulated using AI technology, and no official source corroborates any such giveaway. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A viral video on social media shows Elon Musk giving away cryptocurrency.
- Claimed on: X (Formerly Twitter)
- Fact Check: False & Misleading

Disclaimer:
This report is based on extensive research conducted by CyberPeace Research using publicly available information, and advanced analytical techniques. The findings, interpretations, and conclusions presented are based on the data available at the time of study and aim to provide insights into global ransomware trends.
The statistics mentioned in this report are specific to the scope of this research and may vary based on the scope and resources of other third-party studies. Additionally, all data referenced is based on claims made by threat actors and does not imply confirmation of the breach by CyberPeace. CyberPeace includes this detail solely to provide factual transparency and does not condone any unlawful activities. This information is shared only for research purposes and to spread awareness. CyberPeace encourages individuals and organizations to adopt proactive cybersecurity measures to protect against potential threats.
CyberPeace Research does not claim to have identified or attributed specific cyber incidents to any individual, organization, or nation-state beyond the scope of publicly observable activities and available information. All analyses and references are intended for informational and awareness purposes only, without any intention to defame, accuse, or harm any entity.
While every effort has been made to ensure accuracy, CyberPeace Research is not liable for any errors or omissions, for subsequent interpretations of the findings, or for any unlawful use of the findings by third parties. The report is intended to inform and support cybersecurity efforts globally and should be used as a guide to foster proactive measures against cyber threats.
Executive Summary:
The 2024 ransomware landscape reveals alarming global trends, with 166 Threat Actor Groups leveraging 658 servers/underground resources and mirrors to execute 5,233 claims across 153 countries. Monthly fluctuations in activity indicate strategic, cyclical targeting, with peak periods aligned with vulnerabilities in specific sectors and regions. The United States was the most targeted nation, followed by Canada, the UK, Germany, and other developed countries, with the northwestern hemisphere experiencing the highest concentration of attacks. Business Services and Healthcare bore the brunt of these operations due to their high-value data, alongside targeted industries such as Pharmaceuticals, Mechanical, Metal, Electronics, and Government-related professional firms. Retail, Financial, Technology, and Energy sectors were also significantly impacted.
This research was conducted by CyberPeace Research using a systematic modus operandi, which included advanced OSINT (Open-Source Intelligence) techniques, continuous monitoring of Ransomware Group activities, and data collection from 658 servers and mirrors globally. The team utilized data scraping, pattern analysis, and incident mapping to track trends and identify hotspots of ransomware activity. By integrating real-time data and geographic claims, the research provided a comprehensive view of sectoral and regional impacts, forming the basis for actionable insights.
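To make the aggregation step concrete, here is a minimal sketch assuming the scraped claims have already been normalised into a flat CSV; the filename and column names (group, country, sector, date) are illustrative assumptions rather than the team's actual pipeline.

```python
# Minimal sketch of incident aggregation over a hypothetical claims dataset.
# Assumed columns: group, country, sector, date (one row per ransomware claim).
import pandas as pd

claims = pd.read_csv("ransomware_claims_2024.csv", parse_dates=["date"])

by_country = claims["country"].value_counts().head(10)    # top targeted countries
by_sector = claims["sector"].value_counts().head(10)      # most affected sectors
by_month = claims.set_index("date").resample("M").size()  # monthly incident counts

print(by_country, by_sector, by_month, sep="\n\n")
```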
The findings emphasize the urgent need for proactive Cybersecurity strategies, robust defenses, and global collaboration to counteract the evolving and persistent threats posed by ransomware.
Overview:
This report provides insights into ransomware activities monitored throughout 2024. Data was collected by observing 166 Threat Actor Groups using ransomware technologies across 658 servers/underground resources and mirrors, resulting in 5,233 claims worldwide. The analysis offers a detailed examination of global trends, targeted sectors, and geographical impact.
Top 10 Threat Actor Groups:
The ransomware group ‘ransomhub’ emerged as the leading threat actor, responsible for 527 incidents worldwide, followed closely by ‘lockbit3’ with 522 incidents and ‘play’ with 351. Other prominent groups include ‘akira’, ‘hunters’, ‘medusa’, ‘blackbasta’, ‘qilin’, ‘bianlian’, and ‘incransom’. These groups typically employ advanced tactics to target critical sectors, highlighting the urgent need for robust cybersecurity measures to mitigate their impact and protect organizations from such threats.

Monthly Ransomware Incidents:
Monthly incident counts began at 284 in January 2024, the lowest point of the year. Activity rose steadily in the following months, reaching a first peak of 557 in May 2024, then dropped sharply to 339 in June. A gradual recovery followed, with counts climbing to 446 by August. September saw another decline to 389, after which a sharp rise culminated in the year's highest figure of 645 in November. The year closed with a slight decline to 498 in December 2024 (as of 28 December).

Top 10 Targeted Countries:
- The United States consistently topped the list as the primary target probably due to its advanced economic and technological infrastructure.
- Other heavily targeted nations include Canada, the UK, Germany, Italy, France, Brazil, Spain, and India.
- A total of 153 countries reported ransomware attacks, reflecting the global scale of these cyber threats.

Top Affected Sectors:
- Business Services and Healthcare faced the brunt of ransomware activity due to the sensitive nature of their operations.
- Specific industries under threat:
- Pharmaceutical, Mechanical, Metal, and Electronics industries.
- Professional firms within the Government sector.
- Other sectors:
- Retail, Financial, Technology, and Energy sectors were also significant targets.

Geographical Impact:
The continuous and precise OSINT (Open-Source Intelligence) work on the platform, performed as a follow-up to data scraping, allows a complete view of the geography of cyber attacks based on their claims. The northwestern region of the world appears to be the most severely affected by Threat Actor groups. The figure below illustrates this geographic distribution on the map.

Ransomware Threat Trends in India:
In 2024, the research identified 98 ransomware incidents impacting various sectors in India, marking a 55% increase compared to the 63 incidents reported in 2023. This surge highlights a concerning trend, as ransomware groups continue to target India's critical sectors due to its growing digital infrastructure and economic prominence.

Top Threat Actor Groups Targeting India:
Among these threat actors, ‘killsec’ was the most frequent. ‘lockbit3’ follows as the second most prominent threat, with significant but lower activity than killsec. Other groups, such as ‘ransomhub’, ‘darkvault’, and ‘clop’, show moderate activity levels. Entities like ‘bianlian’, ‘apt73/bashe’, and ‘raworld’ have low frequencies, indicating limited activity, while groups such as ‘aps’ and ‘akira’ have the lowest representation, indicating minimal activity. The chart highlights a clear disparity in activity levels among these threats, emphasizing the need for targeted cybersecurity strategies.

Top Impacted Sectors in India:
The pie chart illustrates the distribution of incidents across various sectors, highlighting that the industrial sector is the most frequently targeted, accounting for 75% of the total incidents. This is followed by the healthcare sector, which represents 12% of the incidents, making it the second most affected. The finance sector accounts for 10% of the incidents, reflecting a moderate level of targeting. In contrast, the government sector experiences the least impact, with only 3% of the incidents, indicating minimal targeting compared to the other sectors. This distribution underscores the critical need for enhanced cybersecurity measures, particularly in the industrial sector, while also addressing vulnerabilities in healthcare, finance, and government domains.

Month Wise Incident Trends in India:
The chart indicates a fluctuating trend with notable peaks in May and October, suggesting potential periods of heightened activity or incidents during these months. The data starts at 5 in January and drops to its lowest point, 2, in February. It then gradually increases to 6 in March and April, followed by a sharp rise to 14 in May. After peaking in May, the metric significantly declines to 4 in June but starts to rise again, reaching 7 in July and 8 in August. September sees a slight dip to 5 before the metric spikes dramatically to its highest value, 24, in October. Following this peak, the count decreases to 10 in November and then drops further to 7 in December.
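The month-wise figures quoted above can be tallied directly; the short sketch below reproduces them and checks that they sum to the 98 incidents reported for India in 2024 and to the roughly 55% year-on-year increase over the 63 incidents of 2023.

```python
# Month-wise ransomware incident counts for India in 2024, as quoted above.
monthly_india_2024 = {
    "Jan": 5, "Feb": 2, "Mar": 6, "Apr": 6, "May": 14, "Jun": 4,
    "Jul": 7, "Aug": 8, "Sep": 5, "Oct": 24, "Nov": 10, "Dec": 7,
}

total = sum(monthly_india_2024.values())
increase = (total - 63) / 63 * 100          # vs the 63 incidents reported in 2023
print("Total incidents in India, 2024:", total)     # 98
print(f"Year-on-year increase: {increase:.1f}%")    # ~55.6%, the ~55% cited above
```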

CyberPeace Advisory:
- Implement Data Backup and Recovery Plans: Backups are your safety net. Regularly saving copies of your important data ensures you can bounce back quickly if ransomware strikes. Make sure these backups are stored securely—either offline or in a trusted cloud service—to avoid losing valuable information or facing extended downtime.
- Enhance Employee Awareness and Training: People often unintentionally open the door to ransomware. By training your team to spot phishing emails, social engineering tricks, and other scams, you empower them to be your first line of defense against attacks.
- Adopt Multi-Factor Authentication (MFA): Think of MFA as locking your door and adding a deadbolt. Even if attackers get hold of your password, they’ll still need that second layer of verification to break in. It’s an easy and powerful way to block unauthorized access (see the TOTP sketch after this list).
- Utilize Advanced Threat Detection Tools: Smart tools can make a world of difference. AI-powered systems and behavior-based monitoring can catch ransomware activity early, giving you a chance to stop it in its tracks before it causes real damage.
- Conduct Regular Vulnerability Assessments: You can’t fix what you don’t know is broken. Regularly checking for vulnerabilities in your systems helps you identify weak spots. By addressing these issues proactively, you can stay one step ahead of attackers.
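As a small illustration of the MFA recommendation above, the sketch below generates and verifies a time-based one-time password with the pyotp library; the secret, account name, and issuer are purely illustrative placeholders.

```python
# Minimal TOTP sketch for the MFA recommendation above, using pyotp.
# The secret, account name, and issuer below are illustrative placeholders.
import pyotp

secret = pyotp.random_base32()           # provisioned once per user/device
totp = pyotp.TOTP(secret)

# URI that an authenticator app can import (usually shown as a QR code).
print(totp.provisioning_uri(name="user@example.org", issuer_name="ExampleOrg"))

code = totp.now()                        # what the authenticator app displays
print("Current one-time code:", code)
print("Server-side check passes:", totp.verify(code))
```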
Conclusion:
The 2024 ransomware landscape reveals the critical need for proactive cybersecurity strategies. High-value sectors and technologically advanced regions remain the primary targets, emphasizing the importance of robust defenses. As we move into 2025, it is crucial to anticipate the evolution of ransomware tactics and adopt forward-looking measures to address emerging threats.
Global collaboration, continuous innovation in cybersecurity technologies, and adaptive strategies will be imperative to counteract the persistent and evolving threats posed by ransomware activities. Organizations and governments must prioritize preparedness and resilience, ensuring that lessons learned in 2024 are applied to strengthen defenses and minimize vulnerabilities in the year ahead.

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and services ranging from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier—and more profitable—to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and that 99 percent of the victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews of victims of deepfake porn last year, the study said 63 percent of participants talked about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the platforms host thousands of pieces of non-consensual deepfake pornography.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, including taking requests for celebrities. To encourage creators further, these platforms enable them to monetize their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and creators are not only targeting celebrities; ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a payment gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the accounts of their employees or stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring such platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a boundaryless crime that can involve several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users, and the Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/