#FactCheck - Afghan Cricket Team's Chant Misrepresented in Viral Video
Executive Summary:
Footage allegedly showing the Afghanistan cricket team singing ‘Vande Mataram’ after India’s triumph in the ICC T20 World Cup 2024 surfaced online. The CyberPeace Research team carried out thorough research to uncover the truth about the viral video. The original clip was posted on X by Afghan cricketer Mohammad Nabi on October 23, 2023, and shows the Afghan players chanting ‘Allah-hu Akbar’ after their ODI World Cup victory over Pakistan. This debunks the assertion in the viral video that the players were chanting ‘Vande Mataram’.

Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.

Fact Check:
Upon receiving the posts, we analyzed the video and found inconsistencies, such as mismatched lip sync.
We ran the clip through an AI audio detection tool named “True Media”, which rated the audio as 95% likely AI-generated, deepening our suspicion about the video’s authenticity.


For further verification, we divided the video into keyframes and reverse-searched one of the frames to find credible sources. This led us to the X account of Afghan cricketer Mohammad Nabi, where he had uploaded the same video on October 23, 2023 with the caption, “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people”.
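For readers who want to replicate this step, the sketch below shows one way to split a clip into frames with OpenCV so that individual frames can be submitted to a reverse image search engine. This is a minimal illustration, not the exact tooling the research team used, and the file names are hypothetical.

```python
# Minimal keyframe extraction sketch for reverse image search.
# Requires: pip install opencv-python
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> list:
    """Save roughly one frame per `every_n_seconds` and return the file names."""
    cap = cv2.VideoCapture(video_path)           # open the suspect clip
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))    # frames to skip between saves
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                               # end of video
            break
        if index % step == 0:
            name = f"frame_{index:05d}.png"      # hypothetical output name
            cv2.imwrite(name, frame)             # write the frame to disk
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Each saved frame can then be uploaded to a reverse image search engine
# (e.g. Google Images or TinEye) to trace the original source of the clip.
if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))   # hypothetical input file
```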

We found that the audio differs from the viral video: in the original clip the Afghan players can be heard chanting “Allah hu Akbar” after their victory against Pakistan. The Afghan players were not chanting Vande Mataram after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and misrepresents the actual context. We have previously debunked similar AI voice alteration videos. Netizens must be careful before believing such misleading information.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The video was altered from the original through audio manipulation. The original video of the Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar" was posted on the official X account of Mohammad Nabi, an Afghan cricketer. Thus the information is fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Related Blogs

Introduction
The recent cyber-attack on Jaguar Land Rover (JLR), one of the world's best-known car makers, has exposed extensive weaknesses in the interconnected nature of international supply chains. The incident highlights the growing cybersecurity challenges facing industries undergoing digital transformation. With production stopped in several UK factories, supply chains disrupted, and service delayed for customers worldwide, this cyber-attack shows how cyber events can ripple into operational, financial, and reputational risks for large businesses.
The Anatomy of a Breakdown
Jaguar Land Rover, a Tata Motors subsidiary, was forced to disable its IT infrastructure following a cyber-attack over the weekend. The shutdown was an emergency measure taken to mitigate damage, and the disruption to business was serious.
- Production Halted: The car plants at Halewood (Merseyside) and Solihull (West Midlands) and the engine plant at Wolverhampton were completely shut down.
- Sales and Distribution: Car sales were significantly impaired during September, a high-volume registration period, although certain transactions still went through via manual procedures.
- Global Effect: The breakdown was not limited to the UK; dealers and repair specialists across the world, including in Australia, were left with inaccessible parts databases.
JLR called the recovery process "extremely complex", as it involved a controlled restoration of systems and alternative workarounds for offline services. The immediate, massive impact on suppliers and customers has raised larger questions about the resilience of digital ecosystems in the automotive value chain.
The Human Impact: Beyond JLR's Factories
The implications of the cyber-attack have extended beyond the production lines of JLR:
- Independent Garages: Repair centres such as Nyewood Express of West Sussex reported that they could not access vital parts databases, which brought repair work to a standstill and left clients waiting indefinitely.
- Global Dealers: Land Rover specialists as far away as Tasmania reported total system outages, highlighting the global dependence on centralized IT systems.
- Customer Frustration: Regular customers in need of urgent repairs were stranded by the inability to order replacement parts from original manufacturers.
This attack is an example of the cascading effect of cyber disruptions across interconnected industries, where a single point of failure can paralyze entire ecosystems.
The Culprit: The Hacker Collective
Responsibility for the hack has been claimed by a hacker collective calling itself "Scattered Lapsus$ Hunters." The group says it consists of young English-speaking hackers and has previously targeted blue-chip brands such as Marks & Spencer. While the attackers have not publicly stated whether they exfiltrated sensitive information or deployed ransomware, they posted screenshots of internal JLR documents, including troubleshooting guides and system logs, indicating unauthorized access to some of Jaguar Land Rover's core IT systems.
Jaguar Land Rover has stated that there is, so far, no evidence that customer data was accessed. Even so, the incident raises serious questions about insider threats, social engineering, and the effectiveness of cybersecurity governance frameworks.
Cybersecurity Weaknesses and Lessons Learned
The JLR attack exposes several weaknesses common to large-scale manufacturing organizations:
- Centralized IT Dependencies: Today's auto firms rely on global IT systems for operations, logistics, and customer care; a single compromise can lead to widespread outages.
- Supply Chain Vulnerabilities: Tier-1 and Tier-2 suppliers depend on OEM systems to place and trace component orders, so disruption at the OEM level automatically halts their processes.
- Inadequate Incident Visibility: Several suppliers complained of receiving no clear information from JLR, which increased uncertainty and financial loss.
- Rise of Youth Hacking Groups: The involvement of young hacker groups highlights the need for active monitoring and community-level cybersecurity awareness initiatives.
Broader Industry Context
This incident fits a broader pattern of escalating cyber-attacks on the automotive industry, a sector being rapidly digitalised through connected cars, IoT-enabled factories, and cloud-based operations. In 2023, JLR awarded an £800 million contract to Tata Consultancy Services (TCS) to support the company's digital transformation and cybersecurity enhancement. This attack shows that, no matter how much is spent, a poorly conceptualised security programme cannot stand up to ever-changing cyber threats.
What Can Organizations Do? – Cyberpeace Recommendations
To contain risks and build resilience against such events, organizations need to implement a multi-layered approach to cybersecurity:
- Adopt Zero Trust Architecture - Presume breach as the new normal. Verify each user, device, and application before access is given, even inside the internal network (a minimal sketch of this access-check pattern follows this list).
- Enhance Supply Chain Security - Perform targeted assessments on a routine basis to identify risk factors among suppliers. Include rigorous cybersecurity provisions in supplier agreements, covering vulnerability disclosure and agreed incident-response timelines.
- Resilient Backups and Restoration - Keep backups isolated and encrypted so that operations can continue after ransomware incidents or other forms of system compromise.
- Periodic Red Team Exercises - Simulate cyber-attacks on IT and OT systems to uncover vulnerabilities and evaluate current incident-response measures.
- Employee Training and Insider Threat Monitoring - With social engineering at the forefront of attack vectors, continuous training and behavioural monitoring are needed to prevent credential compromise.
- Public-Private Partnership - Engage with government agencies and cybersecurity groups to share threat intelligence and adopt best practices aligned with ISO/IEC 27001 and the NIST Cybersecurity Framework.
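To make the zero-trust recommendation concrete, the sketch below evaluates identity, device posture, and authorization on every request instead of trusting the internal network. It is a minimal illustration only; every function and field name here is a hypothetical placeholder, not any specific vendor's API.

```python
# Illustrative zero-trust access check: every request is verified, even when
# it originates inside the corporate network. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    token_valid: bool        # e.g. an MFA-backed session token is still valid
    device_compliant: bool   # e.g. disk encrypted, endpoint agent running
    network_zone: str        # "internal" or "external" -- deliberately ignored
    resource: str

# Hypothetical allow-list mapping users to the resources they may reach.
PERMISSIONS = {"alice": {"parts_db"}, "bob": {"parts_db", "hr_portal"}}

def is_allowed(req: AccessRequest) -> bool:
    """Grant access only if identity, device health, and authorization all pass.

    Note that req.network_zone is never consulted: being "inside" earns no trust."""
    if not req.token_valid:
        return False                      # unauthenticated request -> deny
    if not req.device_compliant:
        return False                      # unhealthy device -> deny
    return req.resource in PERMISSIONS.get(req.user_id, set())

# An internal request still fails when the device is non-compliant,
# while a healthy external request from an authorized user succeeds.
print(is_allowed(AccessRequest("alice", True, False, "internal", "parts_db")))  # False
print(is_allowed(AccessRequest("alice", True, True, "external", "parts_db")))   # True
```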
Conclusion
The hacking at Jaguar Land Rover is a stark reminder that cybersecurity can no longer be treated as a back-office function; it is a business-continuity concern at the very core of the organization. As digital transformation proceeds, the attack surface grows, making such entities attractive targets for cybercriminals. Operational security demands proactive cybersecurity, resilient supply chains, and stakeholders working together. The JLR attack is not an isolated event; it is a warning for the entire automotive sector to maintain security at every level of digitalization.
References
- https://www.bbc.com/news/articles/c1jzl1lw4y1o
- https://www.theguardian.com/business/2025/sep/07/disruption-to-jaguar-land-rover-after-cyber-attack-may-last-until-october
- https://uk.finance.yahoo.com/news/jaguar-factory-workers-told-stay-073458122.html

Introduction
According to a shocking report, there are multiple scam loan apps on the App Store in India that charge excessive interest rates and force users to pay through blackmail and harassment. Apple has banned and removed these apps from the App Store, but they may still be installed and running on your iPhone. You must delete any of these apps if you have downloaded them. Read on for the names of these apps and how the fraud operated.
Why did Apple ban these apps?
- Apple has taken action to remove certain apps from the Indian App Store. These apps were engaging in unethical behaviour, such as impersonating financial institutions, demanding high fees, and threatening borrowers. Here are the titles of these apps, as well as what Apple has said about their suspension.
- Following user complaints, Apple removed six loan apps from the Indian App Store, including White Kash, Pocket Kash, Golden Kash, and Ok Rupee.
- According to multiple user reviews, these apps sought unjustified access to users’ contact lists and media and charged exorbitant, unwarranted fees. The companies behind them have also been found to engage in unethical tactics such as charging high interest rates and “processing fees” equal to half the loan amount.
- Some lending app users reported being harassed and threatened for failing to repay their loans on time. In some cases, the app operators threatened to contact the user’s contacts if payment was not completed by the deadline. According to one user, the app company threatened to produce and send false photographs of her to her contacts.
- These loan apps were removed from the App Store, according to Apple, because they broke the norms and standards of the Apple Developer Program License Agreement. These apps were discovered to be falsely claiming financial institution connections.
Issue of Fake loan apps on the App Store
- “The App Store and our App Review Guidelines are designed to ensure we provide our users with the safest experience possible,” Apple explained. “We do not tolerate fraudulent activity on the App Store and have strict rules against apps and developers who attempt to game the system.”
- In 2022, Apple blocked nearly $2 billion in fraudulent App Store transactions. It also rejected nearly 1.7 million app submissions that did not meet Apple’s quality and safety criteria and terminated 428,000 developer accounts over suspected fraudulent activity.
- The scammers also used heinous tactics to force borrowers to pay. According to reports, the scammers behind the apps gained access to the user’s contact list as well as their images. They would morph the images and then frighten the individual by threatening to share fake nude photos with their whole contact list.
Dangerous financial fraud apps have surfaced on the App Store
- TechCrunch acquired a user review from one of these apps. “I borrowed an amount in a helpless situation, and a day before the repayment due date, I got some messages with my picture and my contacts in my phone saying that repay your loan or they will inform our contacts that you are not paying the loan,” it said.
- Sandhya Ramesh, a journalist from The Print, recently tweeted a screenshot of a direct message she got. A victim’s friend told a similar story in the message.
- TechCrunch contacted Apple, which confirmed that the apps had been removed from the App Store for breaking the Apple Developer Program License Agreement and guidelines.
Conclusion
Recently, users have reported that quick-loan applications such as White Kash, Pocket Kash, and Golden Kash appeared on the Top Finance apps chart. These apps demand unauthorised, intrusive access to users’ contact lists and media. According to hundreds of user reviews, the apps charged exorbitantly high, pointless fees and used unscrupulous tactics such as demanding “processing fees” equal to half the loan amount and charging high interest rates. Users were also harassed and threatened over repayment: if payments were not made by the due date, the lending apps threatened to notify users’ contacts, and according to one user, the app provider even threatened to generate phoney nude images of her and send them to her contacts.

Introduction
Deepfake technology, which combines the words "deep learning" and "fake," uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce computer-generated content that is remarkably lifelike, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds; through such misuse of advanced technologies such as AI, deepfakes, and voice clones, new cyber threats have emerged.
India: A Top Target for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant players in the Asia-Pacific identity fraud scene, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most impacted by the use of deepfake technology. The report also finds that deepfake technology is being used in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud poses a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and are now ingrained in the very fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes are built with deep learning techniques, especially Generative Adversarial Networks, which examine large datasets, frequently pictures or videos of a target person. By mimicking and learning from gestures, speech patterns, and facial expressions, these algorithms extract the information needed to generate material that blends seamlessly into the target context. Misuse of this technology, including the dissemination of false information, is a worry, and sophisticated detection techniques are becoming ever more necessary to separate real content from manipulated content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks engage in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated material is. This continuous create-and-evaluate loop leads to a persistent improvement in the deepfake's quality over time, as the discriminator adjusts to become more perceptive and the generator adapts to produce more and more convincing content.
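To make the generator-versus-discriminator contest concrete, here is a toy GAN in PyTorch that learns to imitate a simple one-dimensional Gaussian distribution instead of faces. It is a minimal illustrative sketch of the adversarial training loop described above, not production deepfake code; real systems apply the same pattern at vastly larger scale.

```python
# Toy GAN illustrating the generator/discriminator competition.
# The "real" data is sampled from N(4, 1.25); the generator learns to mimic it.
# Requires: pip install torch
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 1.25 + 4.0   # stand-in for real samples
noise = lambda n: torch.randn(n, 8)                     # random input to the generator

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    # 1) Train the discriminator to tell real samples from fakes.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real_data(64)), ones) + \
             loss_fn(D(G(noise(64)).detach()), zeros)
    d_loss.backward()
    opt_d.step()
    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(noise(64))), ones)   # generator wants fakes judged "real"
    g_loss.backward()
    opt_g.step()

fakes = G(noise(1000))
print(f"fake mean={fakes.mean():.2f}, std={fakes.std():.2f} (target: 4.00, 1.25)")
```

The same competitive loop, scaled up to convolutional networks and millions of face images, is what makes deepfake video and audio so convincing.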
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote ethical use, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they have the potential to spread instability and make it difficult for the public to understand the true nature of politics. In the entertainment industry, deepfake technology can generate totally new characters or bring stars back to life for posthumous roles. As it gets harder and harder to tell fake content from authentic content, it becomes simpler for attackers to trick people and businesses.
Ongoing Deepfake Attacks in India
Deepfake videos continue to target popular celebrities, and Priyanka Chopra is the most recent victim of this unsettling trend. Priyanka's deepfake adopts a different strategy from earlier examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading clip keeps her appearance the same but modifies her voice, replacing real interview quotes with made-up commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her annual income, highlighting the worrying development of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A public interest litigation (PIL) was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be required to label AI-generated content as such and be prevented from producing unlawful content. A division bench highlighted how complicated the problem is and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that new laws and guidelines would be implemented by the government to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon, we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should place a high priority on developing critical thinking abilities, carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image search, keeping up with the latest developments in deepfake trends, and rigorously fact-checking against reputable media sources (a small frame-comparison example follows below). Important actions to improve resistance against deepfake threats include putting in place strong security policies, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation. By combining these tactics and adjusting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
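As one concrete illustration of the reverse-image-search idea, a frame grabbed from a suspect clip can be compared against a known original using perceptual hashing, which tolerates re-encoding, resizing, and mild compression. This is a minimal sketch assuming the Pillow and ImageHash packages; the file names are hypothetical.

```python
# Compare a frame from a suspect video against a known original image using
# perceptual hashing. Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def frames_match(suspect_path: str, original_path: str, threshold: int = 8) -> bool:
    """Return True if the two images are perceptually similar.

    phash survives re-encoding and resizing, so a re-uploaded frame from the
    same footage yields a small Hamming distance to the original."""
    h_suspect = imagehash.phash(Image.open(suspect_path))
    h_original = imagehash.phash(Image.open(original_path))
    distance = h_suspect - h_original   # Hamming distance between 64-bit hashes
    return distance <= threshold

# Hypothetical file names: a keyframe from a viral clip vs. a frame from the
# source video located via reverse search.
print(frames_match("viral_frame.png", "original_frame.png"))
```

If the visuals match while the audio differs, that is consistent with audio-only manipulation of the kind described in the fact-check above.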
Conclusion
Advanced artificial-intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Misuse of deepfakes presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India, including the recent deepfake video involving Priyanka Chopra. Countering this danger requires developing critical thinking abilities, using detection strategies such as analyzing audio quality and facial expressions, and keeping up with current trends. A thorough strategy that incorporates fact-checking, preventative tactics, and awareness-raising, together with strong security policies, cutting-edge detection technologies, ethical AI development, and candid cooperation, is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/