#FactCheck - Afghan Cricket Team's Chant Misrepresented in Viral Video
Executive Summary:
Footage of the Afghanistan cricket team purportedly singing ‘Vande Mataram’ after India’s triumph in the ICC T20 World Cup 2024 surfaced online. The CyberPeace Research Team carried out a thorough investigation to uncover the truth behind the viral video. The original clip was posted on X by Afghan cricketer Mohammad Nabi on October 23, 2023, and shows the Afghan players chanting ‘Allah-hu Akbar’ after winning an ODI World Cup match against Pakistan. This debunks the viral video’s assertion that the players were chanting ‘Vande Mataram’.

Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.

Fact Check:
Upon receiving the posts, we analyzed the video and found inconsistencies, such as mismatched lip sync.
We then ran the clip through an AI audio detection tool named “True Media”, which found the audio to be 95% AI-generated, deepening our suspicion about the video’s authenticity.


For further verification, we divided the video into keyframes and reverse-searched one of them to find credible sources. This led us to the X account of Afghan cricketer Mohammad Nabi, where he had uploaded the same video on October 23, 2023 with the caption, “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people”.

The audio in the original clip differs from the viral video: the Afghan players can be heard chanting “Allah hu Akbar” after their victory against Pakistan. They were not chanting “Vande Mataram” after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and does not represent the actual context. We have previously debunked similar AI voice alteration videos. Netizens must exercise caution before believing such misleading information.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The video was altered from the original through audio manipulation. The original video, of Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar", was posted on the official X account of Mohammad Nabi, an Afghan cricketer. Thus, the information is fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading

Executive Summary:
A viral image circulating on social media claims to show a natural optical illusion in Epirus, Greece. However, fact-checking revealed that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team traced it through reverse image search and analysis with an AI content detection tool named Hive AI Detection, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon in Epirus, Greece, is false, as no evidence of such optical illusions in the region was found.

Claims:
The viral image circulating on social media depicts a natural optical illusion in Epirus, Greece. Users are sharing it on X (formerly known as Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.

Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic media check: the Hive AI Detection tool found the image to be 100% AI-generated. We then performed a reverse image search to trace the source, which led to similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator posts visuals in the same style.

We searched his account for the viral image and confirmed that it was created by him.

The photo was posted on December 10, 2023, and he stated that the image was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows a natural optical illusion in Epirus, Greece is misleading.
Conclusion:
The image claiming to show a natural optical illusion in Epirus, Greece, is not genuine. It is an artwork created by Hamidreza Edalatnia, an artist from Iran, using the artificial intelligence tool Stable Diffusion. Hence, the claim is false.

Artificial intelligence is revolutionizing industries from healthcare to finance, influencing decisions that touch the lives of millions daily. This power, however, carries a hidden danger: AI systems can produce unfair results, reinforce social inequalities, and erode trust in technology. One of the main causes is training data bias, which appears when the examples on which a model is trained are unrepresentative or skewed. Dealing with it successfully requires a combination of statistical methods, fairness-conscious algorithmic design, and robust governance across the AI lifecycle. This article discusses the origins of bias, ways to reduce it, and the distinctive role of fairness-aware algorithms.
Why Bias in Training Data Matters
Bias in AI occurs when models mirror and reproduce patterns of inequality present in the training data. When a dataset under-represents a demographic group or encodes historical prejudice, the model learns to make decisions that harm that group. The practical implications are real: biased AI can discriminate in hiring, lending, criminal risk assessment, and many other spheres of social life, compromising justice and equity. These problems are not purely technical; they also demand ethical principles and a system of governance (E&ICTA).
Bias is not uniform. It may stem from the data itself, from algorithm design, or even from a lack of diversity among developers. Data bias occurs when data does not represent the real world. Algorithmic bias may arise when design decisions inadvertently give one group an unfair advantage over another. Human bias can affect both data collection and the interpretation of model outputs. (MDPI)
Statistical Principles for Reducing Training Data Bias
Statistical principles are at the core of bias mitigation, redefining how data and models interact. These approaches focus on preparing the data, adjusting the training process, and correcting model outputs so that fairness becomes a quantifiable goal.
Balancing Data Through Re-Sampling and Re-Weighting
One approach is to ensure fair representation of all relevant groups in the dataset. This can be achieved by oversampling under-represented groups or undersampling over-represented ones. Oversampling duplicates or synthesizes minority-group examples, whereas re-weighting assigns larger weights to under-represented data points during training. These methods reduce the model's tendency to fit only the most salient patterns and improve coverage of vulnerable groups. (GeeksforGeeks)
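The re-weighting idea can be sketched in plain Python: assign each example a weight inversely proportional to its group's frequency, so every group contributes equally to a weighted training objective. The group labels and dataset below are purely illustrative.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each example a weight inversely proportional to the
    frequency of its group, so every group contributes equally to a
    weighted training loss. Weights average to 1 across the dataset."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Weight for an example in group g: n / (k * count[g]).
    return [n / (k * counts[g]) for g in groups]

# Hypothetical dataset: group "A" is over-represented 3:1.
groups = ["A", "A", "A", "B"]
weights = inverse_frequency_weights(groups)
# Each group's total weight is now equal: 3 * (2/3) = 1 * 2 = 2.0
```

These weights can then be passed to any learner that accepts per-sample weights (for example, a `sample_weight` argument).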
Feature Engineering and Data Transformation
Another statistical technique is to transform data features so that sensitive attributes have less influence on the outcome. For example, fair representation learning adjusts the data representation to discourage bias before the model is trained. The disparate impact remover technique adjusts feature values so that the influence of sensitive attributes is reduced during learning. (GeeksforGeeks)
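One simple flavor of this idea, sketched below with NumPy on synthetic data, is to "residualize" a feature: regress it on the sensitive attribute and keep only the residual, so the transformed feature is linearly uncorrelated with that attribute. This is a simplified stand-in for full disparate impact removal, not the technique's canonical implementation.

```python
import numpy as np

def residualize(feature, sensitive):
    """Remove the linear component of `feature` explained by the
    sensitive attribute, returning a feature that is linearly
    uncorrelated with it."""
    s = sensitive - sensitive.mean()
    f = feature - feature.mean()
    beta = (s @ f) / (s @ s)   # least-squares slope of feature on sensitive
    return f - beta * s        # residual feature

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1000).astype(float)
# Synthetic feature that leaks the sensitive attribute.
feature = 2.0 * sensitive + rng.normal(size=1000)

cleaned = residualize(feature, sensitive)
# The cleaned feature's correlation with the sensitive attribute is ~0.
```

Note that this removes only linear dependence; nonlinear leakage of the sensitive attribute requires the richer methods named above.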
Measuring Fairness With Metrics
Statistical fairness metrics quantify how a model's behavior differs across groups. Common examples include demographic parity (equal positive-prediction rates across groups), equalized odds (equal true-positive and false-positive rates), and predictive parity (equal precision across groups).
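As one concrete example, the demographic parity difference compares positive-prediction rates between two groups; a value near zero indicates parity. The predictions and group labels below are hypothetical.

```python
def demographic_parity_difference(preds, groups):
    """Absolute difference in positive-prediction rates between the
    two groups present in `groups` (binary 0/1 predictions assumed)."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A positive rate is 0.75, group B is 0.25, so the gap is 0.5.
print(demographic_parity_difference(preds, groups))  # 0.5
```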
Fairness-Aware Algorithms Explained
Fairness-aware algorithms do not simply detect bias; they incorporate fairness goals into model construction and operate at three stages: pre-processing, in-processing, and post-processing.
Pre-Processing Techniques
Fairness-aware pre-processing addresses bias before the model consumes the data. Common approaches include:
- Rebalancing training data through sampling and re-weighting to address sample imbalances.
- Data augmentation to generate examples for underrepresented groups.
- Feature transformation that removes or downplays the impact of sensitive attributes before training begins. (IJMRSET)
These methods help ensure the model is trained on more balanced data and reduce the chance of bias carrying over from historical data.
In-Processing Techniques
In-processing techniques alter the learning algorithm itself. These include:
- Fairness constraints that penalize the model for making biased predictions during training.
- Adversarial debiasing, where a second model checks that sensitive attributes cannot be predicted from the learned representations.
- Fair representation learning that modifies internal model representations to remove information about sensitive attributes.
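A minimal sketch of the fairness-constraint idea, using NumPy and synthetic data: logistic regression trained by gradient descent, with an added penalty on the squared gap in mean predicted scores between two groups. The data, penalty weight, and function names are all illustrative, not a production recipe.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=5.0, lr=0.1, epochs=500):
    """Logistic regression minimizing:
    log-loss + lam * (mean score of group 0 - mean score of group 1)^2."""
    w = np.zeros(X.shape[1])
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_ll = X.T @ (p - y) / len(y)        # log-loss gradient
        gap = p[g0].mean() - p[g1].mean()       # demographic-parity gap
        dp = p * (1 - p)                        # sigmoid derivative
        grad_gap = (X[g0] * dp[g0, None]).mean(0) - (X[g1] * dp[g1, None]).mean(0)
        w -= lr * (grad_ll + 2 * lam * gap * grad_gap)
    return w

rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(size=n) + group, np.ones(n)])  # feature leaks group
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

w_fair = train_fair_logreg(X, y, group)          # penalized model
w_base = train_fair_logreg(X, y, group, lam=0.0) # unpenalized baseline
# The penalized model's score gap between groups is smaller than the baseline's.
```

Raising `lam` trades predictive fit for a smaller between-group gap, which is exactly the fairness/accuracy tension discussed later.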
Post-Processing Techniques
Fairness can also be enhanced after training by adjusting model outputs. These strategies include:
- Adjusting decision thresholds per group to satisfy fairness conditions such as equalized odds.
- Calibration techniques that make estimated probabilities faithful indicators of actual outcome rates within each group. (GeeksforGeeks)
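The threshold-adjustment idea can be sketched as follows: given scores and group labels (synthetic here), pick a per-group threshold so each group's positive-prediction rate matches a common target. Fully equalizing odds would additionally require ground-truth labels; this simplified version equalizes selection rates only.

```python
import numpy as np

def per_group_thresholds(scores, groups, target_rate=0.3):
    """Choose a threshold for each group so that roughly `target_rate`
    of that group's scores fall at or above it."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

rng = np.random.default_rng(2)
groups = rng.integers(0, 2, 1000)
# Group 1 systematically receives higher raw scores.
scores = rng.normal(loc=groups * 0.8, scale=1.0)

th = per_group_thresholds(scores, groups, target_rate=0.3)
preds = scores >= np.array([th[g] for g in groups])
# Positive-prediction rates are now ~0.30 in both groups.
```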
Challenges
Mitigating bias is complex. Statistical bias minimization can come at the cost of model accuracy, creating a tension between predictive performance and fairness. Defining fairness is itself difficult: different applications call for different criteria, and those criteria can conflict with one another. (MDPI)
Obtaining diverse and representative data is also challenging, owing to privacy concerns, incomplete records, and limited resources. Continuous auditing and reporting are needed to keep mitigation processes up to date as models are continually retrained. (E&ICTA)
Why Fairness-Aware Development Matters
The consequences of AI systems treating some groups unfairly are far-reaching. Discriminatory recruitment software can entrench workplace inequality. Biased credit scoring can deny deserving people opportunities. Biased medical predictions can lead to flawed allocation of medical resources. In each case, bias undermines credibility and clouds the broader promise of AI. (E&ICTA)
Fairness-aware algorithms and statistical mitigation strategies provide a way to build AI that is not only powerful but also fair and trustworthy. They recognize that AI systems are social tools whose effects extend across society. Responsible development requires sustained fairness measurement, model adjustment, and the preservation of human oversight.
Conclusion
AI bias is not a technical malfunction. It mirrors real-world disparities embedded in data and amplified by models. Reducing training data bias requires statistical rigor, careful algorithm design, and a willingness to navigate the trade-offs between fairness and performance. Fairness-conscious algorithms, whether applied in pre-processing, in-processing, or post-processing, help deliver more equitable results. As AI takes part in ever more crucial decisions, fairness must be considered from the outset so that systems serve the population responsibly and equitably.
References
- Understanding Bias in Artificial Intelligence: Challenges, Impacts, and Mitigation Strategies: E&ICTA, IITK
- Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies: JRPS Shodh Sagar
- Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies: MDPI
- Ensuring Fairness in Machine Learning Algorithms: GeeksforGeeks
- Bias and Fairness in Machine Learning Models: A Critical Examination of Ethical Implications: IJMRSET
- Bias in AI Models: Origins, Impact, and Mitigation Strategies: Preprints
- Bias in Artificial Intelligence and Mitigation Strategies: TCS
- Survey on Machine Learning Biases and Mitigation Techniques: MDPI

Introduction
The ramifications of cybercrime can be far-reaching. Depending on the scale of an attack, even entire countries can be affected if their critical infrastructure is connected to the internet. The vast majority of security breaches start within the perimeter, and most internet attacks are socially engineered: unwittingly trusting an email or web request from an unknown sender creates danger for any organisation that depends on the Internet for its business functions. In this ever-evolving digital landscape, yet another group has emerged from its darkest corners, targeting a bastion of British and global heritage: the British Library, a treasure trove of around 14 million volumes and ancient manuscripts. The group identifies itself as Rhysida. Its bold manoeuvre, executed with the stealth of seasoned cyber brigands, has cast a shadow as long and dark as those found in the Gothic novels resting on the library's shelves. The late-October cyber-attack has thrust the British Library into an unnerving state of chaos, a situation more commonly associated with dystopian fiction than with the everyday reality of a revered institution.
The Modus Operandi
The gang uses the new Rhysida ransomware to compromise Virtual Private Networks, which library staff typically use to access institutional systems remotely. The ransomware presents itself as an innocuous decoy file, delivered in the familiar fashion of a phishing email, tricking the victim into downloading it onto the host system. Once inside, the malware lies dormant and lurks in the system for a period of time. The new strain has significantly reduced its dwell time from four days to less than 24 hours, enabling it to slip between periodic system checks and avoid detection.
Implications of Cyber Attack
The implications of the cyber-attack have been sobering and multifaceted. The library's systems, which serve as the lifeline for countless scholars, students, and the reading public, were left in disarray, unsettlingly reminiscent of a grand mansion invaded by incorporeal thieves. The violation has reverberated through the digital corridors of this once-impenetrable fortress, and the virtual aftershocks are ongoing. Patrons, who traverse a diverse spectrum of society but share a common reverence for knowledge, received unsettling news: the possibility that their private data had been compromised. A sanctity breached, revealing yet again that even the most hallowed spaces are not impervious to modern threats.
It is with no small sense of irony that we consider the nature of the stolen goods—names, email addresses, and the like. It is not the physical tomes of inestimable value that have been ransacked, but rather the digital footprints of those who sought the wisdom within the library's walls. This virtual Pandora's Box, now unleashed onto the dark web, has been tagged with a monetary value. Rhysida has set the ominous asking price of a staggering $740,000 worth of cryptocurrency for the compromised data, flaunting their theft with a hubris that chills the spine.
Yet within this convoluted narrative unfolds a subplot that offers some measure of consolation: the library reports that payment information does not appear to have been included in this digital heist, a glint of reassurance amidst the prevailing uncertainty. This digital storm has had seismic repercussions: the library's website and interconnected systems have been besieged, and access to its vast resources significantly hampered. The distressing notice of a 'major technology outage' transformed the digital facade from a portal of endless learning into a bulletin of sorrow, projecting its sombre message across virtual space.
The Impact
The impact of this violation will resonate far beyond the mere disruption of services; it signals the dawn of an era where venerable institutions of culture and learning must navigate the depths of cybersecurity. As the library grapples with the breach, a new front has opened in the age-old battle for the preservation of knowledge. The continuity of such an institution in a digitised world will be tested, and the outcome will define the future of digital heritage management. As the institution rallies, led by Roly Keating, its Chief Executive, one observes not a defeatist retreat, but a stoic, strategic regrouping. Gratitude is extended to patrons and partners whose patience has become as vital a resource as the knowledge the library preserves. The reassurances given, while acknowledging the laborious task ahead, signal not just an intention to repair but to fortify, to adapt, to evolve amidst adversity.
This wretched turn of events serves as a portentous reminder that threats to our most sacred spaces have transformed. The digital revolution has indeed democratised knowledge but has also exposed it to neoteric threats. The British Library, a repository of the past, must now confront a distinctly modern adversary. It requires us to posit whether our contemporary guardians of history are equipped to combat those who wield malicious code as their weapons of choice.
Best Practices for Cyber Resilience
It is crucial to keep abreast of recent developments and emerging trends in cyberspace. Libraries in the digital age must protect their patrons' data by applying comprehensive security protocols that safeguard the integrity, availability, and confidentiality of sensitive information. A few measures libraries can apply include:
- Secured Wi-Fi networks: Libraries offering public Wi-Fi must secure it with strong encryption protocols such as WPA3. Libraries should also establish separate networks for internal operations, keeping staff and public traffic apart to protect sensitive information.
- Staff Training Programs: To avoid human error, comprehensive training programs should be conducted regularly to raise awareness of cyber threats among staff and educate them on best practices of cyber hygiene and data security.
- Data Backups and Recovery Protocols: Patrons' sensitive data should be backed up regularly. Verifying the integrity of backups is crucial, and they should be stored securely in a dedicated repository to ensure full recovery in the event of a breach.
- Strong Authentication: Strong authentication is crucial to enhancing library defences against cyber threats. Staff and patrons should be educated on strong password usage, and Multi-Factor Authentication should be implemented to add an extra layer of security.
Conclusion
Finally, whatever the future holds, what remains unassailable is the cultural edifice that is the British Library. Its trials and tribulations, like those of the volumes it safeguards, become part of a larger narrative of endurance and defiance. In the canon of history, filled with conflicts and resolutions, the library, like the lighter anecdotes and tragic tales it harbours, will decidedly hold its place. And perhaps we might glean from the sentiment voiced by Milton an assurance that the path from turmoil to enlightenment, though fraught with strenuous challenges, is paved with lessons learned and resilience rediscovered. Cyberspace is constantly evolving, so it is in our best interest to keep abreast of developments in this digital sphere; most threats can be avoided if we remain vigilant.
References: