#FactCheck - Viral Video of Argentina Football Team Dancing to Bhojpuri Song is Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song is being circulated on social media. After analysing its authenticity, the CyberPeace Research Team discovered that the video had been altered and its audio edited. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022, showing Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the viral video, the song in the original footage is not in Bhojpuri. The viral clip was cropped from part of Aguero’s upload, and its audio was replaced with a Bhojpuri song. It can therefore be concluded that the claim of the Argentina team dancing to a Bhojpuri song is misleading.

Claims:
A video of the Argentina football team dancing to a Bhojpuri song after victory.


Fact Check:
On receiving these posts, we split the video into frames and performed a reverse image search on one of them, which led us to a video uploaded to the SKY SPORTS website on 19 December 2022.
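The frame-splitting step can be sketched as follows. This is a minimal illustration, not the team's actual tooling: the sampling strategy and the ffmpeg command in the comments are assumptions, and the helper only decides *which* timestamps to probe before handing them to an extraction tool.

```python
# Sketch of the frame-sampling step used before a reverse image search.
# The interval choice and file names are illustrative assumptions.

def sample_timestamps(duration_s: float, n_frames: int) -> list[float]:
    """Return n_frames evenly spaced timestamps across a clip."""
    if n_frames < 1 or duration_s <= 0:
        return []
    step = duration_s / n_frames
    # Sample the midpoint of each segment to avoid black lead-in/out frames.
    return [round(step * (i + 0.5), 2) for i in range(n_frames)]

# Each timestamp can then be handed to a tool such as ffmpeg, e.g.:
#   ffmpeg -ss <t> -i viral_clip.mp4 -frames:v 1 frame_<i>.jpg
# and the extracted frames uploaded to a reverse image search engine.

print(sample_timestamps(30.0, 5))  # five probe frames from a 30-second clip
```

Sampling a handful of well-spaced frames is usually enough, since reverse image search engines match on whole-frame similarity rather than exact pixels.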

We found that this is the same clip as in the viral video, though the celebration differs. Upon further analysis, we also found a live video uploaded by Argentine footballer Sergio Leonel Aguero to his Instagram account on 19 December 2022. The viral video is a clip taken from this live video, and the song playing in it is not a Bhojpuri song.

This proves that the claim circulating on social media that the Argentina football team danced to a Bhojpuri song is false and misleading. People should always verify the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip of the team celebrating their 2022 FIFA World Cup victory, with the original audio replaced by a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading

Introduction
Betting has long been associated with sporting activities and has found a growing presence in online gaming and esports globally. As the esports industry continues to expand, Statista projects that it will reach a market value of $5.9 billion by 2029. Associated betting markets have also seen significant growth: in 2024, this segment accounted for an estimated $2.5 billion globally. While such engagement avenues are popular among international audiences, they also raise concerns around regulation, integrity, and user protection. As esports builds its credibility and reach, especially among younger demographics, these aspects become increasingly important to address in policy and practice.
What Does Esports Betting Involve?
Much like traditional sports, esports engagement in some regions includes the practice of wagering on teams, players, or match outcomes. But it is inherently more complex. Accurately valuing odds in online gaming and esports is complicated by frequently updated game titles, changing team rosters, and shifting game mechanics (known as ‘metas’, the most effective strategies at a given time). Bets can be placed using real money, virtual items like skins (cosmetic customisations for characters and weapons), or, increasingly, cryptocurrency.
Esports and Wagering: Emerging Issues and Implications
- Legal Grey Areas: While countries like South Korea and some US states have dedicated regulations for esports betting and licensed bookmaking, most do not. This creates legal grey areas that allow betting service providers to access unregulated markets, increasing the risk of fraud, money laundering, and exploitation of bettors in those regions.
- The Skill v/s Chance Dilemma: Most gambling laws across the world regulate betting based on the distinction between ‘games of skill’ and ‘games of chance’. Betting on the latter is typically illegal, since winning depends on chance. But the definitions of ‘skill’ and ‘chance’ may vary by jurisdiction. Also, esports betting often blurs into gambling. Outcomes may depend on player skill, but in-game economies like skin betting and unpredictable gameplay introduce elements of chance, complicating regulation and making enforcement difficult.
- Underage Gambling and Addiction Risks: Players are often minors and are exposed to the gambling ecosystem due to gamified betting through reward systems like loot boxes. These often mimic the mechanics of betting, normalising gambling behaviours among young users before they fully understand the risks. This can lead to the development of addictive behaviours.
- Match-Fixing and Loss of Integrity: Esports are particularly susceptible to match-fixing because of weak regulation, financial pressures, and the anonymity of online betting. Instances like the Dota 2 Southeast Asia Scandals (2023) and Valorant match-fixing in North America (2021) can jeopardise audience trust and sponsorships. This affects the trustworthiness of minor tournaments, where talent is discovered.
- Cybersecurity and Data Risks: Esports betting apps collect sensitive user data, making them an attractive target for cybercrime. Bettors are susceptible to identity theft, financial fraud, and data breaches, especially on unlicensed platforms.
Way Forward
To strengthen trust, ensure user safety, and protect privacy within the esports ecosystem, responsible management of betting practices can be achieved through targeted interventions focused on:
- National-Level Regulations: Countries like India, with its large online gaming and esports market, will need to create a regulatory authority along the lines of the UK’s Gambling Commission and update their gambling laws to protect consumers.
- Protection of Minors: Setting guardrails such as age verification, responsible advertising, anti-fraud mechanisms, self-exclusion tools, and spending caps can help to keep a check on gambling by minors.
- Harmonizing Global Standards: Since esports is inherently global, aligning core regulatory principles across jurisdictions (such as through multi-country agreements or voluntary industry codes of conduct) can help create consistency while avoiding overregulation.
- Co-Regulation: Governments, esports organisers, betting platforms, and player associations should work closely to design effective, well-informed policies. This can help uphold the interests of all stakeholders in the industry.
Conclusion
Betting in esports is inevitable. But the industry faces a double dilemma: overregulating on the one hand, or letting gambling go unchecked on the other. Both can be detrimental to its growth. This is why industry actors like policymakers, platforms, and organisers need to work together to harmonise legal inconsistencies, protect vulnerable users, and invest in data security. Forming industry-wide ethics boards, promoting regional regulatory dialogue, and instituting transparency measures for betting operators would be steps in this direction, helping ensure that esports evolves into a mature, trusted global industry.
Key Points: Data Collection, Protecting Children, and Awareness
Introduction
Technology has evolved drastically over time, reshaping how people live. For even the smallest tasks, humans have become reliant on the computers they have built. The growing use of AI has its downsides: children today are less inclined to think and write sensibly on their own, and are more drawn to television, video games, and mobile games. School kids use AI simply to complete their homework. Is that a good sign for the country’s future? Studies suggest that tools like ChatGPT threaten a child’s potential to be creative and produce original content requiring a human writer’s insight, and that such tools can erode students’ unique writing styles and artistic voices.
Do these browsers and search engines use your search history against you? How do even non-users end up losing their private information to such services?
Are governments taking any safety measures to protect their citizens’ rights?
Some of us might wonder how these two worlds merge. Are they a boon or a curse?
Here is the news that has been flooding the internet worldwide:
“Italian Agency Imposes Strict Measures on OpenAI’s ChatGPT”
Italy has become the first Western European country to take serious measures against OpenAI’s ChatGPT. The Italian data protection agency, Garante, has set mandates on ChatGPT, raising concerns about privacy violations and the inability to verify the age of users. Garante has also claimed that the AI chatbot violates the EU’s General Data Protection Regulation (GDPR). In a press release, Garante demanded that OpenAI take the necessary actions.
To begin with, Garante has demanded that OpenAI increase ChatGPT’s transparency and publish a comprehensive statement about its data processing practices. OpenAI must clarify whether it obtains user consent for processing users’ data to train its AI model, or relies on another legitimate legal basis, and it must maintain the privacy of users’ data.
In addition, OpenAI should take measures to prevent minors from accessing the technology, since exposure at such an early stage of life could hinder their development. ChatGPT should add an age verification system to prevent minors from accessing explicit content. Moreover, Garante insists that OpenAI spread awareness among its users about their data being processed to train its AI model. Garante has set a deadline of 30 April for OpenAI to complete these tasks; until then, the service is banned in the country.
Child Safety While Using ChatGPT
The Italian agency demands an age limit and an age verification method to exclude users under the age of 13, with parental consent required for users between 13 and 18. This is a matter of safety: children might be exposed to explicit or illegitimate content that is inappropriate for their age, and the AI chatbot has no sense of which content is suitable for an underage audience. Through tools like chatbots, ready-made information is already available to young students, which can hinder their potential and ability to create original, creative content and threatens their motivation to write. Moreover, when students no longer take the time to think and analyse, they grow lethargic, and the practice they need fades away.
Collection of User’s Data
According to reports on the company’s privacy policy, OpenAI’s ChatGPT collects an assortment of additional data. When a free-trial session starts, it asks users to log in or sign up, for example through a Gmail account. It collects the user’s IP address, browser type, and the data the user provides as input; in other words, it collects data on the user’s interaction with the website, including session time and cookies gathered through third parties, which may be sold on to unspecified third parties.
Conclusion
The AI chatbot ChatGPT is an advanced tool that makes work a little easier, but anyone using such tools must stay aware of the information they are being asked to provide. These AI bots are trained to assist humans, yet some people provide sensitive information unknowingly, and young minds can be exposed to explicit content. Such bots need age limitations. Innovations like this will keep arriving, but it is each individual’s responsibility to decide what may be accessed on their connected devices. The Italian agency, for its part, has taken preventive measures to keep its users’ data safe, having recognised the adverse effects such chatbots can have on young minds.
Introduction
A Pew Research Center survey of 1,453 US teens aged 13-17, conducted in September 2023, found that a majority of this age group uses TikTok (63%), Snapchat (60%), and Instagram (59%). In India, 13-19 year-olds make up 31% of social media users, according to a 2021 report by Statista. This widespread usage has been a leading cause of young users inadvertently or deliberately accessing adult content on social media platforms.
Brief Analysis of Meta’s Proposed AI Age Classifier
It can be seen as a step towards safer, moderated content for teen users, since teens sometimes lack the cognitive maturity to judge what content is appropriate to share and consume on these platforms at their age. They also need to understand platform policies, and to understand that nothing can ever be completely erased from the internet.
Unrestricted access to social media exposes teens to potentially harmful or inappropriate online content, raising concerns about their safety and mental well-being. Meta's recent measures aim to address this; however, striking a balance between engagement, protection, and privacy remains essential.
The AI-based Age Classifier proposed by Meta classifies users based on their age and places them in the ‘Teen Account’ category, which has built-in limits on who can contact them and what content they see, along with more ways to connect and explore their interests. According to Meta, teens under 16 years of age will need parental permission to change these settings.
Meta's Proposed Solution: AI-Powered Age Classifier
This tool uses Artificial Intelligence (AI) to analyze users’ online behaviours and other profile information to estimate their age. It analyses different factors such as who follows the user, what kind of content they interact with, and even comments like birthday posts from friends. If the classifier detects that a user is likely under 18 years old, it will automatically switch them to a “Teen Account.” These accounts have more restricted privacy settings, such as limiting who can message the user and filtering the type of content they can see.
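Meta has not published the classifier's internals, so as a purely hypothetical illustration, the signal-combination idea described above might look like the sketch below. Every feature name, weight, and threshold here is an assumption; a production system would use a trained model over far richer signals.

```python
# Hypothetical sketch of a signal-based age estimator. Meta's actual
# classifier is proprietary; features, weights, and threshold are invented
# purely to illustrate "combine profile signals into an age judgement".

def estimate_is_minor(signals: dict) -> bool:
    """Combine weighted profile signals into a crude under-18 decision."""
    score = 0.0
    # Share of the user's followers already classified as teens (assumed signal)
    score += 2.0 * signals.get("teen_follower_ratio", 0.0)
    # Fraction of interactions with teen-oriented content (assumed signal)
    score += 1.5 * signals.get("teen_content_ratio", 0.0)
    # Age extracted from friends' birthday comments, if any (assumed signal)
    stated_age = signals.get("birthday_comment_age")
    if stated_age is not None and stated_age < 18:
        score += 3.0
    return score >= 2.5  # illustrative threshold

# A profile with mostly teen followers and a "happy 15th!" birthday comment:
print(estimate_is_minor({
    "teen_follower_ratio": 0.7,
    "teen_content_ratio": 0.4,
    "birthday_comment_age": 15,
}))  # True
```

The design point the sketch captures is that no single signal is decisive: a flagged account results from several weak indicators agreeing, which is also why misclassification and appeals handling matter for such systems.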
The adult classifier is anticipated to be deployed by next year and will start scanning for users who may have lied about their age. All users found to be under 18 will be placed in the teen account category, but 16-17 year olds will be able to adjust these settings if they want more flexibility, while younger teens will need parental permission. The effort is part of a broader strategy to protect teens from potentially harmful content on social media. This is especially important today, as invasions of privacy, particularly those affecting minors, can be penalised under legal instruments like the GDPR, the DPDP Act, and COPPA.
Policy Implications and Compliances
Meta's AI Age Classifier addresses growing concerns over teen safety on social media by categorising users based on age, restricting minors' access to adult content, and enforcing parental controls. However, its reliance on behavioural tracking could impact the online privacy of teen users. Hence, Meta's approach needs to align with applicable jurisdictional laws: in India, the recently enacted DPDP Act, 2023 prohibits behavioural tracking of and targeted advertising to children. Accuracy and privacy are the two main concerns Meta should anticipate when it rolls out the classifier.
Meta emphasises transparency to build user trust, and customisable parental controls empower families to manage teens' online experiences. While this initiative reflects Meta's commitment to creating a safer, regulated digital space for young users worldwide, the company must also align its policies with regional policy and legal standards. Meta’s proposed AI Age Classifier aims to protect teens from adult content, reassure parents by allowing them to curate acceptable content, and enhance platform integrity by ensuring a safer environment for teen users on Instagram.
Conclusion
Meta’s AI Age Classifier, while promising to enhance teen safety through restrictions and parental controls on accounts categorised as ‘teen accounts’, must also align with global regulations like the GDPR and, with reference to India, the DPDP Act. The tool offers reassurance to parents and aims to foster a safer social media environment for teens. To support accurate age estimation and transparency, policy should focus on refining AI methods to minimise errors and on ensuring clear disclosures about data handling. Collaborative international standards will be essential as privacy laws evolve. Meta’s initiative is intended to prioritise youth protection and build public trust in AI-driven moderation across social platforms, while balancing the online privacy of users as these advanced technical measures are deployed.
References
- https://familycenter.meta.com/in/our-products/instagram/
- https://www.indiatoday.in/technology/news/story/instagram-will-now-take-help-of-ai-to-check-if-kids-are-lying-about-their-age-on-app-2628464-2024-11-05
- https://www.bloomberg.com/news/articles/2024-11-04/instagram-plans-to-use-ai-to-catch-teens-lying-about-age
- https://tech.facebook.com/artificial-intelligence/2022/6/adult-classifier/
- https://indianexpress.com/article/technology/artificial-intelligence/too-young-to-use-instagram-metas-ai-classifier-could-help-catch-teens-lying-about-their-age-9658555/