#FactCheck - AI-generated image of Virat Kohli falsely claimed to be sand art by a child
Executive Summary:
A picture of a boy creating sand art of Indian cricketer Virat Kohli is spreading on social media, but the claim attached to it is false. The image portrayed is not a real sand sculpture. Analysis with AI-detection tools such as 'Hive' and 'Content at Scale AI Detection' confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without realising that they are computer-generated.

Claims:
A collage of pictures displays a young boy creating sand art of Indian cricketer Virat Kohli.

Fact Check:
On examining the posts, we found anomalies in each photo that are common in AI-generated images.

The anomalies include the abnormal shape of the child's feet, a logo blended into the sand colour in the second image, and the misspelling 'spoot' instead of 'sport'. The cricket bat is perfectly straight, which would be odd for a portrait made of sand. The child's left hand bears a tattoo in one photo, while in the other photos it has none. Additionally, the boy's face in the second image does not match his face in the other images. These inconsistencies made us suspect that the images are synthetic media.
We then ran the images through an AI-image detection tool named 'Hive', which found them 99.99% likely to be AI-generated. We verified this with another detection tool, 'Content at Scale AI Detection', which also flagged the images as AI-generated.
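For readers who want to automate this kind of check, the sketch below shows the general pattern of querying an image-detection service over HTTP. It is a minimal sketch only: the endpoint URL, request format, and response field are hypothetical placeholders, not the documented APIs of Hive or Content at Scale.

```python
# Minimal sketch of querying an AI-image detection service over HTTP.
# The URL, payload, and response shape are hypothetical placeholders.
import requests

def ai_generated_score(image_path: str, api_key: str) -> float:
    """Upload an image and return the service's AI-likelihood score (0-1)."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://api.example-detector.com/v1/classify",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated": 0.9999}
    return response.json()["ai_generated"]

if __name__ == "__main__":
    score = ai_generated_score("viral_collage.jpg", api_key="YOUR_KEY")
    print(f"AI-generated likelihood: {score:.2%}")
```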


Hence, we conclude that the viral collage of images is AI-generated and is not sand art made by a child. The claim is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. Analysis of the photos with AI-detection tools indicates that they were created by an AI image-generation tool rather than by a real sand artist. The images therefore do not represent the alleged claim or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading
Introduction
According to Statista, the global artificial intelligence software market is forecast to reach around 126 billion US dollars by 2025, with enterprise adoption of AI having grown by roughly 270% over the preceding four years. The top three verticals in the AI market are BFSI (Banking, Financial Services, and Insurance), Healthcare & Life Sciences, and Retail & e-commerce. These sectors benefit from vast data generation and a critical need for advanced analytics. AI is used for fraud detection, customer service, and risk management in BFSI; diagnostics and personalised treatment plans in healthcare; and marketing and inventory management in retail.
The Chairperson of the Competition Commission of India (CCI), Smt. Ravneet Kaur, raised the concern that Artificial Intelligence has the potential to aid cartelisation by automating collusive behaviour through predictive algorithms. She explained that the mere use of algorithms is not anti-competitive in itself, but if the algorithms are manipulated, that raises a valid concern about competition in markets.
This blog focuses on how policymakers can balance fostering innovation and ensuring fair competition in an AI-driven economy.
What is the Risk Created by AI-driven Collusion?
AI systems rely on predictive algorithms, which could aid cartelisation by automating collusive behaviour. AI-driven collusion could occur through:
- The use of predictive analytics to coordinate pricing strategies among competitors.
- The lack of human oversight in algorithm-driven decision-making, which can lead to tacit collusion, where competitors coordinate their actions without explicitly communicating or agreeing to do so (see the sketch after this list).
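To make the mechanism concrete, here is a toy simulation, with purely illustrative numbers and no real market data, of how two pricing bots that each follow a simple "never undercut, follow the rival upward" rule can drift to the monopoly price without exchanging a single message:

```python
# Toy simulation of tacit collusion between two pricing algorithms.
# All numbers are illustrative; this is not any real market or product.

MONOPOLY_PRICE = 30.0  # assumed joint-profit-maximising price

def matching_price(own_last: float, rival_last: float) -> float:
    """Never undercut: follow the higher of the two observed prices upward."""
    return min(MONOPOLY_PRICE, max(own_last, rival_last) + 0.5)

# Both sellers start near competitive price levels.
a, b = 12.0, 15.0
for _ in range(40):
    # Each bot only observes the rival's last price; there is no communication.
    a, b = matching_price(a, b), matching_price(b, a)

print(f"final prices: a={a:.2f}, b={b:.2f}")  # both converge to 30.00
```

Neither bot is instructed to collude; the collusive outcome emerges from each one's reaction to the other's observed price, which is precisely why such behaviour is hard to detect and prove.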
AI has been raising antitrust concerns, and a recent example is the partnership between Microsoft and OpenAI, which has drawn scrutiny from national competition authorities over potential competition law issues. While the partnership is expected to accelerate innovation, it also raises concerns about potential anticompetitive effects, such as market foreclosure or the creation of barriers to entry for competitors, and has therefore been under review by the German and UK competition authorities. The core problem lies in detecting and proving whether collusion is taking place.
The Role of Policy and Regulation
The uncertainty AI introduces into competitive dynamics creates a need for algorithmic transparency and accountability to mitigate the risks of AI-driven collusion. This calls for regulatory frameworks that mandate the disclosure of algorithmic methodologies and establish clear guidelines for the development and deployment of AI. These frameworks or guidelines should encourage collaboration between competition watchdogs and AI experts.
The global best practices and emerging trends in AI regulation already include respect for human rights, sustainability, transparency and strong risk management. The EU AI Act could serve as a model for other jurisdictions, as it outlines measures to ensure accountability and mitigate risks. The key goal is to tailor AI regulations to address perceived risks while incorporating core values such as privacy, non-discrimination, transparency, and security.
Promoting Innovation Without Stifling Competition
Policymakers need to balance regulatory measures with room for innovation, ensuring that the two priorities do not hinder each other. They can:
- Create adaptive, forward-looking regulatory approaches that keep pace with technological advancement and allow quick adjustments in response to new AI capabilities and market behaviours.
- Competition watchdogs need to recruit domain experts to assess competition amid rapid changes in the technology landscape. Create a multi-stakeholder approach that involves regulators, industry leaders, technologists and academia who can create inclusive and ethical AI policies.
- Businesses can be offered incentives, such as recognition through certifications, grants, or other benefits, for adopting ethical AI practices.
- Launch studies, such as the CCI's market study, on the impact of AI on competition. These can become a driving force for sustainable growth alongside technological advancement.
Conclusion: AI and the Future of Competition
We must promote a multi-stakeholder approach that enhances regulatory oversight and incentivises ethical AI practices. This is needed to strike the delicate balance that safeguards competition while driving sustainable growth. As AI continues to redefine industries, embracing collaborative, inclusive, and forward-thinking policies will be critical to building an equitable and innovative digital future.
Lawmakers and policymakers engaged in drafting these frameworks need to ensure that they are adaptive to change and foster innovation. Fair competition and innovation are not mutually exclusive goals; they are complementary. Therefore, a regulatory framework that promotes transparency, accountability, and fairness in AI deployment must be established.
References
- https://www.thehindu.com/sci-tech/technology/ai-has-potential-to-aid-cartelisation-fair-competition-integral-for-sustainable-growth-cci-chief/article69041922.ece
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.ey.com/en_in/insights/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation
- https://www.business-standard.com/industry/news/ai-has-potential-to-aid-fair-competition-for-sustainable-growth-cci-chief-124122900221_1.html
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring an image inaccurately asserting that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the circulating video does not show an attack on the Ashkelon power plant in Israel; instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
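Fact-checkers often script this first step. The sketch below is a minimal example, assuming the OpenCV library and a hypothetical file name, that extracts evenly spaced keyframes from a clip so that each can be submitted to a reverse-image search such as Google Lens:

```python
# Minimal sketch of extracting keyframes from a viral video for
# reverse-image search. Requires: pip install opencv-python.
# The file name and sampling interval are illustrative.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Save one frame every few seconds as a JPEG and return the file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unreported
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if index % step == 0:
            name = f"keyframe_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    frames = extract_keyframes("viral_clip.mp4")  # hypothetical file name
    print(f"saved {len(frames)} keyframes for reverse-image search")
```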

This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of being cautious when sharing unverified media. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: The video shows a massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
In the digital realm of social media, Meta Platforms, the company behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves into critical child-safety issues on these widely used platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concern because of its potential appeal to adults and the associated risk of inappropriate interactions. It is a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurrect, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough actions against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
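One widely used building block of such detection tools is perceptual-hash matching, where uploads are compared against a database of hashes of known violating images. The sketch below is a minimal illustration using the open-source imagehash library as a stand-in for production systems such as PhotoDNA; the blocklist entry is a made-up placeholder, not real data.

```python
# Minimal sketch of perceptual-hash matching, one building block of
# automated content detection. Requires: pip install imagehash pillow.
import imagehash
from PIL import Image

# In production this would be a large, vetted database of hashes
# maintained with child-safety organisations; this entry is a placeholder.
BLOCKLIST = {imagehash.hex_to_hash("fa5c1e3b9d207648")}

def is_known_bad(image_path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose perceptual hash is near any blocklisted hash."""
    upload_hash = imagehash.phash(Image.open(image_path))
    # Hamming distance tolerates small edits such as resizing or re-encoding.
    return any(upload_hash - bad <= max_distance for bad in BLOCKLIST)

if __name__ == "__main__":
    print(is_known_bad("upload.jpg"))  # hypothetical file name
```

Hash matching only catches known material; it would complement, not replace, classifier-based detection and human review.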
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company