#FactCheck - AI-generated images of Virat Kohli falsely claimed to be sand art by a child
Executive Summary:
A set of pictures spreading on social media, which claims to show a boy creating sand art of Indian cricketer Virat Kohli, is false. The portrayed artwork is not real sand art. Analysis using AI detection tools such as ‘Hive’ and ‘Content at Scale AI Detection’ confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without realising that they are computer-generated.

Claims:
A collage of striking pictures appears to show a young boy creating sand art of Indian cricketer Virat Kohli.




Fact Check:
When we examined the posts, we found anomalies in each photo that are common in AI-generated images.

These anomalies include the abnormal shape of the child’s feet, a logo blended into the sand colour in the second image, and the misspelling ‘spoot’ instead of ‘sport’. The cricket bat is perfectly straight, which would be odd for a portrait made of sand. In one photo the child’s left hand bears a tattoo, while in the other photos it does not. Additionally, the boy’s face in the second image does not match his face in the other images. These inconsistencies made us suspect that the images were synthetic media.
We then ran the images through an AI-generated image detection tool named ‘Hive’, which rated them as 99.99% likely to be AI-generated. We cross-checked with another detection tool named ‘Content at Scale’.
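For readers who want to automate this kind of first-pass check, the sketch below shows the general shape of querying an AI-image-detection service over HTTP. The endpoint URL, header, and response field here are hypothetical placeholders, not Hive’s or Content at Scale’s actual API; the real interfaces are documented by each vendor.

```python
import requests

# Hypothetical endpoint and response fields, for illustration only;
# real detection services (e.g. Hive) publish their own API schemas.
API_URL = "https://api.example-detector.com/v1/classify-image"
API_KEY = "YOUR_API_KEY"

def ai_likelihood(image_path: str) -> float:
    """Upload an image and return the service's AI-generated score in [0, 1]."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.9999}
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = ai_likelihood("sand_art_collage.jpg")
    print(f"AI-generated probability: {score:.2%}")
```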


Hence, we conclude that the viral collage of images is AI-generated and not sand art made by a child. The claim is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. AI detection tools and close analysis of the photos indicate that they were created by an AI image-generation tool rather than by a real sand artist. The images therefore do not represent the alleged claim or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading
Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread as the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation about different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected 10 months of data from Twitter (now X), comprising 2,397,388 tweets flagged as containing low-credibility information, sent by 448,103 users, and reviewed who was sending them. They found that approximately a third of the low-credibility tweets had been posted by people using just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] In other words, it does not take many influencers to sway the beliefs and opinions of large numbers of people; the researchers attribute this to the impact of what they describe as superspreaders.
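To make the "few accounts, most of the spread" statistic concrete, here is a minimal sketch that computes the same kind of concentration measure on synthetic data: given per-account counts of flagged tweets, it reports the share produced by the top-k accounts. The numbers below are illustrative, not the study’s dataset.

```python
from collections import Counter

def top_k_share(tweets_per_account: Counter, k: int) -> float:
    """Fraction of all flagged tweets posted by the k most prolific accounts."""
    total = sum(tweets_per_account.values())
    top = sum(count for _, count in tweets_per_account.most_common(k))
    return top / total

# Illustrative heavy-tailed distribution: a handful of accounts
# produce most of the flagged content, as the study observed.
counts = Counter({f"user{i}": 10_000 // (i + 1) for i in range(10_000)})

print(f"Top 10 accounts:    {top_k_share(counts, 10):.1%} of flagged tweets")
print(f"Top 1,000 accounts: {top_k_share(counts, 1000):.1%} of flagged tweets")
```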
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, even contributing to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of election-related misinformation, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation by experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deepfakes, and other AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
According to the Center for Countering Digital Hate (US), the "Disinformation Dozen", a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
Users can also spread misinformation unknowingly, by forwarding content that does not come from the original source but is propagated by amplifiers using other sources, websites, or YouTube videos. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not create or deliberately popularise the misinformation, but their broad reach exposes many more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to lend their schemes credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can piggyback on trending topics or hashtags to inject misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside closed online groups. There are reported incidents where bots were found to be the main sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace misinformation back to its source. Understanding how bots work and how they influence information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
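As an illustration of how such automation leaves measurable traces, the sketch below scores an account against two heuristics commonly used in bot research: posting rate and the fraction of duplicate posts. The thresholds are illustrative assumptions, not a production-grade bot detector.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts: list[str]      # text of each post in the observation window
    window_hours: float   # length of the observation window

def bot_signals(acct: Account,
                max_rate_per_hour: float = 20.0,
                max_duplicate_ratio: float = 0.5) -> list[str]:
    """Return the heuristic red flags this account trips (illustrative thresholds)."""
    flags = []
    rate = len(acct.posts) / max(acct.window_hours, 1e-9)
    if rate > max_rate_per_hour:               # inhuman posting speed
        flags.append(f"high posting rate ({rate:.0f} posts/hour)")
    duplicate_ratio = 1 - len(set(acct.posts)) / max(len(acct.posts), 1)
    if duplicate_ratio > max_duplicate_ratio:  # the same text spammed repeatedly
        flags.append(f"duplicate content ({duplicate_ratio:.0%} repeats)")
    return flags

suspect = Account("user123", posts=["Shocking truth!"] * 90 + ["hello"] * 10,
                  window_hours=2.0)
print(bot_signals(suspect))  # both heuristics trip for this account
```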
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it might attempt to provoke strong reactions or to mould public opinion. Netizens must question the credibility of information, verify its sources, and develop the cognitive skills needed to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies for countering misinformation. Netizens are advised to share only accurate information and to fact-check and debunk misinformation. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithmic mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise such algorithms to identify these patterns and flag any misleading, inaccurate, or fake information (a minimal sketch of this kind of repeated-content flagging follows this list).
- Dedicated Reporting Tools: It is important for tech/social media platforms to adopt robust policies to take action against accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users to flag/report suspicious accounts and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content, and must implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking organisations, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
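As referenced in the algorithm-improvement recommendation above, one minimal way a platform can flag repetitive posting of already-debunked content is to normalise post text and compare hashes against a registry of fact-checked claims. This is a simplified sketch; real systems would add perceptual or locality-sensitive hashing to catch paraphrases and altered images as well.

```python
import hashlib
import re

def normalise(text: str) -> str:
    """Lowercase, drop punctuation and collapse whitespace, so trivial edits
    to a debunked post still map to the same fingerprint."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def fingerprint(text: str) -> str:
    return hashlib.sha256(normalise(text).encode("utf-8")).hexdigest()

# Registry of posts already debunked by fact-checkers (illustrative entry).
debunked = {fingerprint("A young boy created this sand art of Virat Kohli!")}

def is_known_misinformation(post: str) -> bool:
    return fingerprint(post) in debunked

# Minor edits (case, punctuation, spacing) no longer evade the check.
print(is_known_misinformation("a young  boy created this SAND ART of virat kohli..."))  # True
```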
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html
Introduction
Social media platforms have begun to shape the public understanding of history in today’s digital landscape. You may have encountered videos, images, and posts that claim to reveal an untold story about our past. For example, you might have seen a post on your feed featuring a painted or black-and-white image of a princess, labelled "the most beautiful princess of Rajasthan who fought countless wars but has been erased from history". Such emotionally charged narratives spread quickly, without any academic scrutiny or citation, and those who share them often believe them to be true.
Such unverified content may look harmless, but it contributes profoundly to the systematic distortion of historical information. Such misinformation recurs on feeds and becomes embedded in popular memory. It misguides public discourse and undermines scholarly research on the relevant topics. Sometimes, it also contributes to communal outrage and social tensions. It is time to recognise that protecting the integrity of our cultural and historical narratives is not only an academic concern but a legal and institutional responsibility. This is where the role of the Ministry of Culture becomes critical.
Pseudohistorical News Information in India
Fake news and misinformation are frequently disseminated via images, pictures, and videos on various messaging applications, a phenomenon derisively referred to as “WhatsApp University”. WhatsApp has become India’s favourite method of communication, so users must stay very conscious of what they consume through forwarded messages. Academic historians strive to understand the past in its own context, differentiating it from the present, whereas pseudo-historians manipulate history to satisfy their political agendas. Unfortunately, this wave of pseudo-history is expanding rapidly, with platforms like 'WhatsApp University' playing a significant role in amplifying its spread. This has led to an increase in fake historical news and paid journalism. Unlike pseudo-history, academic history is created by professional historians in academic contexts, adhering to strict disciplinary guidelines, including peer review and expert examination of justifications, assertions, and publications.
How to Identify Pseudo-Historic Misinformation
1. Lack of Credible Sources: There is a lack of reliable primary and secondary sources. Instead, pseudohistorical works depend on hearsay and unreliable eyewitness accounts.
2. Selective Use of Evidence: Misinformative posts present only those facts that support their argument and downplay facts that contradict their assertions.
3. Incorporation of Conspiracy Theories: They often include conspiracy theories that postulate secret groups, suppressed knowledge, or evil powers influencing historical events. Such hypotheses frequently lack any supporting data.
4. Extravagant Claims: Pseudo-historic tales sometimes present unbelievable assertions about historic persons or events.
5. Lack of Peer Review: Such work is almost never published on authentic academic platforms; since it is not pitched for academic publication, it circulates instead on platforms like Instagram and Facebook. Authentic historical research, by contrast, is examined by subject-matter authorities.
6. Neglect of Established Historiographical Methods: Such posts lack knowledge of a recognised methodology and procedures, like the critical study of sources.
7. Ideologically Driven Narratives: Political, communal, ideological, and personal opinions are prioritised in such posts. The author pursues a predetermined agenda instead of seeking the truth.
8. Exploitation of Gaps in the Historical Record: Pseudo-historians often use missing or unclear parts of history to suggest that regular historians are hiding important secrets. They make the story sound more mysterious than it is.
9. Rejection of Scholarly Consensus: Pseudo-historians often reject the views of experts and historians, choosing instead to believe and promote their own fringe ideas.
10. Emphasis on Sensationalism: Pseudo-historical works may put more emphasis on sensationalism than academic rigour to pique public interest rather than offer a fair and thorough account of the history.
Legal and Institutional Responsibility
Public opinion is the heart of democracy and should not be distorted by misinformation or disinformation. Vested interests cannot be allowed to sabotage it, and academic claims in particular should not be shared unverified, without fact-checking. Such unverified claims can be called out, and action can be taken, only if the authorities take charge. In India, the Indian Council of Historical Research (ICHR) oversees historical scholarship. As per its official website, its stated aim is to “take all such measures as may be found necessary from time to time to promote historical research and its utilisation in the country”. However, it is now essential to modernise the functioning of the ICHR to meet the demands of the digital era. Concerned authorities can run campaigns and awareness programmes to question the validity and research behind such misinformative posts. Just as there are fact-checking mechanisms for news, there must also be an institutional push to fact-check and regulate historical content online. The following measures can be taken by authorities to strike down such misinformation online:
- Launch a nationwide awareness campaign about historical misinformation.
- Work with scholars, historians, and digital platforms to promote verified content.
- Encourage social media platforms to introduce fact-check labels for historical posts.
- Consider legal frameworks that penalise the deliberate spread of false historical narratives.
History is part of our national heritage, and preserving its accuracy is a matter of public interest. Misinformation and pseudo-history combine to mislead the public and weaken the foundation of shared cultural identity. In this digital era, where false narratives spread rapidly, it is important to promote critical thinking, encourage responsible academic work, and ensure that the public has access to accurate and well-researched historical information. Protecting the integrity of history is not just the work of historians; it is a collective responsibility that serves the future of our democracy.
References:
- https://kuey.net/index.php/kuey/article/view/4091
- https://www.drishtiias.com/daily-news-editorials/social-media-and-the-menace-of-false-information

Introduction
The Information Technology (IT) Ministry has tested a new parental control app called ‘SafeNet’, intended to be pre-installed on all mobile phones, laptops and personal computers (PCs). The government's approach is collaborative, involving Internet service providers (ISPs), the Department of School Education, and technology manufacturers to address online safety concerns. Awareness campaigns and the proposed SafeNet application aim to educate parents about available resources for protecting their children online.
The Need for SafeNet App
SafeNet is conceived as a suite of tools to support digital parenting, combining content filtering with live location monitoring to protect children's online experiences. The ability to oversee calls and messages adds another layer of security, giving parents visibility into the communications their children send and receive. Some pointers regarding the parental control app that can be taken into consideration are as follows.
1. Easy to use and set up: The app should be intuitive and easy to use, and the interface plays a significant role in achieving this. The setup process should be simple enough for parents to get started without technical issues, and parents should be able to modify settings and monitor their children's activity with ease.
2. Privacy and data protection: Considering the sensitive nature of children's data, strong privacy and data protection measures are paramount. The app should meet strict privacy standards, including encryption protocols, secure data storage practices, and transparent data handling policies with a right of erasure, to safeguard children's personal information from unauthorised access (a minimal sketch of such at-rest encryption follows this list).
3. Features for Time Management: Effective parental control applications frequently include capabilities for regulating screen time and establishing usage limits. The app should enable parents to set time limits for specific applications or devices, thereby promoting good digital habits and preventing excessive screen time.
4. Comprehensive Features of SafeNet: The app's commitment to addressing the multifaceted aspects of online safety is reflected in its robust feature set. It allows parents to set fine-grained content filters, manage the time their children spend in the digital world, and block age-inappropriate content, reflecting an understanding of the digital ecosystem's complexities and the varied threats within it.
5. Adaptable to the needs of the family: SafeNet offers both parent and child versions of the app for shared devices. This adaptability to diverse family dynamics enhances its usability and effectiveness in real-world scenarios, acknowledging the variety of family structures and the need for tools that are as flexible as the families they serve.
6. Strong Support From Government: The initiative enjoys support from both government and industry stakeholders, underscoring a collective commitment to the cause. Recommendations for the pre-installation of SafeNet on devices by an industry consortium resonate with directives from the Prime Minister's Office (PMO), creating a blend of policy and practice. The involvement of major telecommunications players and Internet service providers underscores the industry's recognition of the importance of such initiatives, emphasising a collaborative approach towards deploying digital safeguarding measures at scale.
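To illustrate the kind of at-rest encryption that point 2 above calls for, here is a minimal sketch using the `cryptography` library's Fernet recipe (symmetric authenticated encryption). It illustrates the principle only; SafeNet's actual implementation has not been made public.

```python
from cryptography.fernet import Fernet

# In a real app the key would come from a secure keystore or OS keychain,
# never hard-coded or stored beside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"child_id": 42, "screen_time_minutes": 95}'

token = fernet.encrypt(record)   # this ciphertext is what gets stored
print(token[:32], b"...")        # unreadable without the key

assert fernet.decrypt(token) == record  # only key holders can recover the data
```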
Recommendations
The government's efforts to implement parental controls are commendable, as they align with societal goals of child welfare and protection. These include providing parents with tools to manage and monitor their children's Internet usage, addressing concerns about inappropriate content and online risks. The following suggestions are made to further support the government's initiative:
1. The administration can consider creating a verification mechanism similar to how identities are verified when mobile SIMs are issued. While this certainly makes for a longer process, it will help address concerns about the app being misused for stalking and surveillance if it is made available as a default on all digital devices.
2. Parental controls are available on several platforms and are designed to shield, not fetter. Finding the right balance between protection and allowing for creative exploration is thus crucial to ensuring children develop healthy digital habits while fostering their curiosity and learning potential. It might be helpful to the administration to establish updated policies that prioritise the privacy-protection rights of children so that there is a clear mandate on how and to what extent the app is to be used.
3. Policy reforms can be further supported through workshops, informational campaigns, and resources that educate parents and children about the proper use of the app, the concept of informed consent, and the importance of developing healthy, transparent communication between parents and children.
Conclusion
Safety is a significant step towards child protection and development. Children have to rely on adults for protection and cannot always identify or sidestep risk. In this context, the United Nations Convention on the Rights of the Child emphasises protection efforts for children, noting that children have the "right to protection". A parental safety app can therefore contribute significantly to children's general well-being and health, besides helping prevent harms such as drug misuse. On the whole, while technological solutions can be helpful, one also needs to focus on educating people on digital safety, responsible Internet use, and parental supervision.
References
- https://www.hindustantimes.com/india-news/itministry-tests-parental-control-app-progress-to-be-reviewed-today-101710702452265.html
- https://www.htsyndication.com/ht-mumbai/article/it-ministry-tests-parental-control-app%2C-progress-to-be-reviewed-today/80062127
- https://www.varindia.com/news/it-ministry-to-evaluate-parental-control-software
- https://www.medianama.com/2024/03/223-indian-government-to-incorporate-parental-controls-in-data-usage/