#FactCheck - Virat Kohli's Ganesh Chaturthi Video Falsely Linked to Ram Mandir Inauguration
Executive Summary:
Old footage of Indian cricketer Virat Kohli celebrating Ganesh Chaturthi in September 2023 was promoted as footage of Kohli at the Ram Mandir inauguration. The video, which shows the cricketer attending a Ganesh Chaturthi celebration last year, resurfaced with the false claim that it shows him at the Ram Mandir consecration ceremony in Ayodhya on January 22. The Hindi newspaper Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar also carried the now-viral video in their respective editions on January 23, 2024, amplifying the false claim. After a thorough investigation, we found that the video is old and shows the cricketer attending a Ganesh Chaturthi festival.
Claims:
Many social media posts, including those from news outlets such as Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar, claimed that the video shows Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22. Our investigation found that the video actually shows him attending a Ganesh Chaturthi celebration in September 2023.



The caption of the Dainik Bhaskar e-paper reads, “क्रिकेटर विराट कोहली भी नजर आए” (“Cricketer Virat Kohli was also seen”).
Fact Check:
The CyberPeace Research Team ran a reverse image search on frames from the video and found several earlier posts featuring the same black outfit. Among them, a Bollywood entertainment Instagram profile named Bollywood Society had shared the same video on 20 September 2023 with the caption, “Virat Kohli snapped for Ganapaati Darshan”.

Taking a cue from this, we ran a keyword search based on the information we had and found an article by the Free Press Journal. The article reports that Virat Kohli visited the residence of Shiv Sena leader Rahul Kanal to seek the blessings of Lord Ganpati. The viral video and the claim made by the news outlets are therefore false and misleading.
Conclusion:
The viral videos and news reports recycle old footage of Virat Kohli attending a Ganesh Chaturthi celebration in 2023; they do not show the recent Ram Mandir Pran Pratishtha. We also found no confirmation that Virat Kohli attended the ceremony in Ayodhya on 22 January. Hence, we found this claim to be fake.
- Claim: Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22
- Claimed on: YouTube, X
- Fact Check: Fake
Introduction
Twitter is a popular social media platform with millions of users around the world. Its blue tick system, which verifies the identity of high-profile accounts, has come under intense scrutiny in recent years. The platform has faced backlash from users and brands who accuse it of bias, inaccuracy, and inconsistency in its verification process. This blog post explores the questions raised about the verification process and its impact on users and big brands.
What is Twitter’s blue tick system?
The blue tick system was introduced in 2009 to help users identify the authentic accounts of well-known public figures, politicians, celebrities, sportspeople, and big brands. Twitter verifies the identity of a high-profile account and displays a blue badge next to its username.
According to one survey, there are roughly 294,000 verified Twitter accounts, i.e. accounts carrying the blue tick badge, many of which also pay the subscription fee of nearly $7.99 per month. Think of the subscribers who paid that amount and still lost their blue badge: won’t they feel cheated?
The Controversy
Despite its initial aim, the blue tick system has received much criticism from consumers and brands. Twitter’s irregular and non-transparent verification procedure has sparked accusations of prejudice and inaccuracy. Many Twitter users have complained that the network’s verification process is arbitrary and favours accounts with huge followings or celebrity status, while others have criticised the platform for certifying accounts that promote harmful or controversial content.
Furthermore, the verification mechanism has generated user confusion, as many do not understand the significance of the blue tick badge. Some users have concluded that the badge represents a Twitter endorsement or that the account is trustworthy. This confusion has resulted in users following and engaging with verified accounts that spread misleading or inaccurate information, undermining the platform’s credibility.
How did the Blue Tick Row start in India?
The row began on 21 May 2021, when the government asked Twitter to remove the blue badge from the profiles of several high-profile Indian politicians, including Indian National Congress Party Vice-President Mr Rahul Gandhi.
The blue badge gives the users an authenticated identity. Many celebrities, including Amitabh Bachchan, popularly known as Big B, Vir Das, Prakash Raj, Virat Kohli, and Rohit Sharma, have lost their blue tick despite being verified handles.
What is the Twitter policy on blue tick?
According to Twitter’s policy, blue verification badges may be removed from accounts if the account holder violates the company’s verification policy or terms of service. In such circumstances, Twitter typically notifies the account holder of the removal and the reason for it. In the instance of the “Twitter blue badge row” in India, however, it appears that Twitter did not notify the affected politicians or their representatives before revoking their verification badges. This lack of communication has exacerbated the controversy, with some critics accusing the company of acting arbitrarily and not following due process.
Is there a solution?
The “Twitter blue badge row” has no simple answer since it involves a complex convergence of concerns about free expression, social media policies, and government laws. However, here are some alternatives:
- Establish clear guidelines: Twitter should develop and constantly implement clear guidelines and policies for the verification process. All users, including politicians and government officials, would benefit from greater transparency and clarity.
- Increase transparency: Twitter’s decision-making process for deleting or restoring verification badges should be more open. This could include providing explicit reasons for badge removal, notifying impacted users promptly, and offering an appeals mechanism for those who believe their credentials were removed unfairly.
- Engage in constructive dialogue: Twitter should engage in constructive dialogue with government authorities and other stakeholders to address concerns about the platform’s content moderation procedures. This could contribute to a more collaborative approach to managing online content, leading to more effective and accepted policies.
- Follow local rules and regulations: Twitter should collaborate with the Indian government to ensure it conforms to local laws and regulations while maintaining freedom of expression. This could involve adopting more precise standards for handling requests for material removal or other actions from governments and other organisations.
Conclusion
To sum up, the “Twitter blue tick row” in India has highlighted the complex challenges that social media platforms face daily in balancing the conflicting interests of free expression, government rules, and their own content moderation procedures. While Twitter’s decision to withdraw the blue verification badges of several prominent Indian politicians drew anger from the government and parts of the public, it also raised questions about the transparency and uniformity of Twitter’s verification procedure. To deal with this issue, Twitter must establish clear verification procedures and norms, promote transparency in its decision-making, engage in constructive communication with stakeholders, and adhere to local laws and regulations. The Indian government, in turn, should collaborate with social media platforms to create effective and acceptable rules that balance the need for free expression with the protection of citizens’ rights. The “Twitter blue tick row” is just one example of the complex challenges platforms face in managing online content, and it emphasises the need for greater collaboration among platforms, governments, and civil society organisations.

Introduction
Online dating platforms have become a common way for individuals to connect in today’s digital age. For many in the LGBTQ+ community, especially in environments where offline meeting spaces are limited, these platforms offer a way to find companionship and support. However, alongside these opportunities come serious risks. Users are increasingly being targeted by cybercrimes such as blackmail, sextortion, identity theft, and online harassment. These incidents often go unreported due to stigma and concerns about privacy. The impact of such crimes can be both emotional and financial, highlighting the need for greater awareness and digital safety.
Cybercrime On LGBTQ+ Dating Apps: A Threat Landscape
According to the NCRB 2022 report, cybercrime rose by 24.4%, but queer-community-specific data is unfortunately not available. Cybercrimes targeting LGBTQ+ users are highly organised and predatory. In several Indian cities, gangs actively monitor dating platforms to identify potential victims, especially young queer people and those who are discreet about their identity. Once contact is established, perpetrators follow a standard playbook: building false trust, coaxing private exchanges, and then gradually moving to blackmail and financial exploitation. Many queer victims are blackmailed with threats of being outed to family or workplaces, often by people posing as police and demanding bribes. Fear of stigma and insensitive policing discourages reporting, and cybercriminal gangs exploit these gaps on dating apps. Despite some arrests, under-reporting persists, and activists are calling for stronger platform safety.
Types of Cyber Crimes against Queer Community on Dating Apps
- Romance scam or “Lonely hearts scam”: Scammers build trust with false stories (military, doctors, NGO workers) and quickly express strong romantic interest. They later request money, claiming emergencies. They often try to create multiple accounts to avoid profile bans.
- Sugar daddy scam: The fraudster offers money or an allowance in exchange for chatting, sending photos, or other interactions. They usually name a specific amount and insist on uncommon payment gateways. After promising to send you a large sum, they often make up a story like, “My last sugar baby cheated me, so you must first send me a small amount to prove you are trustworthy.” This is just a trick to make you send money first.
- Sextortion / Blackmail scam: Scammers record explicit chats or pretend to be underage, then threaten exposure unless you pay. Some target discreet users. Never send explicit content or pay blackmailers.
- Investment Scams: Scammers posing as traders or bankers convince victims to invest in fake opportunities. Some "flip" small amounts to build trust, then disappear with larger sums. Real investors won’t approach you on dating apps. Don’t share financial info or transfer money.
- Pay-Before-You-Meet scam: Scammer demands upfront payment (gift cards, gas money, membership fees) before meeting, then vanishes. Never pay anyone before meeting in person.
- Security app registration scam: Scammers ask you to register on fake "security apps" to steal your info, claiming it ensures your safety. Research apps before registering. Be wary of quick link requests.
- The Verification code scam: Scammers trick you into giving them SMS verification codes, allowing them to hijack your accounts. Never share verification codes with anyone.
- Third-party app links: Mass spam messages with suspicious links that steal info or infect devices. Don’t click suspicious links or “Google me” messages.
- Support message scam: Messages pretending to be from application support, offering prizes or fake shows to lure you to malicious sites.
Platform Accountability & Challenges
Online dating platforms in India are marked by weak grievance redressal, poor takedown of abusive profiles, and limited moderation practices. Most platforms appoint grievance officers or offer an in-app complaint portal, but complaints often go unanswered or receive only automated, AI-generated responses. This highlights the gap between policy and on-the-ground enforcement.
Abusive or fake profiles, often used for scams, hate crimes, and outing LGBTQ+ individuals, remain active long after being reported. In India, organised extortion gangs have exploited such profiles to lure, assault, rob, and blackmail queer men. Moderation teams often struggle with backlogs and lack the resources needed to handle even the most serious complaints.
Despite offering privacy settings and restricting profile visibility, moderation practices in India are still weak, leaving large segments of users vulnerable to impersonation, catfishing, and fraud. The concept of pseudonymisation can help protect vulnerable communities, but it is difficult to distinguish authentic users from malicious actors without robust, privacy-respecting verification systems.
Many LGBTQ+ individuals prefer to keep their identity confidential, while others are more vocal about it; in either case, the data an individual shares with an online dating platform must be vigilantly protected. The Digital Personal Data Protection Act, 2023, mandates the protection of personal data. Section 8(4) provides: “A Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the rules made thereunder.” Accordingly, digital platforms collecting such data should adopt the necessary technical and organisational measures to comply with data protection law.
Recommendations
The Supreme Court has been proactive in this regard: Navtej Singh Johar v. Union of India decriminalised same-sex relationships; Justice K.S. Puttaswamy (Retd.) v. Union of India and Ors. recognised the right to privacy as a fundamental right; and, most recently, a 2025 ruling affirmed the right to digital access. However, more robust legal frameworks are still required to protect LGBTQ+ people online.
There is a requirement for a dedicated commission or an empowered LGBTQ+ cell. Like the National Commission for Women (NCW), which works to safeguard the rights of women, a similar commission would address community-specific issues, including cybercrime, privacy violations, and discrimination on digital platforms. It may serve as an institutional link between the victim, the digital platforms, the government, and the police. Dating Platforms must enhance their security features and grievance mechanisms to safeguard the users.
Best Practices
Scammers use data and scripted playbooks to target individuals seeking love, sex, money, or companionship. Avoid financial transactions prompted by strangers, such as signing up for third-party platforms or services. Scammers may also create accounts in others’ names, which can then be used to access dating platforms and harm legitimate users. Be vigilant about sharing sensitive information, such as private images, contact details, or addresses, as scammers can use it to threaten you. Stay smart, stay cyber safe.
References
- https://www.hindustantimes.com/htcity/cinema/16yearold-queer-child-pranshu-dies-by-suicide-due-to-bullying-did-we-fail-as-a-society-mental-health-expert-opines-101701172202794.html#google_vignette
- https://www.ijsr.net/archive/v11i6/SR22617213031.pdf
- https://help.grindr.com/hc/en-us/articles/1500009328241-Scam-awareness-guide
- http://meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://mib.gov.in/sites/default/files/2024-02/IT%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20English.pdf
Introduction and Brief Analysis
The movie “The Artifice Girl” portrays a law enforcement agency developing an AI-based persona of a 12-year-old girl who appears exactly like a real person. Believing her to be an actual child, perpetrators of child sexual exploitation were caught attempting to seek sexual favours. The movie shows AI aiding law enforcement, but in reality the emergence of artificial intelligence has posed numerous challenges in multiple directions. This example illustrates both the promise and the complexity of using AI in sensitive areas like law enforcement, where technological innovation must be carefully balanced with ethical and legal considerations.
Detection and protection tools are constantly competing with technologies that generate content, automate grooming, and challenge legal boundaries. Such technological advancements have provided fertile ground for the proliferation of Child Sexual Exploitation and Abuse Material (CSEAM). Termed “child pornography” under Section 2(da) of the Protection of Children from Sexual Offences Act, 2012, it is defined as “any visual depiction of sexually explicit conduct involving a child which includes a photograph, video, digital or computer-generated image indistinguishable from an actual child, and image created, adapted, or modified, but appears to depict a child.”
Artificial intelligence is a category of technologies that attempt to emulate human thought and behaviour using algorithms and datasets. Two primary applications matter in the context of CSEAM: classifiers and content generators. Classifiers are programs that learn from large datasets, which may be labelled or unlabelled, and then classify content as permitted, restricted, or illegal. Generative AI is also trained on large datasets but uses that knowledge to create new content. The majority of current AI research related to CSEAM uses artificial neural networks (ANNs), a type of AI that can be trained to identify connections between items (classification) and to generate novel combinations of items (e.g., elements of a picture) based on the training data.
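To make the classifier idea concrete, the sketch below trains a single artificial neuron, the basic building block of the ANNs mentioned above, to separate toy two-number feature vectors into “allowed” and “restricted” classes. This is purely illustrative: the features, labels, and numbers are invented for demonstration, and real moderation classifiers are deep networks trained on large labelled datasets.

```python
import math

# Illustrative only: a single logistic "neuron" as a binary classifier.
# Inputs are hypothetical 2-number feature vectors, not real content.

def predict(weights, bias, features):
    """Logistic output in (0, 1): score that the item is 'restricted'."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss of the neuron."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = predict(weights, bias, x)
            err = p - y  # derivative of log-loss with respect to z
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# Toy labelled dataset: 0 = "allowed", 1 = "restricted"
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]

w, b = train(X, y)
print(predict(w, b, [0.15, 0.15]))  # low score: classified as allowed
print(predict(w, b, [0.85, 0.85]))  # high score: classified as restricted
```

The same learn-from-labelled-examples loop, scaled up to millions of parameters and image features, is what lets moderation classifiers flag likely-illegal content for human review.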
Current Legal Landscape
The legal landscape for AI is still unclear and evolving, with different nations trying to track the evolution of AI and develop laws; some laws, however, directly address CSEAM. The International Centre for Missing and Exploited Children (ICMEC) combats illegal sexual content involving children and publishes “Model Legislation” recommending sanctions and sentencing. According to research performed in 2018, illegal sexual content involving children is outlawed in 118 of the 196 Interpol member states; this figure represents countries with legislation sufficient to meet 4 or 5 of the 5 criteria defined by the ICMEC.
CSEAM in India can be reported on various portals like the ‘National Cyber Crime Reporting Portal’. Online crimes related to children, including CSEAM, can be reported to this portal by visiting cybercrime.gov.in. This portal allows anonymous reporting, automatic FIR registration and tracking of your complaint. ‘I4C Sahyog Portal’ is another platform managed by the Indian Cyber Crime Coordination Centre (I4C). This portal integrates with social media platforms.
The Indian legal front for AI is evolving, and CSEAM is well addressed in Indian law and through judicial pronouncements. The Supreme Court judgment in Just Rights for Children Alliance and Anr. v. S. Harish and Ors. is a landmark in this regard. The judgment highlighted the following principles:
- The term “child pornography” should be substituted with “Child Sexual Exploitation and Abuse Material” (CSEAM) and should not be used in any further judicial proceeding, order, or judgment. Parliament should amend POCSO accordingly so that the term CSEAM is endorsed.
- Parliament should consider amending Section 15(1) of POCSO to make it more convenient for the general public to report offences through an online portal.
- Implementing sex education programs to give young people a clear understanding of consent and the consequences of exploitation. To help prevent Problematic sexual behaviour (PSB), schools should teach students about consent, healthy relationships and appropriate behaviour.
- Support services to the victims and rehabilitation programs for the offenders are essential.
- Early identification of at-risk individuals and implementation of intervention strategies for youth.
Distinctive Challenges
According to a report by the National Centre for Missing and Exploited Children (NCMEC), a significant number of reports about child sexual exploitation and abuse material (CSEAM) are linked to perpetrators based outside the country. This highlights major challenges related to jurisdiction and anonymity in addressing such crimes. Since the issue concerns children and considering the cross-border nature of the internet and the emergence of AI, Nations across the globe need to come together to solve this matter. Delays in the extradition procedure and irregular legal processes across the jurisdictions hinder the apprehension of offenders and the delivery of justice to victims.
CyberPeace Recommendations
For effective regulation of AI-generated CSEAM, laws must be strengthened so that AI developers and trainers prevent misuse of their tools. AI should be designed with ethical considerations in mind, ensuring respect for privacy, consent, and child rights. AI models could also carry self-regulation mechanisms that recognise and restrict red flags related to CSEAM and that indicate grooming or potential abuse.
A distinct Indian CSEAM reporting portal is urgently needed, as cybercrimes are increasing throughout the nation; relying on the integrated portal alone may let AI-based CSEAM cases go unnoticed. A dedicated portal would enable faster response and focused tracking. Since AI-generated content is detectable, the portal should also include an automated AI-content detection system linked directly to law enforcement for swift action.
Furthermore, international cooperation is of utmost importance to meet AI-enabled challenges and to fill jurisdictional gaps; a united global effort is required. Common technology and unified international laws are essential to tackle AI-driven child sexual exploitation across borders and protect children everywhere. CSEAM is an extremely serious issue, and children are among the most vulnerable to such harm. This threat must be addressed without delay through stronger policies, dedicated reporting mechanisms, and swift action.
References:
- https://www.sciencedirect.com/science/article/pii/S2950193824000433?ref=pdf_download&fr=RR-2&rr=94efffff09e95975
- https://aasc.assam.gov.in/sites/default/files/swf_utility_folder/departments/aasc_webcomindia_org_oid_4/portlet/level_2/pocso_act.pdf
- https://www.manupatracademy.com/assets/pdf/legalpost/just-rights-for-children-alliance-and-anr-vs-sharish-and-ors.pdf
- https://www.icmec.org
- https://www.missingkids.org/theissues/generative-ai