#FactCheck - Virat Kohli's Ganesh Chaturthi Video Falsely Linked to Ram Mandir Inauguration
Executive Summary:
Old footage of Indian cricketer Virat Kohli celebrating Ganesh Chaturthi in September 2023 is being promoted as footage of him at the Ram Mandir inauguration. The video has circulated with the false claim that it shows Kohli at the Ram Mandir consecration ceremony in Ayodhya on January 22. The Hindi newspaper Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar also carried the now-viral video in their editions on January 23, 2024, amplifying the false claim. A thorough investigation found that the video is old and shows the cricketer attending a Ganesh Chaturthi celebration.
Claims:
Many social media posts, including those from news outlets such as Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar, claim that the video shows Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22. On investigation, the video was found to be of Kohli attending a Ganesh Chaturthi celebration in September 2023.



The caption in the Dainik Bhaskar e-paper reads, “क्रिकेटर विराट कोहली भी नजर आए” (“Cricketer Virat Kohli was also seen”).
Fact Check:
The CyberPeace Research Team ran a reverse image search on the video and found several earlier results showing Kohli in the same black outfit. Among them, a Bollywood entertainment Instagram profile named Bollywood Society had shared the same video on 20 September 2023 with the caption, “Virat Kohli snapped for Ganapaati Darshan”.

Taking this as a lead, we ran a keyword search and found an article by the Free Press Journal reporting that Virat Kohli had visited the residence of Shiv Sena leader Rahul Kanal to seek the blessings of Lord Ganpati. The viral video and the claim made by the news outlets are therefore false and misleading.
Conclusion:
The footage circulated by the viral posts and news outlets is old: it shows Virat Kohli attending Ganesh Chaturthi in 2023, not the recent auspicious day of the Ram Mandir Pran Pratishtha. It should also be noted that there is no confirmation that Virat Kohli attended the ceremony in Ayodhya on 22 January. Hence, we found this claim to be fake.
- Claim: Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22
- Claimed on: YouTube, X
- Fact Check: Fake

Introduction
Intricate and winding are the passageways of the modern digital age, a place where the reverberations of truth effortlessly blend, yet hauntingly contrast, with the echoes of falsehood. The latest thread in this fabric of misinformation is a claim that has scurried through the virtual windows of social media platforms, gaining a kind of traction that is as revelatory as it is alarming for our times. It is a narrative that speaks to the heart of India's cultural and religious fabric: the construction of the Ram Temple in Ayodhya, a project enshrined in the collective consciousness of a nation and steeped in historical significance.
The claim in question, a spectre of misinformation, suggests that the Ram Temple's construction has been covertly shifted 3 kilometres from its original, hallowed ground: the birthplace, as it were, of Lord Ram. This assertion, which spread through the echo chambers of social media, has been bolstered by a screenshot of Google Maps, a digital cartographer that has accidentally become a pawn in this game of truth and deception. The image purports to show the location of the Ram Mandir as distinct and distant from the site where the Babri Masjid once stood, a claim that went viral on social media and drew widespread public reaction.
The Viral Tempest
In the face of such a viral tempest, IndiaTV's fact-checking arm, IndiaTVFactCheck, has stepped into the fray, wielding the sword of veracity against the Goliath of falsehood. Their investigation into this viral claim was meticulous, a deep dive into the digital representations that have fueled this controversy. Upon examining the viral Google Maps screenshot, they noticed markings at two locations: one labelled as Shri Ram Janmabhoomi Temple and the other as Babri Masjid. The latter, upon closer inspection and with the aid of Google's satellite prowess, was revealed to be the Shri Sita-Ram Birla Temple, a place of worship that stands in quiet dignity, far removed from the contentious whispers of social media.
The truth, as it often does, lay buried beneath layers of user-generated content on Google Maps, where the ability to tag any location with a name has sometimes led to the dissemination of incorrect information. This can be corrected, of course, but not before it has woven itself into the fabric of public discourse. The fact-check by IndiaTV revealed that the location mentioned in the viral screenshot is, indeed, the Shri Sita-Ram Birla Temple and the Ram Temple is being constructed at its original, intended site.
This revelation is not merely a victory for truth over falsehood but also a testament to the resilience of facts in the face of a relentless onslaught of misinformation. It is a reminder that the digital realm, for all its wonders, is also a shadowy theatre where narratives are constructed and deconstructed with alarming ease. The very basis of all the fake narratives that spread around significant events, such as the consecration ceremony of the Ram Temple, is the manipulation of truth, the distortion of reality to serve nefarious ends of spreading misinformation.
Fake Narratives and Misinformation
Consider the elaborate fake narratives spun around the ceremony, where hours were spent on the internet building a web of deceit. Claims such as 'Mandir wahan nahin banaya gaya' (the temple is not being built at the site of the demolition) and reports of new Rs 500 notes issued for the Ram Mandir were among the pieces of misinformation that went viral on social media amid preparations for the consecration ceremony. These repetitive claims, albeit differently worded, were spread to further a single narrative, a phenomenon that a study published in Nature attributed to people treating peripheral cues as signals of truth, an effect that strengthens with repetition.
The misinformation incidents surrounding the Ram Temple in Ayodhya are a microcosm of the larger battle between truth and misinformation. The false claims circulating online assert that the ongoing construction is not taking place at the original Babri Masjid site but rather 3 kilometres away. This misinformation, shared widely on social media, has been debunked upon closer examination. The claim is based on a screenshot of Google Maps showing two locations: the construction site of the Shri Ram Janmabhoomi Temple and another spot labelled 'Babar Masjid permanently closed' situated 3 kilometres away. The assertion questions the legitimacy of demolishing the Babri Masjid if the temple is being built elsewhere. However, a thorough fact-check reveals the claim to be entirely unfounded.
Deep Scrutiny
Upon scrutiny, the screenshot indicates that the second location marked as 'Babar Masjid' is, in fact, the Sita-Ram Birla Temple in Ayodhya. This is verified by comparing the Google Maps satellite image with the actual structure of the Birla Temple. Notably, the viral screenshot misspells 'Babri Masjid' as 'Babar Masjid', casting doubt on its credibility. Satellite images from Google Earth Pro clearly depict the construction of a temple-like structure at the precise coordinates of the original Babri Masjid demolition site (26°47'43.74"N, 82°11'38.77"E). Comparing old and new satellite images further confirms that major construction activity began in 2011, aligning with the initiation of the Ram Temple construction.
Moreover, existing photographs of the Babri Masjid, though challenging to match precisely, share essential structural elements with the current construction site, reinforcing the location as the original site of the mosque. Hence, the viral claim that the Ram Temple is being constructed 3 kilometres away from the Babri Masjid site is indubitably false. Evidence from historical photographs, satellite imagery and Google Maps conclusively refutes this misinformation, attesting that the temple construction is indeed taking place at the same location as the original Babri Masjid.
Viral Misinformation: A false claim based on a misleading Google Maps screenshot suggests the Ram Temple construction in Ayodhya has been covertly shifted 3 kilometres away from its original Babri Masjid site.
Fact Check Revealed: IndiaTVFactCheck debunked the misinformation, confirming that the viral screenshot actually showed the Shri Sita-Ram Birla Temple, not the Babri Masjid site. The Ram Temple is indeed being constructed at its original, intended location, exposing the falsehood of the claim.
Conclusion
The case of the Ram Temple is a stark reminder of the power of misinformation and the significance of fact-checking in preserving the integrity of truth. It is a clarion call to question, and to uphold the integrity of facts in a world increasingly mired in the murky waters of falsehood. Widespread misinformation highlights the critical role of fact-checking in dispelling false narratives. It serves as a reminder of the ongoing battle between truth and misinformation in the digital age, emphasising the importance of upholding the integrity of facts for a more informed society.
References
- https://www.indiatvnews.com/fact-check/fact-check-is-ram-temple-being-built-3-km-away-from-the-birthplace-here-truth-behind-viral-claim-2024-01-19-912633
- https://www.thequint.com/news/webqoof/misinformation-spread-around-events-ayodhya-ram-mandir-g20-elections-bharat-jodo-yatra

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates that the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google's AI chatbot Gemini to a student's homework query: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
The swiftness of AI's rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision and robotics. Generative AI models such as GPT-3, GPT-4 and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI models sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information or contextually inappropriate responses.
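To make this concrete, one common safeguard is to check whether a generated answer is actually supported by the source material it was supposed to draw on before surfacing it to users. The snippet below is a minimal illustrative sketch in Python, not any product's actual pipeline; the word-overlap heuristic, the 0.5 threshold and the function names are all our own assumptions.

```python
import re


def content_words(text: str) -> set[str]:
    """Lowercase the text and keep only words longer than three characters."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}


def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's content words that appear somewhere in the sources."""
    answer_words = content_words(answer)
    if not answer_words:
        return 0.0
    source_words = set().union(*(content_words(s) for s in sources)) if sources else set()
    return len(answer_words & source_words) / len(answer_words)


def flag_possible_hallucination(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Flag answers whose content is poorly supported by the retrieved sources."""
    return grounding_score(answer, sources) < threshold


# Example: an answer introducing claims absent from the retrieved passage gets flagged.
sources = ["The Ram Temple is being constructed at the original Babri Masjid site in Ayodhya."]
print(flag_possible_hallucination("The temple was secretly moved three kilometres away.", sources))  # True
```

Real systems rely on far stronger checks, such as retrieval-augmented generation with citation verification, but even a crude filter like this illustrates why grounding outputs in verifiable sources matters.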
A significant source of hallucination in machine learning systems is bias in the input they receive. If an AI model is trained on biased or unrepresentative data, it may produce results that reflect those biases. These models are also vulnerable to adversarial attacks, in which bad actors manipulate a model's output by tweaking the input data in subtle ways.
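As a toy illustration of the adversarial point, the sketch below uses a deliberately simple nearest-centroid classifier on made-up 2-D data (both the data and the classifier are assumptions for illustration only) to show how a small, targeted nudge to an input near the decision boundary flips the prediction; real attacks on neural networks compute such perturbations from gradients, but the underlying idea is the same.

```python
# Toy nearest-centroid "model": class A clusters around (0, 0), class B around (4, 4).
CENTROIDS = {"A": (0.0, 0.0), "B": (4.0, 4.0)}


def predict(point: tuple[float, float]) -> str:
    """Return the label of the nearest class centroid (squared Euclidean distance)."""
    return min(
        CENTROIDS,
        key=lambda label: (point[0] - CENTROIDS[label][0]) ** 2
        + (point[1] - CENTROIDS[label][1]) ** 2,
    )


# A legitimate input sitting just on the class-A side of the decision boundary.
original = (1.9, 1.9)
# An adversarially nudged copy: a shift of 0.2 in each coordinate, easy to overlook.
perturbed = (2.1, 2.1)

print(predict(original))   # "A"
print(predict(perturbed))  # "B" -- the tiny perturbation crosses the boundary and flips the output
```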
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI's opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways: the EU's AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach; the U.S. emphasises ethical guidelines and industry-driven standards; and India's DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem in which AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI's capabilities and limitations; transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated material, such as misinformation, thereby fostering trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; lax oversight, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration among stakeholders such as governments, academia and the private sector is important: together they can establish guidelines, promote transparency and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity when framing regulatory policies.
By fostering ethical AI development while enabling innovation, we can create a future in which AI-driven development benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Executive Summary:
A widely circulated social media post claims that the Government of India has opened an account, named "Army Welfare Fund Battle Casualty", at Canara Bank to support the modernisation of the Indian Army and assist injured or martyred soldiers. According to the post, citizens can voluntarily contribute any amount starting from ₹1, with no upper limit, and the fund was launched on a suggestion by actor Akshay Kumar that the Prime Minister of India later acknowledged through Mann Ki Baat and social media. In fact, no such decision has been taken by the Cabinet, and no such initiative has been officially announced.

Claim:
A viral social media post claims that the Government of India has launched a new initiative aimed at modernizing the Indian Army and supporting battle casualties through public donations. According to the post, a special bank account has been created to enable citizens to contribute directly toward the procurement of arms and equipment for the armed forces.
It further states that this initiative was introduced following a Cabinet decision and was inspired by a suggestion from Bollywood actor Akshay Kumar, which was reportedly acknowledged by the Prime Minister during his Mann Ki Baat address.
The post encourages individuals to donate any amount starting from ₹1, with no upper limit, and estimates that widespread public participation could generate up to ₹36,000 crore annually to support the armed forces. It also lists two bank accounts—one at Canara Bank (Account No: 90552010165915) and another at State Bank of India (Account No: 40650628094)—allegedly designated for the "Armed Forces Battle Casualties Welfare Fund."
The statement said, “The government established a range of welfare schemes for soldiers killed or disabled while undertaking military operations in recent combat. In 2020, the government established the 'Armed Forces Battle Casualty Welfare Fund (AFBCWF)', which is used to provide immediate financial assistance to families of soldiers, sailors and airmen who lose their lives or sustain grievous injury as a result of active military service.”

We also found a similar post from the past, which can be seen here.
Fact Check:
The Press Information Bureau (PIB) has responded to the viral post, stating that it is misleading and that the Government has not issued any appeal inviting public donations for the modernisation of the Indian Army or for purchasing weapons. The only known official initiative of the Ministry of Defence is the "Armed Forces Battle Casualties Welfare Fund", which was set up to support the families of soldiers who are martyred or grievously disabled in the line of duty, not to buy military equipment.

In addition, the bank account details mentioned in the viral post are false, and donations submitted to those accounts have reportedly been dishonoured.
The other false claim is that actor Akshay Kumar is promoting or heading this initiative; there is no official record or announcement of him leading or sponsoring any such project. That said, in 2017 Akshay Kumar did encourage public contributions of one rupee per month to support the armed forces through a web portal called "Bharat Ke Veer", a platform developed in partnership with the Ministry of Home Affairs.


Citizens should rely only on official government sources and ignore such misleading messages circulating on social media platforms.
Conclusion:
The viral social media post suggesting that the Government of India has initiated a donation drive for the modernisation of the Indian Army and the purchase of weapons is misleading and inaccurate. According to the Press Information Bureau (PIB), no such initiative has been launched by the government, and the bank account details provided in the post are false, with reported cases of dishonoured transactions. The only legitimate initiative is the Armed Forces Battle Casualties Welfare Fund (AFBCWF), which provides financial assistance to the families of soldiers who are martyred or seriously injured in the line of duty. While actor Akshay Kumar played a key role in launching the Bharat Ke Veer portal in 2017 to support paramilitary personnel, he has no official connection to the viral claims.
- Claim: The government has launched a public donation drive to fund Army weapon purchases.
- Claimed On: Social Media
- Fact Check: False and Misleading