#FactCheck - AI-Generated Image Falsely Shows SRH Team Seeking Blessings
Executive Summary
A post is rapidly going viral on social media claiming to show Sunrisers Hyderabad (SRH) captain Ishan Kishan, CEO Kavya Maran, and the team seeking blessings in front of a portrait of Jesus Christ at the Rajiv Gandhi International Cricket Stadium before a match. The image is being shared as a genuine pre-match moment. However, research by CyberPeace found that the viral image is not real but was generated using artificial intelligence (AI). There are no credible media reports or official updates from Sunrisers Hyderabad confirming any such pre-match activity. Further analysis using multiple AI detection tools also indicated that the image is likely synthetic. Therefore, the claim made in the viral post is false.
Claim
A Facebook user shared the image with the caption: “Preparation starts from within. Before taking the field at the Rajiv Gandhi Stadium, Ishan Kishan, Abhishek Sharma, and the SRH squad seek blessings. With Kavya Maran and the team united in faith, the Orange Army is ready for battle!”
- https://archive.ph/wip/dtbZ0
- https://www.facebook.com/13CricketNews/posts/preparation-starts-from-within-before-taking-the-field-at-the-rajiv-gandhi-stadi/1790225659038036/

Fact Check
A close inspection of the viral image revealed several inconsistencies. A cooler box in the image bears a sticker of Mumbai Indians, even though Mumbai Indians and Sunrisers Hyderabad had not played each other in IPL 2026 at the time implied by the claim. Their scheduled match is set for April 29, 2026, at Wankhede Stadium, not at the Hyderabad venue shown in the image.
- https://www.iplt20.com/teams/sunrisers-hyderabad/schedule

Additionally, the image incorrectly displays Dream11 as the title sponsor for SRH, whereas Shree Cement is the official title sponsor for the IPL 2026 season.

To further verify authenticity, the image was analysed using AI detection tools. Hive Moderation assigned it a 99.9% probability of being AI-generated, strongly indicating that it is not genuine.
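The verification step above can be sketched programmatically. The snippet below is a minimal, illustrative sketch only: it assumes each detection tool reports a probability (0.0–1.0) that an image is AI-generated, as Hive Moderation's 99.9% score does, and combines the scores into a cautious verdict. The tool names and the `classify_image` helper are hypothetical, not real APIs.

```python
def classify_image(scores: dict, threshold: float = 0.9) -> str:
    """Combine per-tool AI-generation probabilities into a cautious verdict.

    `scores` maps a detection tool's name to the probability (0.0-1.0) it
    assigned to the image being AI-generated. The tool names used here are
    placeholders; real services expose their own APIs and score formats.
    """
    if not scores:
        return "insufficient evidence"
    # Take the highest probability reported by any tool: a single strong
    # positive signal is usually enough to warrant flagging for review.
    peak = max(scores.values())
    if peak >= threshold:
        return "likely AI-generated"
    if peak >= 0.5:
        return "inconclusive - needs manual review"
    return "likely authentic"

# The viral SRH image: Hive Moderation reported a 99.9% probability.
print(classify_image({"hive_moderation": 0.999}))  # likely AI-generated
```

In practice such scores only supplement, never replace, the manual checks described above (sponsor logos, fixture schedules, official team updates), since detection tools can misfire on heavily compressed or edited images.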

Conclusion
The viral claim is false. The image showing Sunrisers Hyderabad players and their CEO praying before a match is AI-generated and does not depict a real event. It has been circulated with a misleading narrative and lacks any factual basis.

Introduction
As esports flourish in India, mobile gaming platforms and apps have contributed massively to this boom. The wave of online mobile gaming has brought new recognition to esports. With the Sports Ministry being very proactive about esports and e-athletes, it is pertinent to ensure that we do not compromise our cyber security for the sake of these games. When we talk about online mobile gaming, the most common names that come to mind are PUBG and BGMI. In welcome news for Indian gamers, BGMI is set to be relaunched in India after approval from the Ministry of Electronics and Information Technology.
Why was BGMI banned?
The Govt banned Battlegrounds Mobile India on the grounds that it was a Chinese application and that all its data was hosted in China itself. This caused a cascade of compliance and user-safety issues, as the data was stored outside India. Since 2020, the Indian Govt has been proactive in banning Chinese applications that might have an adverse effect on national security and Indian citizens. More than 200 applications have been banned by the Govt, most of them because their data hubs were in China. Cross-border data flow has been a key issue in geopolitics: whoever hosts the data virtually owns it as well, and it was under this potential threat that all apps hosting their data in China were banned.
Why is BGMI coming back?
BGMI was banned for not hosting data in India, and since the ban, the Krafton Inc.-owned game has been working in India to set up data banks and servers so that Indian players have a separate gaming server. These moves will lead to a safer gaming ecosystem and result in better adherence to the laws and policies of the land. The developers have not declared a relaunch date yet, but the game is expected to be available for download for iOS and Android users in the coming days. The game will be back on app stores, as a letter from the Ministry of Electronics and Information Technology has been issued stating that the game be allowed and made available for download on the respective app stores.
Grounds for BGMI
BGMI has to ensure that it complies with all the laws, policies and guidelines in India and has to demonstrate this to the Ministry to get an extension on its approval. The game has been permitted for only 90 days (3 months). Hon’ble MoS MeitY Rajeev Chandrashekhar stated in a tweet: “This is a 3-month trial approval of #BGMI after it has complied with issues of server locations and data security etc. We will keep a close watch on other issues of User harm, Addiction etc., in the next 3 months before a final decision is taken”. This clearly shows the seriousness of the bans on Chinese apps. The Ministry and the Govt will not play soft now; it is all about compliance and safeguarding users’ data.
Way Forward
This move will play a significant role in the future, not only for gaming companies but also for other online industries, in ensuring compliance. It will act as a precedent on the issue of cross-border data flow and the advantages of data localisation, and it will go a long way in advocacy for the betterment of the Indian cyber ecosystem. MeitY alone cannot safeguard the space completely; it is a shared responsibility of the Govt, industry and netizens.
Conclusion
The advent of online mobile gaming has taken the nation by storm, and thus being safe and secure in this ecosystem is paramount. The provisional permission for BGMI shows the stance of the Govt and how it follows a no-tolerance policy for noncompliance with laws. The latest policies and bills, like the Digital India Act and the Digital Personal Data Protection Act, will go a long way in securing the interests and rights of Indian netizens and will create a blanket of safety and prevention against issues and threats in the future.

As the Ministry of Electronics and Information Technology (MeitY) continues to invite proposals from academicians, institutions, and industry experts to develop frameworks and tools for AI-related issues through the IndiaAI Mission, it has also funded two AI projects dealing with deepfakes, as per a status report submitted on 21st November 2024. The Delhi High Court also ordered the nomination of the members of a nine-member committee constituted by MeitY on 20th November 2024 (to address deepfake issues) and asked for a report within three months.
Funded AI projects:
The two projects funded by MeitY are:
- Fake Speech Detection Using Deep Learning Framework- The project was initiated in December 2021 and focuses on detecting fake speech by creating a web interface for the detection software; it also includes investing in a speech verification platform specifically designed for testing fake speech detection systems. It is set to end in December 2024.
- Design and Development of Software for Detecting Deepfake Videos and Images- This project was funded by MeitY from January 2022 to March 2024. It involved the Centre for Development of Advanced Computing (C-DAC), Kolkata and Hyderabad, which developed a prototype tool capable of detecting deepfakes. Named FakeCheck, it is designed as a desktop application and a web portal that can detect deepfakes without the use of the internet. Reports suggest that it is currently in the testing phase and awaiting feedback.
Apart from these projects, MeitY has released its expression of interest for proposals in four other areas, which include:
- Tools that detect AI-generated content along with traceable markers,
- Tools that develop an ethical AI framework for AI systems to be transparent and respect human values,
- An AI risk management and assessment tool that analyses AI-specific risks and threat scenarios in public AI use cases, and
- Tools that can assess the resilience of AI in stressful situations such as cyberattacks, national disasters, operational failures, etc.
CyberPeace Outlook
Deepfakes pose significant challenges to critical sectors in India, such as healthcare and education, where manipulated content can lead to crimes like digital impersonation, misinformation, and fraud. The rapid advancement of AI, with regulation unable to keep pace, continues to fuel such threats. Recognising these risks, MeitY’s IndiaAI Mission, which promotes investments and encourages educational institutions to undertake AI projects that strengthen the country's digital infrastructure, comes in as a guiding light. A part of the mission focuses on developing indigenous solutions, including tools for assessment and regulation, to address AI-related threats effectively. While India is making strides in this direction, the global AI landscape is evolving rapidly, with many nations advancing regulations to mitigate AI-driven challenges. Consistent steps, including inviting proposals and funding projects, provide the much-needed impetus for the mission to be realised.
References
- https://economictimes.indiatimes.com/tech/technology/meity-dot-at-work-on-projects-for-fair-ai-development/articleshow/115777713.cms?from=mdr
- https://www.hindustantimes.com/india-news/meity-seeks-tools-to-detect-deepfakes-label-ai-generated-content-101734410291642.html
- https://www.msn.com/en-in/news/India/meity-funds-two-ai-projects-to-detect-fake-media-forms-committee-on-deepfakes/ar-AA1vMAlJ
- https://indiaai.gov.in/
Introduction
Conversations surrounding the scourge of misinformation online typically focus on the risks to social order, political stability, economic safety and personal security. An oft-overlooked aspect of this phenomenon is the fact that it also takes a very real emotional and mental toll on people. Even as we grapple with the big picture questions about financial fraud or political rumors or inaccurate medical information online, we must also appreciate the fact that being exposed to misinformation and becoming aware of one’s own vulnerability are both significant sources of mental stress in today’s digital ecosystem.
Inaccurate information causes confusion and worry, which has negative consequences for mental health. Misinformation may also impair people's sense of well-being by undermining their trust in institutions, authority figures, and their own judgment. The constant bombardment of misinformation can lead to information overload, wherein people are unable to discriminate between legitimate sources and misleading content, resulting in mental exhaustion and a sense of being overwhelmed by the sheer volume of information available. Vulnerable groups such as children, the elderly, and those with pre-existing health conditions are more susceptible to the negative effects of misinformation.
How Does Misinformation Endanger Mental Health?
Misinformation on social media platforms is a matter of public health because it has the potential to confuse people, lead to poor decision-making and result in cognitive dissonance, anxiety and unwanted behavioural changes.
Unconstrained misinformation can also lead to social disorder and the prevalence of negative emotions amongst larger numbers, ultimately causing a huge impact on society. Therefore, understanding the spread and diffusion characteristics of misinformation on Internet platforms is crucial.
The spread of misinformation can elicit different emotions from the public, and these emotions shift as the misinformation spreads. Factors such as user engagement, the number of comments, and the duration of discussion all influence how emotions around misinformation change. Active users tend to comment more, engage longer in discussions, and display more dominant negative emotions when triggered by misinformation. Understanding the pattern in which misinformation-triggered emotions evolve is also important, given the public’s emotional fluctuations under its influence; social media often magnifies the impact of emotions and makes them spread rapidly through social networks. For example, the emotional charge of misinformation increases around sensitive topics such as political elections, viral trending topics, health-related information, communal and local news, and natural disasters. Active misinformation on the Internet not only affects the public's psychology, mental health and behaviour, but also impacts the stability of social order and the maintenance of social security.
Prebunking and Debunking To Build Mental Guards Against Misinformation
As the spread of misinformation and disinformation rises, so do the techniques aimed at tackling it. Prebunking, or attitudinal inoculation, is a technique for training individuals to recognise and resist deceptive communications before they can take root. Prebunking is a psychological method for mitigating the effects of misinformation, strengthening resilience and creating cognitive defences against future misinformation. Debunking provides individuals with accurate information to counter false claims and myths, correcting misconceptions and preventing the spread of misinformation. By presenting evidence-based refutations, debunking helps individuals distinguish fact from fiction.
What do health experts say about online misinformation?
“In the 21st century, mental health is crucial due to the overwhelming amount of information available online. The COVID-19 pandemic was a prime example of this, with misinformation spreading online, leading to increased anxiety, panic buying, fear of leaving home, and mistrust in health measures. To protect our mental health, it is essential to cultivate a discerning mindset, question sources, and verify information before consumption. Fostering a supportive community that encourages open dialogue and fact-checking can help navigate the digital information landscape with confidence and emotional support. Prioritising self-care routines, mindfulness practices, and seeking professional guidance are also crucial for safeguarding mental health in the digital information era.”
~ In conversation with CyberPeace, Dubai-based psychologist Aishwarya Menon (BA in Psychology and Criminology, University of Western Ontario, London; MA in Mental Health and Addictions, Humber College/University of Guelph, Toronto).
CyberPeace Policy Recommendations:
1) Countering misinformation is everyone's shared responsibility. To mitigate the negative effects of infodemics online, we must look at developing strong legal policies, creating and promoting awareness campaigns, relying on authenticated content on mass media, and increasing people's digital literacy.
2) Expert organisations that actively verify information through various strategies, including prebunking and debunking efforts, are among those best placed to refute misinformation and direct users to evidence-based information sources. It is recommended that user-facing countermeasures on platforms be strengthened with evidence-based data and accurate information.
3) The role of social media platforms is crucial in the misinformation crisis; hence it is recommended that they actively counter the production of misinformation on their platforms. Local, national, and international efforts, along with additional research, are required to implement robust misinformation counterstrategies.
4) Netizens are encouraged to follow official sources to check the reliability of any news or information. They must recognise red flags such as questionable facts, poorly written text, surprising or upsetting news, fake social media accounts, and fake websites designed to look like legitimate ones. Netizens are also encouraged to develop the cognitive skills to discern fact from fiction, and to approach information with a healthy dose of scepticism and curiosity.
Final Words:
As misinformation incidents on various subjects continue to rise, protecting mental health is crucial. Safeguarding our minds requires cognitive skills, media literacy, and verifying information from trusted sources, as well as prioritising mental health through self-care practices and staying connected with supportive, authenticated networks. Promoting prebunking and debunking initiatives is also necessary. Netizens can protect themselves against the negative effects of misinformation and cultivate a resilient mindset in the digital information age.
References:
- https://www.hindawi.com/journals/scn/2021/7999760/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/