#FactCheck - A manipulated image showing Indian cricketer Virat Kohli allegedly watching Rahul Gandhi's media briefing on his mobile phone has been widely shared online.
Executive Summary:
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi. The incident is claimed to have happened on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image and made it viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media users shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.

Fact Check:
After we came across the viral image posted by social media users, we ran a reverse image search. It led us to the original image, posted by an Instagram account named virat__.forever_ on March 21.

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image surfaced after the original image, which was posted on March 21.

Therefore, it is apparent that the viral image was created by altering the original image shared on March 21.
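Beyond reverse image search, edits like this can also be surfaced programmatically. The sketch below is purely illustrative (it is not the tooling used in this fact-check): it implements a tiny "average hash" comparison, a common perceptual-hashing idea, over dependency-free 8x8 grayscale grids standing in for real images. An identical copy hashes to the same value, while an edited patch (like a pasted-on phone screen) produces differing bits.

```python
# Illustrative sketch, not the fact-checkers' actual workflow: an
# "average hash" comparison that flags edited regions of an image.
# Images are modeled as 8x8 grids of 0-255 ints to stay dependency-free.

def average_hash(pixels):
    """Return a 64-bit perceptual hash: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (0 = likely identical)."""
    return bin(h1 ^ h2).count("1")

# A plain "original" frame and a copy with one region overwritten,
# standing in for a pasted-in phone screen.
original = [[100] * 8 for _ in range(8)]
tampered = [row[:] for row in original]
for r in range(2, 5):
    for c in range(2, 5):
        tampered[r][c] = 250  # the "edited" patch

identical = hamming_distance(average_hash(original), average_hash(original))
altered = hamming_distance(average_hash(original), average_hash(tampered))
print(identical, altered)  # prints: 0 9
```

Real fact-checking pipelines use libraries such as ImageHash over full-resolution images, but the principle is the same: a small hamming distance suggests the same underlying photo, while localised differences point to manipulation.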
Conclusion:
To sum up, the viral image is an altered version of the original. The original image's caption shows cricketer Virat Kohli relaxing before a Jio advertisement shoot, not watching any politician's interview. This shows that in the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify whether something is real before sharing it, to avoid spreading false stories.
Related Blogs

Introduction
In an age where the lines between truth and fiction blur with alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deepfakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of the digital duplicitous tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but no less damaging, can be likened to a digital 'slip of the tongue', an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the principle that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening societal schisms, a significant portion of this digital deceit traffics in hate speech. Its content is laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence vastly disproportionate to that seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deep fakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deep fakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.

Introduction
Google’s search engine is widely known for its ability to tailor its search results based on user activity, enhancing the relevance of search outcomes. Recently, Google introduced the ‘Try Without Personalisation’ feature. This feature allows users to view results independent of their prior activity. This change marks a significant shift in platform experiences, offering users more control over their search experience while addressing privacy concerns.
However, even in this non-personalised mode, certain contextual factors, including location, language, and device type, continue to influence results. This essentially provides the search with a baseline level of relevance. The feature carries significant policy implications, particularly in the areas of privacy, consumer rights, and market competition.
Understanding the Feature
When users engage with this non-personalised search option, Google no longer shows them individual results that depend on personalisation and instead provides neutral search results. Essentially, this feature bypasses the user's stored data to return unbiased results.
This feature allows the following changes:
- Disables the user’s ability to find past searches in Autofill/Autocomplete.
- Does not pause or delete stored activity in a user's Google account; users can still pause or delete stored activity through their data and privacy controls.
- Does not delete or disable app or website preferences such as language or search settings; these remain unaffected.
- Does not disable or delete the material that users have saved.
- When a user is signed in, they can turn off personalisation by clicking the search option at the end of the results page.

These changes in functionality have significant implications for privacy, competition, and user trust.
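The behaviour described above can be sketched as a simple signal-selection step. The code below is purely illustrative (it is in no way Google's implementation, and all names are assumptions): a non-personalised mode keeps contextual signals such as location, language, and device type, while dropping account-level history.

```python
# Illustrative model only, not Google's actual implementation: selecting
# which signals feed into ranking for a hypothetical search request.

def build_search_context(request, personalised=True):
    """Return the signals a hypothetical ranker would consider."""
    # Contextual signals survive even without personalisation, giving the
    # results a baseline level of relevance.
    context = {
        "location": request["location"],
        "language": request["language"],
        "device": request["device"],
    }
    if personalised:
        # Account-level signals are consulted only in personalised mode.
        context["history"] = request.get("history", [])
    return context

request = {
    "location": "IN",
    "language": "en",
    "device": "mobile",
    "history": ["ipl scores", "virat kohli"],
}

# Non-personalised mode: contextual signals only, no "history" key.
print(build_search_context(request, personalised=False))
```

The design point the feature embodies is exactly this separation: personal data is opt-out at query time, while coarse context remains so the results stay usable.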
Policy Implications: An Analysis
This feature aligns with global privacy frameworks such as the GDPR in the EU and the DPDP Act in India. By adhering to principles like data minimisation and user consent, it offers users control over their data and the choice to enable or disable personalisation, thereby enhancing user autonomy and trust.
However, there is a trade-off between user expectations for relevance and the impartiality of non-personalised results. Additionally, the introduction of such features may align with emerging regulations on data usage, transparency, and consent. Policymakers play a crucial role in encouraging innovations like these while ensuring they safeguard user rights and maintain a competitive market.
Conclusion and Future Outlook
Google's 'Try Without Personalisation' feature represents a pivotal moment for innovation by balancing user privacy with search functionality. By aligning with global privacy frameworks such as the GDPR and the DPDP Act, it empowers users to control their data while navigating the complex interplay between relevance and neutrality. However, its success hinges on overcoming technical hurdles, fostering user understanding, and addressing competitive and regulatory scrutiny. As digital platforms increasingly prioritise transparency, such features could redefine user expectations and regulatory standards in the evolving tech ecosystem.
References

Amid the popularity of OpenAI’s ChatGPT and Google’s announcement of introducing its own Artificial Intelligence chatbot called Bard, there has been much discussion over how such tools can impact India at a time when the country is aiming for an AI revolution.
During the Budget Session, Finance Minister Nirmala Sitharaman talked about AI, while her colleague, Minister of State (MoS) for Electronics and Information Technology Rajeev Chandrasekhar discussed it at the India Stack Developer Conference.
While Sitharaman stated that the government will establish three centres of excellence in AI in the country, Chandrasekhar at the event mentioned that India Stack, which includes digital solutions like Aadhaar, Digilocker and others, will become more sophisticated over time with the inclusion of AI.
As AI chatbots become the buzzword, News18 discusses with experts how such tech tools will impact India.
AI IN INDIA
Many experts believe that in a country like India, which is extremely diverse in nature and has a sizeable population, the introduction of technologies and their right adoption can bring a massive digital revolution.
For example, Manoj Gupta, Cofounder of Plotch.ai, a full-stack AI-enabled SaaS product, told News18 that Bard is still experimental and not open to everyone to use while ChatGPT is available and can be used to build applications on top of it.
He said: “Conversational chatbots are interesting since they have the potential to automate customer support and assisted buying in e-commerce. Even simple banking applications can be built that can use ChatGPT AI models to answer queries like bank balance, service requests etc.”
According to him, such tools could be extremely useful for people who are currently excluded from the digital economy due to language barriers.
Ashwini Vaishnaw, Union Minister for Communications, Electronics & IT, has also talked about using such tools to reduce communication issues. At the World Economic Forum in Davos, he said: “We integrated our Bhashini language AI tool, which translates from one Indian language to another Indian language in real-time, spoken and text everything. We integrated that with ChatGPT and are seeing very good results.”
‘DOUBLE-EDGED SWORD’
Sundar Balasubramanian, Managing Director, India & SAARC, at Check Point Software, told News18 that generative AI like ChatGPT is a “double-edged sword”.
According to him, used in the right way, it can help developers write and fix code quicker, enable better chat services for companies, or even be a replacement for search engines, revolutionising the way people search for information.
“On the flip side, hackers are also leveraging ChatGPT to accelerate their bad acts and we have already seen examples of such exploitations. ChatGPT has lowered the bar for novice hackers to enter the field as they are able to learn quicker and hack better through asking the AI tool for answers,” he added.
Balasubramanian also stated that CPR has seen the quality of phishing emails improve tremendously over the past 3 months, making it increasingly difficult to discern between legitimate sources and a targeted phishing scam.
“Despite the emergence of generative AI impacting cybercrime, Check Point is continually reminding organisations and individuals of the importance of staying vigilant. As ChatGPT and Codex become more mature, they can affect the threat landscape, for both good and bad,” he added.
While the real-life applications of ChatGPT include several things ranging from language translation to explaining tricky math problems, Balasubramanian said it can also be used for making the work of cyber researchers and developers more efficient.
“Generative AI or tools like ChatGPT can be used to detect potential threats by analysing large amounts of data and identifying patterns that may indicate malicious activity. This can help enterprises quickly identify and respond to a potential threat before it escalates to something more,” he added.
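The pattern-detection idea Balasubramanian describes can be illustrated with a minimal sketch. This is an assumption-laden toy example, not a Check Point product or any real enterprise tooling: it flags event sources whose request volume sits far above the typical rate, a simple statistical signal of potentially malicious activity such as brute-force attempts.

```python
# Toy illustration of pattern-based threat detection, not real security
# tooling: flag source IPs whose event count is unusually high relative
# to the rest of the traffic (a z-score outlier test).

from collections import Counter
from statistics import mean, pstdev

def flag_suspicious_sources(events, z_threshold=1.5):
    """Return sources whose event count exceeds z_threshold std devs above the mean."""
    counts = Counter(e["src"] for e in events)
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # all sources behave identically; nothing stands out
    return [src for src, n in counts.items() if (n - mu) / sigma > z_threshold]

# Ordinary traffic from three hosts, plus one abnormally noisy host
# (all addresses are documentation/private ranges, chosen arbitrarily).
events = (
    [{"src": "10.0.0.1"}] * 5
    + [{"src": "10.0.0.2"}] * 6
    + [{"src": "10.0.0.3"}] * 4
    + [{"src": "203.0.113.9"}] * 60  # burst typical of brute-force attempts
)
print(flag_suspicious_sources(events))  # prints: ['203.0.113.9']
```

Production systems use far richer features and models, but the workflow is the same: aggregate large volumes of events, characterise "normal", and surface deviations for analysts before they escalate.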
POSITIVE FACTORS
Major Vineet Kumar, Founder and Global President of CyberPeace Foundation, believes that the deployment of AI chatbots has proven to be highly beneficial in India, where a booming economy and increasing demand for efficient customer service have led to a surge in their use. According to him, both ChatGPT and Bard have the potential to bring significant positive change to various industries and individuals in India.
“ChatGPT has already made an impact by revolutionising customer service, providing instant and accurate support, and reducing wait time. It has automated tedious and complicated tasks for businesses and educational institutions, freeing up valuable time for more significant activities. In the education sector, ChatGPT has also improved learning experiences by providing quick and reliable information to students and educators,” he added.
He also said there are several possible positive impacts that the AI chatbots, ChatGPT and Bard, could have in India and these include improved customer experience, increased productivity, better access to information, improved healthcare, improved access to education and better financial services.
Reference Link : https://www.news18.com/news/explainers/confused-about-chatgpt-bard-experts-tell-news18-how-openai-googles-ai-chatbots-may-impact-india-7026277.html