#FactCheck-A manipulated image showing Indian cricketer Virat Kohli allegedly watching Rahul Gandhi's media briefing on his mobile phone has been widely shared online.
Executive Summary:
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi. The incident is claimed to have happened on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image and made it go viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media handles shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.




Fact Check:
After coming across the viral image posted by social media users, we ran a reverse image search, which led us to the original image posted on 21 March by an Instagram account named virat__.forever_.

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image was published after the original image, which was posted on March 21.

Therefore, it is apparent that the viral image was created by altering the original image shared on March 21.
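For readers who want to try a similar comparison themselves, the sketch below shows one simple way to check whether a suspect image is a near-duplicate of a candidate original, using perceptual hashing. This is only a minimal illustration, not the reverse-image-search workflow described above; it assumes the third-party Pillow and imagehash packages are installed, and the file names viral.jpg and original.jpg are placeholders.

```python
# Minimal sketch: compare two images with a perceptual hash to see whether
# one is likely a (possibly edited) copy of the other.
# Assumes `pip install pillow imagehash`; the file names are placeholders.
from PIL import Image
import imagehash


def hamming_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the perceptual hashes of two images.

    Small distances (roughly 0-10 for the default 64-bit hash) suggest the
    images share the same underlying photograph, even if one has been
    cropped or had content overlaid on it.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash defines subtraction as Hamming distance


if __name__ == "__main__":
    distance = hamming_distance("viral.jpg", "original.jpg")
    print(f"Hamming distance: {distance}")
    if distance <= 10:
        print("The images are very likely derived from the same photograph.")
    else:
        print("The images appear to be different photographs.")
```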
Conclusion:
To sum up, the viral image is an altered version of the original. The original image's caption says that cricketer Virat Kohli was relaxing before a Jio advertisement shoot, not watching any politician's interview. This shows that in the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify whether something is real before sharing it, to avoid spreading false stories.
Related Blogs
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to differentiate between genuine, altered, and fake content, as these AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake-generated videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations most affected by misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that progressively refine the fake content: a generator is built and trained to produce the desired output, and the process is repeated many times, allowing the generator to improve the content until it looks realistic and nearly flawless. Deepfake videos are created using a few specific approaches, some of which are listed below (a minimal sketch of the underlying training loop follows the list):
- Lip syncing: This is the most common technique used in deepfakes. A voice recording is matched to the video so that the person appearing in it seems to say something other than what they originally said.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person's voice based on their vocal patterns, and the output is refined until the desired result is generated.
- Deepfakes have become so serious that the technology could be used by bad actors or cyber-terrorist squads to advance their geopolitical agendas. Looking at the present situation, the number of cases has doubled in the past few years, targeting children, women and popular faces.
- Greater risk: Deepfake cases have risen sharply in the last few years. According to a survey, by the end of 2022, 96% of deepfake cases targeted women and children.
- Every 60 seconds, a deepfake pornographic video is created. Producing one is now quicker and more affordable than ever: it takes less than 25 minutes and requires just one clean image of a face.
- The connection to deepfakes is that people can become targets of "revenge porn" even when the publisher holds no sexually explicit photographs or films of the victim. Such material can be fabricated from any number of ordinary pictures collected from the internet. This means that almost everyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
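To make the adversarial training idea above concrete, here is a deliberately minimal, illustrative sketch of a GAN training loop in PyTorch. It learns to mimic a simple one-dimensional Gaussian distribution rather than faces or voices, and every model size, learning rate and step count is an arbitrary assumption chosen only to show the generator-versus-discriminator loop described above, not a recipe for producing deepfakes.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic a
# 1-D Gaussian distribution while a discriminator tries to tell real
# samples from generated ones. All hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 16), nn.ReLU(),
    nn.Linear(16, 1),                      # outputs a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),        # probability that the input is real
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "real" data drawn from N(5, 2)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from generated samples.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the real mean (~5).
print("mean of generated samples:",
      generator(torch.randn(1000, latent_dim)).mean().item())
```

Repeating this adversarial loop is what gradually makes the generator's output harder to distinguish from real data, which is the same principle that, at far larger scale, makes deepfake audio and video look convincing.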
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have three security implications. At the international level, strategic deepfakes have the potential to destroy a precarious peace. At the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern. At the personal level, deepfakes can be weaponised against individuals: women suffer disproportionately from exposure to fabricated sexually explicit content compared to men, and they are more frequently threatened with it.
Policy Consideration
Given the rising number of deepfake cases targeting women and children, policymakers need to keep in mind that deepfakes are also used for a variety of legitimate purposes, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer. Deepfake generation is an advanced technology, and misuse of that technology is a crime.
What are the existing rules to combat deepfakes?
It's worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these rules can result in up to three years in jail and a Rs 1 lakh fine. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both creators and users, educating individuals about content limits, and providing reliable information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate content. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring that technology is used responsibly.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/
Introduction
With the advent of the internet, the world revealed the promise of boundless connection and the ability to bridge vast distances with a single click. However, as we wade through the complex layers of the digital age, we find ourselves facing a paradoxical realm where anonymity offers both liberation and a potential for unforeseen dangers. Omegle, a chat and video messaging platform, epitomizes this modern conundrum. Launched over a decade ago in 2009, it has burgeoned into a popular avenue for digital interaction, especially amidst the heightened need for human connection spurred by the COVID-19 pandemic's social distancing requirements. Yet, this seemingly benign tool of camaraderie, tragically, doubles as a contemporary incarnation of Pandora's box, unleashing untold risks upon the online privacy and security landscape. Omegle has now shut down its operations permanently after 14 years of service.
The Rise of Omegle
The foundations of this nebulous virtual dominion can be traced back to the very architecture of Omegle. Introduced to the world as a simple, anonymous chat service, Omegle has since evolved, encapsulating the essence of unpredictable human interaction. Users enter this digital arena, often with the innocent desire to alleviate the pangs of isolation or simply to satiate curiosity; yet they remain blissfully unaware of the potential cybersecurity maelstrom that awaits them.
As we commence a thorough inquiry into the psyche of Omegle's vast user base, we observe a digital diaspora with staggering figures. The platform, in May 2022, counted 51.7 million unique visitors, a testament to its sprawling reach across the globe. Delve a bit deeper, and you will uncover that approximately 29.89% of these digital nomads originate from the United States. Others, in varying percentages, flock from India, the Philippines, the United Kingdom, and Germany, revealing a vast, intricate mosaic of international engagement.
Such statistics beguile the uninformed observer with the lie of demographic diversity. Yet we must proceed with caution, for while the platform boasts an impressive 63.91% male patronage, we cannot overlook the notable surge in female participation, which has climbed to 36.09% during the pandemic era. More alarming still is the revelation, borne out of a BBC investigation in February 2021, that children as young as seven have trespassed into Omegle's adult section, which is purportedly guarded by a minimum age limit of thirteen. How, we must ask, has underage presence burgeoned on this platform? A sobering answer points towards the platform's inadvertent marketing on TikTok, where youthful influencers, with abandon, promote their Omegle exploits under the #omegle hashtag.
The Omegle Allure
Omegle's allure is further compounded by its array of chat opportunities. It flaunts an adult section awash with explicit content, a moderated chat section that, despite the platform's own admissions, remains imperfectly patrolled, and an unmoderated section, its entry pasted with forewarnings of an 18+ audience. Beyond these lies the college chat option, a seemingly exclusive territory that only admits individuals armed with a verified '.edu' email address.
The effervescent charm of Omegle's interface, however, belies its underlying treacheries. Herein lies a digital wilderness where online predators and nefarious entities prowl, emboldened by the absence of requisite registration protocols. No email address, no unique identifier: nothing to anchor any notion of accountability or safeguarding. Within this unchecked reality, the young and unwary stand vulnerable, easy prey for exploitation.
Threat to Users
Venture even further into Omegle's data fiefdom, and the spectre of compromise looms larger. Users, particularly the youth, risk exposure to unsuitable content, and their naivety might lead to the inadvertent divulgence of personal information. Skulking behind the facade of connection, opportunities abound for coercion, blackmail, and stalking, perils rendered more potent because every video exchange and text can be captured and recorded by an unseen adversary. The platform acts as a quasi-familiar confidante, all the while harvesting chat logs, cookies, IP addresses, and even sensory data which, instead of being ephemeral, endure within Omegle's databases, readily handed to law enforcement and partnered entities under the guise of due diligence.
How to Combat the Threat
In mitigating these online gorgons, a multi-faceted approach is necessary. To thwart incursions into their digital footprint, adults seeking the thrills of Omegle's roulette would do well to cloak their activities with a Virtual Private Network (VPN), diligently pore over the privacy policy, deploy robust cybersecurity tools, and maintain iron-clad reticence on personal disclosures. For children, the recommendation gravitates towards outright avoidance; a constellation of parental control mechanisms awaits the vigilant guardian, ready to shield their progeny from the internet's darker alcoves.
Conclusion
In the final analysis, Omegle emerges as a microcosm of the greater web—a vast, paradoxical construct proffering solace and sociability, yet riddled with malevolent traps for the uninformed. As digital denizens, our traverse through this interconnected cosmos necessitates a relentless guarding of our private spheres and the sober acknowledgement that amidst the keystrokes and clicks, we must tread with caution lest we unseal the perils of this digital Pandora's box.

Introduction
Misinformation is rampant all over the world and is impacting people at large. In 2023, UNESCO commissioned a survey on the impact of fake news, conducted by IPSOS across 16 countries that are due to hold national elections in 2024, together home to 2.5 billion voters. The survey showed how pressing the need for effective regulation has become: 85% of respondents are apprehensive about the repercussions of online disinformation and misinformation. In light of these worries, UNESCO has introduced an action plan to regulate social media platforms, which have become major sources of misinformation and hate speech online. The plan is supported by the worldwide opinion survey, which highlights the urgent need for strong action, and it outlines the fundamental principles that must be respected and the concrete measures to be implemented by all stakeholders involved, i.e., governments, regulators, civil society and the platforms themselves.
The Key Areas in Focus of the Action Plan
The action plan focuses on protecting freedom of expression, while also embedding access to information and other human rights in digital platform governance. It works on the basic premise that the impact on human rights becomes the compass for all decision-making, at every stage and by every stakeholder. Independent regulators are to work in close coordination as part of a wider network, to prevent digital companies from taking advantage of disparities between national regulations, and content moderation is to be made feasible and effective at the required scale, in all regions and all languages.
The algorithms of these online platforms, particularly social media platforms, are too often geared towards maximising engagement rather than the reliability of information. Platforms are required to take more initiative in educating and training users to be critical thinkers rather than passive consumers of information. Regulators and platforms are also in a position to take strong measures during particularly sensitive periods, ranging from elections to crises, when information overload is at its worst.
Key Principles of the Action Plan
- Human Rights Due Diligence: Platforms are required to assess their impact on human rights, including gender and cultural dimensions, and to implement risk mitigation measures. This would ensure that the platforms are responsible for educating users about their rights.
- Adherence to International Human Rights Standards: Platforms must align their design, content moderation, and curation with international human rights standards. This includes ensuring non-discrimination, supporting cultural diversity, and protecting human moderators.
- Transparency and Openness: Platforms are expected to operate transparently, with clear, understandable, and auditable policies. This includes being open about the tools and algorithms used for content moderation and the results they produce.
- User Access to Information: Platforms should provide accessible information that enables users to make informed decisions.
- Accountability: Platforms must be accountable to their stakeholders, including users and the public, ensuring that redressal for content-related decisions is not compromised. This accountability extends to the implementation of their terms of service and content policies.
Enabling Environment for the application of the UNESCO Plan
The UNESCO Action Plan to counter misinformation aims to create an environment where freedom of expression and access to information flourish, while ensuring safety and security for both users and non-users of digital platforms. This endeavour calls for collective action: societies as a whole must work together, and relevant stakeholders, from vulnerable groups to journalists and artists, must be enabled to exercise their right to expression.
Conclusion
The UNESCO Action Plan is a response to the dilemma created by information overload, in which the distinction between information and misinformation has become increasingly clouded. The IPSOS survey has revealed the urgency of addressing these challenges for users who fear the repercussions of misinformation.
The UNESCO action plan provides a comprehensive framework that prioritises the protection of human rights, particularly freedom of expression, alongside transparency, accountability, and education in the governance of digital platforms. By advocating for independent regulators and encouraging platforms to align with international human rights standards, UNESCO is setting the stage for a more responsible and ethical digital ecosystem.
The recommendations include integrating regulators through collaboration and promoting global cooperation to harmonise regulations, expanding digital literacy campaigns to educate users about misinformation risks and online rights, ensuring inclusive access to diverse content in multiple languages and contexts, and monitoring and refining technological advancements and regulatory strategies as challenges evolve, ultimately promoting a trustworthy online information landscape.
Reference
- https://www.unesco.org/en/articles/online-disinformation-unesco-unveils-action-plan-regulate-social-media-platforms
- https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/unesco_ipsos_survey.pdf
- https://dig.watch/updates/unesco-sets-out-strategy-to-tackle-misinformation-after-ipsos-survey