#FactCheck-A manipulated image showing Indian cricketer Virat Kohli allegedly watching Rahul Gandhi's media briefing on his mobile phone has been widely shared online.
Executive Summary:
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi. The incident is claimed to have happened on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image, making it go viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media handles shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.




Fact Check:
After coming across the viral image posted by social media users, we ran a reverse image search and landed on the original image, posted by an Instagram account named virat__.forever_ on March 21.

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image was published after the original image, which was posted on March 21.

Therefore, it is apparent that the viral image was created by altering the original image shared on March 21.
Conclusion:
To sum up, the viral image is an altered version of the original. The original caption says cricketer Virat Kohli was relaxing before the shoot of a Jio advertisement, not watching any politician's interview. This shows that in the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify whether something is real before sharing it, to avoid spreading false stories.
Related Blogs

A Foray into the Digital Labyrinth
In our digital age, the silhouette of truth is often obfuscated by a fog of technological prowess and cunning deception. With each passing moment, the digital expanse sprawls wider, and within it, synthetic media, known most infamously as 'deepfakes', emerge like phantoms from the machine. These adept forgeries, melding authenticity with fabrication, represent a new frontier in the malleable narrative of understood reality. Grappling with the specter of such virtual deceit, social media behemoths Facebook and YouTube have embarked on a prodigious quest. Their mission? To formulate robust bulwarks around the sanctity of fact and fiction, all the while fostering seamless communication across channels that billions consider an inextricable part of their daily lives.
In an exploration of this digital fortress besieged by illusion, we unpeel the layers of strategy that Facebook and YouTube have unfurled in their bid to stymie the proliferation of these insidious technical marvels. Though each platform approaches the issue through markedly different prisms, a shared undercurrent of necessity and urgency harmonizes their efforts.
The Detailing of Facebook's Strategy
Facebook's encampment against these modern-day chimaeras teems with algorithmic sentinels and human overseers alike—a union of steel and soul. The company’s layer upon layer of sophisticated artificial intelligence is designed to scrupulously survey, identify, and flag potential deepfake content with a precision that borders on the prophetic. Employing advanced AI systems, Facebook endeavours to preempt the chaos sown by manipulated media by detecting even the slightest signs of digital tampering.
However, in an expression of profound acumen, Facebook also acknowledges AI's fallibility by entwining human discernment into its fabric. Each flagged video wages its battle for existence before these custodians of reality: individuals entrusted with the hefty responsibility of parsing truth from technologically enabled fiction.
Facebook does not rest on the laurels of established defense mechanisms. The platform is in a perpetual state of flux, with policies and AI models adapting to the serpentine nature of the digital threat landscape. By fostering its cyclical metamorphosis, Facebook not only sharpens its detection tools but also weaves a more resilient protective web, one capable of absorbing the shockwaves of an evolving battlefield.
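In broad strokes, that combination of automated scoring and human adjudication can be pictured as a simple pipeline. The sketch below is purely illustrative: the classifier score, threshold, and review queue are hypothetical stand-ins, not Facebook's actual system.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative threshold; real systems tune this kind of value continuously.
FLAG_THRESHOLD = 0.7

@dataclass
class Video:
    video_id: str
    manipulation_score: float  # hypothetical output of a deepfake classifier

@dataclass
class ReviewQueue:
    pending: List[Video] = field(default_factory=list)

    def enqueue(self, video: Video) -> None:
        self.pending.append(video)

def triage(videos: List[Video], queue: ReviewQueue) -> None:
    """Automated pass: anything the model finds suspicious goes to human reviewers."""
    for video in videos:
        if video.manipulation_score >= FLAG_THRESHOLD:
            queue.enqueue(video)  # humans make the final call on flagged items

queue = ReviewQueue()
triage([Video("a1", 0.91), Video("b2", 0.12)], queue)
print([v.video_id for v in queue.pending])  # ['a1']
```

The design choice the paragraph hints at is that the model never removes content on its own; it only narrows the haystack for the human reviewers who decide.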
YouTube’s Overture of Transparency and the Exposition of AI
Turning to the amphitheatre of YouTube, the stage is set for an overt commitment to candour. Against the stark backdrop of deepfake dilemmas, YouTube demands the unveiling of the strings that guide the puppets, insisting on full disclosure whenever AI's invisible hands sculpt the content that engages its diverse viewership.
YouTube's doctrine is straightforward: creators must lift the curtains and reveal any artificial manipulation's role behind the scenes. With clarity as its vanguard, this requirement is not just procedural but an ethical invocation to showcase veracity—a beacon to guide viewers through the murky waters of potential deceit.
The iron fist within the velvet glove of YouTube's policy manifests through a graded punitive protocol. Should a creator falter in disclosing the machine's influence, repercussions follow, ensuring that the ecosystem remains vigilant against hidden manipulation.
But YouTube's policy is one that distinguishes between malevolence and benign use. Artistic endeavours, satirical commentary, and other legitimate expositions are spared the policy's wrath, provided they adhere to the overarching principle of transparency.
The Symbiosis of Technology and Policy in a Morphing Domain
YouTube's commitment to refining the coordination between human insight and computerized examination is unwavering. As AI's role in both the generation and moderation of content deepens, YouTube, like a skilled cartographer, must increasingly redraw its policies as the terrain shifts, traversing this ever-mutating landscape with a proactive stance.
In a Comparative Light: Tracing the Convergence of Giants
Although Facebook and YouTube choreograph their steps to different rhythms, together they compose an intricate dance aimed at nurturing trust and authenticity. Facebook leans into the proactive might of its AI algorithms, reinforced by updates and human interjection, while YouTube wields the virtue of transparency as its sword, cutting through masquerades and empowering its users to partake in storylines that are continually rewritten.
Together on the Stage of Our Digital Epoch
The sum of Facebook and YouTube's policies is integral to the pastiche of our digital experience, a multifarious quilt shielding the sanctum of factuality from the interloping specters of deception. As humanity treads the line between the veracious and the fantastic, these platforms stand as vigilant sentinels, guiding us in our pursuit of an age-old treasure within our novel digital bazaar: the treasure of truth. In this labyrinthine quest, it is not merely about unmasking deceivers but nurturing a wisdom that appreciates the shimmering possibilities, and inherent risks, of our evolving connection with the machine.
Conclusion
The struggle against deepfakes is a complex, many-headed challenge that will necessitate a united front spanning technologists, lawmakers, and the public. In this digital epoch, where the veneer of authenticity is perilously thin, the valiant endeavours of these tech goliaths serve as a lighthouse in a storm-tossed sea. These efforts echo the importance of evergreen vigilance in discerning truth from artfully crafted deception.
References
- https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
- https://indianexpress.com/article/technology/artificial-intelligence/google-sheds-light-on-how-its-fighting-deep-fakes-and-ai-generated-misinformation-in-india-9047211/
- https://techcrunch.com/2023/11/14/youtube-adapts-its-policies-for-the-coming-surge-of-ai-videos/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/youtube-twitter-hunt-down-deepfakes

Introduction
The G7 nations, a group of the world's most powerful economies, have recently turned their attention to the critical issues of cybercrime and Artificial Intelligence (AI). The G7 Summit has provided an essential platform for discussing the threats and crimes arising from AI and from gaps in cybersecurity. These nations have united to share their expertise, resources, diplomatic efforts and strategies in the fight against cybercrime. In this blog, we shall examine the recent developments and initiatives undertaken by the G7 nations, exploring their joint efforts to combat cybercrime and navigate the evolving landscape of artificial intelligence. We shall also explore new and emerging trends in cybersecurity, providing insights into the ongoing challenges and the innovative approaches adopted by the G7 nations and the wider international community.
G7 Nations and AI
Each of these nations has launched cooperative efforts and measures to combat cybercrime. They intend to increase their collective capacity to detect, prevent, and respond to cyber assaults by exchanging intelligence, best practices, and expertise. Through information-sharing platforms, collaborative training programs, and joint exercises, the G7 nations are attempting to develop a strong cybersecurity architecture capable of countering increasingly complex cyber-attacks.
The G7 Summit provided an important forum for in-depth debates on the role of artificial intelligence (AI) in cybersecurity. Recognising AI's transformational potential, the G7 nations have participated in extensive discussions to investigate its advantages and address the related concerns, guaranteeing responsible research and use. The nations also recognise the ethical, legal, and security considerations of deploying AI in cybersecurity.
Worldwide Rise of Ransomware
High-profile ransomware attacks have drawn global attention, emphasising the need to combat this expanding threat. These attacks have harmed organisations of all sizes and industries, leading to data breaches, operational outages, and, in some circumstances, the loss of sensitive information. The implications of such assaults go beyond financial loss, frequently resulting in reputational harm, legal penalties, and service delays that affect consumers, clients, and the public. Cybercriminals have adopted a multi-faceted approach to ransomware, combining techniques such as spear-phishing, exploit kits, and supply chain hacks to obtain unauthorised access to networks and spread the ransomware. This degree of expertise and flexibility presents a substantial challenge to organisations attempting to protect against such attacks.

Focusing On AI and Upcoming Threats
During the G7 Summit, one of the key topics of discussion was the role of AI (Artificial Intelligence) in shaping the future. Leaders and policymakers discussed the benefits and dangers of AI adoption in cybersecurity. Recognising AI's revolutionary capacity, they investigated its potential to improve defence capabilities, predict future threats, and secure vital infrastructure. Furthermore, the G7 countries emphasised the necessity of international collaboration in reaping the advantages of AI while reducing the hazards. They recognise that cyber dangers transcend national borders and must be combated together. Collaboration in areas such as exchanging threat intelligence, developing shared standards, and promoting best practices is emphasised to boost global cybersecurity defences. The G7 summit hopes to set a global agenda that encourages responsible AI research and deployment by emphasising the role of AI in cybersecurity. The summit's sessions chart a path for maximising AI's promise while tackling the problems and dangers connected with its implementation.
As the G7 countries traverse the complicated convergence of AI and cybersecurity, their emphasis on collaboration, responsible practices, and innovation lays the groundwork for international collaboration in confronting growing cyber threats. The G7 countries aspire to establish robust and secure digital environments that defend essential infrastructure, protect individuals’ privacy, and encourage trust in the digital sphere by collaboratively leveraging the potential of AI.
Promoting Responsible AI Development and Usage
The G7 conference will focus on developing frameworks that encourage ethical AI development. This includes fostering openness, accountability, and justice in AI systems. The emphasis is on eliminating biases in data and algorithms and ensuring that AI technologies are inclusive and do not perpetuate or magnify existing societal imbalances.
Furthermore, the G7 nations recognise the necessity of privacy protection in the context of AI. Because AI systems frequently rely on massive volumes of personal data, summit speakers emphasise the importance of stringent data privacy legislation and protections. Discussions centre around finding the correct balance between using data for AI innovation, respecting individuals’ privacy rights, and protecting data security. In addition to responsible development, the G7 meeting emphasises the importance of responsible AI use. Leaders emphasise the importance of transparent and responsible AI governance frameworks, which may include regulatory measures and standards to ensure AI technology’s ethical and legal application. The goal is to defend individuals’ rights, limit the potential exploitation of AI, and retain public trust in AI-driven solutions.
The G7 nations support collaboration among governments, businesses, academia, and civil society to foster responsible AI development and use. They stress the significance of sharing best practices, exchanging information, and developing international standards to promote ethical AI concepts and responsible practices across boundaries. The G7 nations hope to shape the global AI environment in a way that prioritises human values, protects individual rights, and develops trust in AI technology by fostering responsible AI development and usage. They work together to guarantee that AI is a force for good while reducing risks and resolving the social issues related to its implementation.
Challenges on the way
While the G7 countries are committed to combating cybercrime and promoting responsible AI development, they confront several hurdles in their efforts. Some of them are:
A Rapidly Changing Cyber Threat Environment: Cybercriminals’ strategies and methods are always developing, as is the nature of cyber threats. The G7 countries must keep up with new threats and ensure their cybersecurity safeguards remain effective and adaptable.
Cross-Border Coordination: Cybercrime knows no borders, and successful cybersecurity necessitates international collaboration. On the other hand, coordinating activities among nations with various legal structures, regulatory environments, and agendas can be difficult. Harmonising rules, exchanging information, and developing confidence across states are crucial for effective collaboration.
Talent Shortage and Skills Gap: The fields of cybersecurity and AI require highly qualified personnel. However, skilled individuals in these fields are in short supply. The G7 nations must attract and nurture talent, provide training programs, and support research and innovation to narrow the skills gap.
Keeping Up with Technological Advancements: Technology changes at a rapid rate, and cyber-attacks become more complex. The G7 nations must ensure that their laws, legislation, and cybersecurity plans stay relevant and adaptive to keep up with future technologies such as AI, quantum computing, and IoT, which may both empower and challenge cybersecurity efforts.
Conclusion
To combat cyber threats effectively, support responsible AI development, and establish a robust cybersecurity ecosystem, the G7 nations must constantly analyse and adjust their strategy. By aggressively tackling these concerns, the G7 nations can improve their collective cybersecurity capabilities and defend their citizens’ and global stakeholders’ digital infrastructure and interests.

Along with the loss of important files and information, data loss can result in downtime and lost revenue. Unexpected occurrences, including natural catastrophes, cyber-attacks, hardware malfunctions, and human mistakes, can result in the loss of crucial data. Recovery from these without a backup plan may be difficult, if not impossible.
The fact is that cyberattacks are the largest threat to the continuity of your organization today. Because of this, disaster recovery planning should be approached from a data security standpoint. If not, you run the risk of leaving your vital systems exposed to a cyberattack. Cybercrime has become more frequent and aggressive over the past few years. In the past, major organizations and global businesses were the main targets of these attacks by criminals. But nowadays, businesses of all sizes need to be cautious of digital risks.
Many firms might suffer a financial hit even from a brief interruption to regular business operations. But imagine if a situation forced a company to close for a few days or perhaps weeks! The consequences would be disastrous.
One must have a comprehensive disaster recovery plan in place that is connected with the cybersecurity strategy, given the growing danger of cybercrime.
Let’s look at why having a solid data security plan and a dependable backup solution are essential for safeguarding a company from external digital threats.
1. Apply layered approaches
One must specifically use precautionary measures like antivirus software and firewalls. One must also implement strict access control procedures to restrict who may access the network.
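To make "strict access control procedures" concrete, here is a minimal sketch of a deny-by-default, role-based check; the roles and resource names are hypothetical and stand in for whatever directory or identity provider an organization actually uses.

```python
# Deny-by-default, role-based access check (illustrative; names are hypothetical).
ALLOWED_ROLES = {
    "finance-db": {"finance", "admin"},
    "build-server": {"engineering", "admin"},
}

def may_access(user_role: str, resource: str) -> bool:
    """Grant access only when the role is explicitly allowed for that resource."""
    return user_role in ALLOWED_ROLES.get(resource, set())

print(may_access("engineering", "finance-db"))  # False: denied by default
print(may_access("admin", "build-server"))      # True: explicitly allowed
```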
2. Understand the threat situation
If someone is unaware of the difficulties one should be prepared for, how can they possibly expect to develop a successful cybersecurity strategy? They can’t, is the simple response.
Without a solid understanding of the threat landscape, developing the plan will involve far too much speculation. With that approach, one may allocate resources poorly or perhaps miss a threat entirely.
Because of this, one should educate themselves on the many cyber risks that businesses now must contend with.
3. Adopt a proactive security stance
Every effective cybersecurity plan includes a number of reactive processes that aren’t activated until an attack occurs. Although these reactive strategies will always be useful in cybersecurity, the main focus of your plan should be proactiveness.
There are several methods to be proactive, but the most crucial one is to analyze your network for possible threats regularly. Having a SaaS Security Posture Management (SSPM) solution in place is particularly beneficial for SaaS applications.
A preventive approach can lessen the effects of a data breach and aid in keeping data away from attackers.
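One simple form of regular, proactive analysis is a scheduled check of which ports are reachable on your own hosts, so that unexpected exposure is noticed before an attacker finds it. The sketch below uses only Python's standard library; the host and port lists are placeholders for your own inventory, and it is no substitute for a full vulnerability scanner or an SSPM product.

```python
import socket
from typing import List

# Hosts and ports to review regularly (placeholders; use your own inventory).
HOSTS = ["127.0.0.1"]
PORTS = [22, 80, 443, 3389]

def open_ports(host: str, ports: List[int], timeout: float = 0.5) -> List[int]:
    """Return the subset of ports that accept a TCP connection on this host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

for host in HOSTS:
    print(host, open_ports(host, PORTS))  # compare against an approved baseline
```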
4. Evaluate your ability to respond to incidents
Test your cybersecurity disaster recovery plan’s effectiveness by conducting exercises and evaluating the outcomes. Track pertinent data during the exercise to see if your plan is working as expected.
Meet with your team after each drill to evaluate what went well and what didn't. This approach enables you to continuously strengthen your plan and address weaknesses. The process can, and should, be repeated indefinitely.
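"Tracking pertinent data" can be as simple as recording, for each drill, how long detection and recovery took and watching those numbers trend downward over time. A minimal sketch with made-up drill records:

```python
from statistics import mean

# Hypothetical drill records: minutes to detect and minutes to recover per exercise.
drills = [
    {"name": "ransomware-tabletop", "detect_min": 42, "recover_min": 310},
    {"name": "backup-restore-test", "detect_min": 15, "recover_min": 95},
]

print("average time to detect:", mean(d["detect_min"] for d in drills), "minutes")
print("average time to recover:", mean(d["recover_min"] for d in drills), "minutes")
```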
You must include cybersecurity protections in your entire disaster recovery plan if you want to make sure that your business is resilient in the face of cyber threats. You may strengthen data security and recover from data loss and corruption by putting in place a plan that focuses on both the essential components of proactive data protection and automated data backup and recovery.
For instance, rather than storing each user's data on a single machine or collection of machines, Google distributes all data among several computers in various locations. To prevent a single point of failure, the data is chunked and duplicated across several systems. As an additional security safeguard, these data chunks are given random names that are unreadable to the human eye.[1]
The process of creating and storing copies of data that may be used to safeguard organizations against data loss is referred to as backup and recovery. In the case of a main data failure, the backup’s goal is to make a duplicate of the data that can be restored.
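The pattern described above, splitting data into chunks, giving each chunk an opaque random name, and copying it to more than one location, can be sketched with Python's standard library alone. This is an illustration of the general idea rather than any vendor's implementation; the chunk size and replica directories are arbitrary stand-ins for separate machines or sites.

```python
import os
import secrets
import shutil

CHUNK_SIZE = 1024 * 1024                   # 1 MiB per chunk (arbitrary for illustration)
REPLICA_DIRS = ["replica_a", "replica_b"]  # stand-ins for separate machines or sites

def backup(path: str) -> list:
    """Split a file into randomly named chunks and copy every chunk to each replica."""
    for directory in REPLICA_DIRS:
        os.makedirs(directory, exist_ok=True)
    manifest = []  # ordered chunk names; needed later to reassemble the file
    with open(path, "rb") as source:
        while chunk := source.read(CHUNK_SIZE):
            name = secrets.token_hex(16)  # opaque, human-unreadable chunk name
            manifest.append(name)
            first_copy = os.path.join(REPLICA_DIRS[0], name)
            with open(first_copy, "wb") as out:
                out.write(chunk)
            for directory in REPLICA_DIRS[1:]:
                # duplicate each chunk so no single location is a point of failure
                shutil.copy(first_copy, os.path.join(directory, name))
    return manifest

# Example: manifest = backup("important.db")
```

Restoring is the reverse walk over the manifest; keeping that manifest safe is as important as keeping the chunks.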
5. Adopt zero-trust principles
Don't presume that anything or anybody can be trusted; zero trust is a new label for an old idea. Check the trustworthiness of each device, user, service, or other entity before granting it access, and periodically recheck that trustworthiness while access is allowed, to make sure the entity hasn't been compromised. Reduce the consequences of any breach of trust by granting each entity access to only the resources it requires. Applying zero-trust principles can reduce both the number of incidents and the severity of those that do occur.
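In code, "never trust, always verify" amounts to re-checking identity, device health, and entitlement on every single request, and scoping each grant to the one resource that is actually needed. The sketch below is schematic; the token and device checks are placeholders for real identity and endpoint-management services.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool     # placeholder for a real identity/credential check
    device_healthy: bool  # placeholder for an endpoint-posture check
    resource: str

# Least privilege: each identity maps only to the few resources it actually needs.
GRANTS = {"payroll-bot": {"payroll-db"}, "alice": {"wiki", "payroll-db"}}

def authorize(request: Request) -> bool:
    """Re-verify everything on every request; deny unless all checks pass."""
    return (
        request.token_valid
        and request.device_healthy
        and request.resource in GRANTS.get(request.user, set())
    )

print(authorize(Request("alice", True, True, "wiki")))   # True
print(authorize(Request("alice", True, False, "wiki")))  # False: unhealthy device
```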
6. Understand the dangers posed by supply chains
A nation-state attacker can penetrate a single business, and that business may then supply thousands of other businesses with tainted technological goods or services. Those businesses in turn become compromised, which might expose their own customers' data to the original attackers or result in compromised services being offered to customers. What began with one infiltrated corporation can end up harming millions of businesses and people.
In conclusion, a defense-in-depth approach to cybersecurity is not going away. Organizations may never be able to totally eliminate the danger of a cyberattack, but having a variety of technologies and procedures in place can help ensure that the risks are kept to a minimum.