#FactCheck - The video of Virat Kohli promoting an online casino mobile app is a deepfake.
Executive Summary:
A viral clip showing the Indian batsman Virat Kohli endorsing an online casino and guaranteeing users a Rs 50,000 jackpot within three days has been proven fake. The clip, accompanied by manipulated captions, suggests that Kohli admitted to launching an online casino during an interview with Graham Bensinger, but this is not true. Our investigation found that the original interview, published on Bensinger's YouTube channel in the last quarter of 2023, contains no such statement by Kohli. In addition, the AI deepfake analysis tool Deepware flagged the viral video as a deepfake.

Claims:
The viral video claims that cricket star Virat Kohli is promoting an online casino and assures users that they can make a profit of Rs 50,000 within three days. However, the CyberPeace Research Team has found that the video is a deepfake, and there is no credible evidence of Kohli's participation in any such endorsement. Many users are nevertheless sharing the video with misleading captions across different social media platforms.


Fact Check:
As soon as we learned of the claim, we ran a keyword search for any credible news report about Virat Kohli promoting a casino app and found nothing. We then ran a reverse image search on the frame of Kohli wearing a black T-shirt, as seen in the video, to trace its source. This led us to a YouTube video by Graham Bensinger, an American journalist; the viral clip was taken from this original interview.

In that interview, Kohli discusses his childhood, his diet, his cricket training, his marriage and related topics, but says nothing about launching a casino app.
On close scrutiny of the viral video, we noticed inconsistencies in the lip-sync and the voice. We then ran the clip through the Deepware deepfake detection tool, which returned a "Deepfake Detected" result.


Based on the above, we conclude that the viral video is a deepfake and that the claim it makes is false.
Conclusion:
The viral video claims that cricketer Virat Kohli is endorsing an online casino and guaranteeing winnings of Rs 50,000 within three days; this claim is false. The incident demonstrates the need to verify facts and sources before believing any information, and to remain sceptical about deepfakes and other AI-generated content, which are increasingly used to spread misinformation.

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact effectively. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging-technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption has been happening at a rapid pace and is projected to contribute $15.7 trillion to the global economy by 2030, with the AI market expected to grow by at least 120% year-over-year. Both statistics are frequently cited in arguments for regulation, alongside concrete examples of AI risks (e.g., bias in recruitment tools and misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, especially given the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the GDPR (General Data Protection Regulation) has been a success: its global influence on data privacy laws set off a domino effect of privacy regulations around the world. This precedent underscores the EU's proactive, people-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details how the AI Act's rules apply to providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing-up of the code; the process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation rooted in product safety and relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates issued by the European Commission. The AI Act obligates developers, deployers and users of AI to meet mandates on transparency, risk management and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular applications of GPAI include ChatGPT and other foundational models such as Microsoft's Copilot, Google's BERT and Meta AI's Llama, all of which are under constant development and upgradation. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It identifies transparency, copyright compliance, risk assessment, and technical and governance risk mitigation as the core areas for companies developing GPAI models. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden it places on MSMEs and startups could hinder innovation, and its voluntary nature raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/

Introduction
The digital realm is evolving at a breakneck pace, and this dynamic growth has left several operational and regulatory lacunae in the fabric of cyberspace, which cybercriminals exploit for their own ends. One threat that emerged rapidly in 2024 is proxyjacking, in which cybercriminals compromise vulnerable systems and sell their bandwidth to third-party proxy services. It poses a significant risk to organisations and individual server owners alike.
Proxyjacking is a kind of cyber attack that abuses legitimate bandwidth-sharing services such as Peer2Profit and HoneyGain. These platforms are legitimate; proxyjacking occurs when such services are exploited without the device owner's consent. The services let users monetise surplus internet bandwidth by sharing it with others: participants install bandwidth-sharing software, which adds their system to a proxy network so that other users can route traffic through it. The model itself is harmless and is intended to enhance privacy and provide access to geo-locked content, but it also provides an avenue for numerous cyber hostilities.
The Modus Operandi
Cybercriminals hijack such systems and sell the bandwidth of the infected devices. This is typically achieved by establishing Secure Shell (SSH) connections to vulnerable servers. While hackers rarely use honeypots to run elaborate scams, the technical possibility cannot be discounted; Cowrie honeypots, for instance, are engineered to emulate UNIX systems, and attackers can use similar tactics to gain unauthorised access to poorly secured machines. Once inside, attackers use legitimate tools such as public Docker images to run proxy monetisation services. Because these tools are genuine software in and of themselves, they go undetected by anti-malware products, and endpoint detection and response (EDR) tools struggle with the same problem.
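For defenders, one practical starting point is simply auditing what is running on a host. The following Python sketch is an illustration, not a complete defence: it shells out to the Docker CLI and flags running containers whose image names match keywords associated with bandwidth-sharing clients. The keyword list is an assumption made for illustration and would need to be maintained against real threat intelligence.

```python
# Minimal sketch: flag running Docker containers whose image names resemble
# bandwidth-sharing (proxyware) clients. The keyword list is illustrative only.
import subprocess

SUSPECT_PATTERNS = ["peer2profit", "honeygain", "proxy"]  # assumed keywords, not authoritative

def list_containers():
    """Return (container_id, image) pairs reported by the local Docker daemon."""
    out = subprocess.run(
        ["docker", "ps", "--format", "{{.ID}} {{.Image}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(maxsplit=1) for line in out.splitlines() if line.strip()]

def flag_suspect_containers():
    """Print containers whose image name matches a suspect keyword."""
    for container_id, image in list_containers():
        if any(pattern in image.lower() for pattern in SUSPECT_PATTERNS):
            print(f"Review container {container_id}: image '{image}' matches a proxyware keyword")

if __name__ == "__main__":
    flag_suspect_containers()
```

A match here is only a prompt for manual review, since many flagged images may be legitimately installed by the device owner.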
The Major Challenges
Limitation of Current Safeguards – Current malware detection software is unable to distinguish between malicious and genuine use of bandwidth-sharing services, as the nature of the attack is not inherently malicious.
Bigger Threat Than Cryptojacking – Proxyjacking poses a bigger threat than cryptojacking, in which systems are compromised to mine cryptocurrency. Proxyjacking uses minimal system resources, making it more challenging to identify. It therefore offers perpetrators a higher degree of stealth: it is a resource-light technique, whereas cryptojacking leaves CPU and GPU usage footprints.
Role of Technology in the Fight Against Proxyjacking
Advanced Safety Measures – Implementing advanced safety measures is crucial in combating proxyjacking. Network monitoring tools can help detect unusual traffic patterns indicative of proxyjacking (a minimal host-level sketch appears after these measures). Key-based authentication for SSH can significantly reduce the risk of unauthorised access, ensuring that only trusted devices can establish connections. Intrusion Detection Systems and Intrusion Prevention Systems can go a long way towards monitoring unusual outbound traffic.
Robust Verification Processes – Bandwidth-sharing services must adopt robust verification processes to ensure that only legitimate users are sharing bandwidth. This could include stricter identity verification methods and continuous monitoring of user activities to identify and block suspicious behaviour.
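As a rough illustration of the network-monitoring idea above, the Python sketch below (assuming the third-party psutil package is installed) samples the host's outbound byte counter and warns when the sustained upload rate crosses an arbitrary threshold. A sustained, unexplained upload is only a possible indicator of bandwidth resale, not proof.

```python
# Rough sketch: warn when sustained outbound traffic exceeds an arbitrary threshold.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

THRESHOLD_BYTES_PER_SEC = 1_000_000  # illustrative threshold: ~1 MB/s sustained upload
INTERVAL_SECONDS = 10

def monitor_upload_rate():
    """Sample the system-wide bytes-sent counter and flag unusually high upload rates."""
    previous = psutil.net_io_counters().bytes_sent
    while True:
        time.sleep(INTERVAL_SECONDS)
        current = psutil.net_io_counters().bytes_sent
        rate = (current - previous) / INTERVAL_SECONDS
        if rate > THRESHOLD_BYTES_PER_SEC:
            print(f"Warning: sustained upload of {rate:,.0f} B/s; investigate for possible proxyjacking")
        previous = current

if __name__ == "__main__":
    monitor_upload_rate()
```

In practice the threshold would be tuned to the host's normal traffic profile, and alerts would be routed to a monitoring system rather than printed.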
Policy Recommendations
Verification for Bandwidth-Sharing Services – Mandatory verification standards should be enforced for bandwidth-sharing services, including stringent Know Your Customer (KYC) protocols to verify the identity of users. A strong regulatory body would ensure compliance with these standards and impose penalties for violations. Transparency reports must document the user base, verification processes and incidents.
Robust SSH Security Protocols – Key-based authentication for SSH should be mandated across organisations to neutralise the risk of brute-force attacks. Mandatory security audits of SSH configurations will help ensure best practices are followed and vulnerabilities are identified. Detailed logging of SSH attempts will streamline the identification and investigation of suspicious behaviour (a minimal log-parsing sketch follows these recommendations).
Effective Anomaly Detection Systems – A standard anomaly detection system should be designed to monitor networks. Such an industry-wide system should focus on detecting inconsistencies in traffic patterns that indicate proxyjacking. Mandatory protocols for incident reporting to a centralised authority should be established. The system should incorporate machine learning in order to keep pace with evolving attack methodologies.
Framework for Incident Response – A national framework should include guidelines for investigation, response and remediation to be followed by organisations. A centralised database can be used for logging and tracking all proxyjacking incidents, allowing information to be shared in real time. This mechanism will aid in identifying emerging trends and common attack vectors.
Whistleblower Incentives – Enacting whistleblower protection laws will ensure the safety of individuals reporting proxyjacking activities. Monetary rewards provide additional incentive and motivate individuals to join whistleblowing programmes. To further protect whistleblowers, secure communication channels can be established to guarantee full anonymity.
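To illustrate the SSH-logging recommendation above, here is a minimal Python sketch that counts failed SSH password attempts per source IP from a syslog-style auth log. The log path and message format are assumptions based on Debian/Ubuntu defaults and would need adapting to the environment; a production deployment would feed such data into a SIEM rather than a one-off script.

```python
# Minimal sketch: count failed SSH password attempts per source IP.
# Assumes a Debian/Ubuntu-style auth log; adjust path and pattern for your system.
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"  # assumed location
FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def count_failed_attempts(path=AUTH_LOG):
    """Return a Counter mapping source IP -> number of failed SSH password attempts."""
    counts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for ip, attempts in count_failed_attempts().most_common(10):
        print(f"{ip}: {attempts} failed SSH attempts")
```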
Conclusion
Proxyjacking represents an insidious and complicated threat in cyberspace. By exploiting legitimate bandwidth-sharing services, cybercriminals can profit while remaining largely anonymous. Addressing this issue requires a multifaceted approach, including advanced anomaly detection systems, effective verification processes, and comprehensive incident response frameworks. These measures, combined with strong cyber awareness among netizens, will help ensure a healthy and robust cyberspace.
References
- https://gridinsoft.com/blogs/what-is-proxyjacking/
- https://www.darkreading.com/cyber-risk/ssh-servers-hit-in-proxyjacking-cyberattacks
- https://therecord.media/hackers-use-log4j-in-proxyjacking-scheme

The World Wide Web was created as a portal for communication, connecting people across great distances. It started with electronic mail; mail gave way to instant messaging, which let people hold conversations and interact with each other in real time from afar. The new paradigm, however, is the Internet of Things: machines communicating with one another. A wearable gadget can now unlock the front door when its wearer arrives home and message the air conditioner to switch on. This is IoT.
WHAT EXACTLY IS IoT?
The term ‘Internet of Things’ was coined in 1999 by Kevin Ashton, a computer scientist who put Radio Frequency Identification (RFID) chips on products in order to track them through the supply chain while he worked at Procter & Gamble (P&G). Soon after the launch of the iPhone in 2007, there were already more connected devices than people on the planet.
Fast forward to today and we live in a more connected world than ever. Our handheld devices and household appliances can now connect and communicate through a vast network built to transfer and receive data between devices. There are currently more IoT devices than users in the world, and according to the WEF's State of the Connected World report, by 2025 there will be more than 40 billion such devices recording data for analysis.
IoT finds use in many parts of our lives. It has helped businesses streamline their operations, reduce costs, and improve productivity. IoT also helped during the Covid-19 pandemic, with devices that could help with contact tracing and wearables that could be used for health monitoring. All of these devices are able to gather, store and share data so that it can be analyzed. The information is gathered according to rules set by the people who build these systems.
APPLICATION OF IoT
IoT is used by both consumers and the industry.
Some widely used examples of CIoT (Consumer IoT) are wearables like health and fitness trackers, smart rings with near-field communication (NFC), and smartwatches, which gather a lot of personal data. Smart clothing fitted with sensors can monitor the wearer's vital signs, and there is even smart jewellery that can monitor sleeping patterns and stress levels.
With the advent of virtual and augmented reality, the gaming industry can now make the experience even more immersive and engrossing. Smart glasses and headsets are used, along with armbands fitted with sensors that can detect the movement of arms and replicate the movement in the game.
At home, there are smart TVs, security cameras, smart bulbs, home control devices, and other IoT-enabled 'smart' appliances such as coffee makers that can be turned on through an app or scheduled for a particular time in the morning to double as an alarm. There are also voice-command assistants like Alexa and Siri, which run on software written by their manufacturers to understand simple instructions.
Industrial IoT (IIoT) mainly uses connected machines for the purposes of synchronization, efficiency, and cost-cutting. For example, smart factories gather and analyze data as the work is being done. Sensors are also used in agriculture to check soil moisture levels, and these then automatically run the irrigation system without the need for human intervention.
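As a toy illustration of the irrigation logic described above, the Python sketch below reads a (simulated) soil-moisture value and switches irrigation on only when the soil is dry. The read_soil_moisture() and set_irrigation() functions are hypothetical placeholders; a real deployment would query actual field sensors and drive a pump or valve controller.

```python
# Toy sketch of sensor-driven irrigation: read moisture, irrigate only when dry.
# read_soil_moisture() and set_irrigation() are hypothetical placeholders.
import random
import time

MOISTURE_THRESHOLD = 30.0  # assumed threshold, in percent

def read_soil_moisture() -> float:
    """Placeholder: a real system would query a field sensor here."""
    return random.uniform(10.0, 60.0)

def set_irrigation(on: bool) -> None:
    """Placeholder: a real system would switch a pump or valve here."""
    print("Irrigation ON" if on else "Irrigation OFF")

def control_loop(interval_seconds: int = 5, cycles: int = 3) -> None:
    """Read the sensor periodically and irrigate only when the soil is dry."""
    for _ in range(cycles):
        moisture = read_soil_moisture()
        set_irrigation(moisture < MOISTURE_THRESHOLD)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    control_loop()
```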
Statistics
- The IoT device market is poised to reach $1.4 trillion by 2027, according to Fortune Business Insights.
- The number of cellular IoT connections is expected to reach 3.5 billion by 2023. (Forbes)
- The amount of data generated by IoT devices is expected to reach 73.1 ZB (zettabytes) by 2025.
- 94% of retailers agree that the benefits of implementing IoT outweigh the risks.
- 55% of companies believe that 3rd party IoT providers should have to comply with IoT security and privacy regulations.
- 53% of all users acknowledge that wearable devices are vulnerable to data breaches and viruses.
- Companies could invest up to 15 trillion dollars in IoT by 2025 (Gigabit)
CONCERNS AND SOLUTIONS
- Two of the biggest concerns with IoT devices are the privacy of users and the devices being secure in order to prevent attacks by bad actors. This makes knowledge of how these things work absolutely imperative.
- It is worth noting that these devices typically work with a central hub, such as a smartphone: the device pairs with the phone through an app, and the phone acts as a gateway. This means the smartphone itself could be compromised if a hacker targets the IoT device.
- With technology like smart television sets that have cameras and microphones, the major concern is that hackers could take over the functioning of the television, as these devices are often not adequately secured by the manufacturer.
- A hacker could control the camera and cyberstalk the victim, so it is very important to become familiar with the features of a device and ensure that it is well protected from any unauthorized usage. Even simple measures, like keeping the camera covered when it is not being used, help.
- There is also the concern that, since IoT devices gather and share data without human intervention, they could be transmitting data that the user does not want to share. This is true of health trackers: users who wear heart and blood pressure monitors may have their data sent to an insurance company, which may then decide to raise the premium on their life insurance based on the data it receives.
- IoT devices often keep functioning as normal even if they have been compromised. Most devices do not log an attack or alert the user, and changes like higher power or bandwidth usage go unnoticed after the attack. It is therefore very important to make sure the device is properly protected.
- It is also important to keep the software of the device updated as vulnerabilities are found in the code and fixes are provided by the manufacturer. Some IoT devices, however, lack the capability to be patched and are therefore permanently ‘at risk’.
CONCLUSION
Humanity now inhabits a world made up of nodes that talk to each other and get things done. Users can harmonize their devices so that everything runs like a tandem bike, completely in sync with all its parts. But while we make use of all the benefits, it is also very important to understand what we are using, how it functions, and how to tackle issues should they come up. This matters all the more because once people get used to IoT, it will be that much harder to give up the comfort and ease these systems provide, so it makes sense to be prepared for any eventuality. Much of the time, good and sensible usage alone can keep devices safe and services intact, but users should stay alert to any issues, because forewarned is forearmed.