# FactCheck - Digitally Altered Video of Olympic Medalist Arshad Nadeem’s Independence Day Message
Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day is going viral with claims that snoring audio can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited by adding the snoring sound in the background. The original video, published on Arshad Nadeem's Instagram account, has no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing Independence Day with snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly checked the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in either the voice or the face.


We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14 August 2024. In that video, no snoring sound can be heard.

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is false. CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In the digital era, where technology is growing rapidly, Artificial Intelligence (AI) has been making its way into every corner of the world. Technology and innovation move in tandem, and such innovation is once again in the limelight with a groundbreaking initiative known as “Project GR00T”, announced by the AI chip leader Nvidia. At the core of this project is the fusion of AI and robotics: humanoid robots capable of understanding natural language and learning from the physical environment by observing human actions and skills. Project GR00T aims to assist humans in diverse sectors such as healthcare.
These humanoid robots are based on NVIDIA’s Thor system-on-chip (SoC). Thor powers the intelligence of the robots; the chip is designed to handle complex tasks and ensure safe, natural interaction between humans and robots. However, big questions arise about the ethical considerations of privacy, autonomy, and the possible replacement of human workers.
Brief Analysis
Nvidia has announced Project GR00T, or Generalist Robot 00 Technology, which aims to create AI-powered humanoid robots with human-like understanding and movement. The project is part of Nvidia's efforts to drive breakthroughs in robotics and embodied AI, which can interact with and learn from a physical environment. The robots built on this platform are designed to understand natural language and emulate movements by observing human actions, such as coordination, dexterity, and other skills.
The model has been trained on NVIDIA GPU-accelerated simulation, enabling the robots to learn from human demonstrations with imitation learning and from the robotics platform NVIDIA Isaac Lab for reinforcement learning. This multimodal AI system acts as the mind for humanoid robots, allowing them to learn new skills and interact with the real world. Leading names in robotics, such as Figure, Boston Dynamics, Apptronik, Agility Robotics, Sanctuary AI, and Unitree, are reported to have collaborated with Nvidia to leverage GR00T.
Nvidia has also updated Isaac with Isaac Manipulator and Isaac Perceptor, which add multi-camera 3D vision. The company also unveiled Jetson Thor, a new computer for humanoid robots built on NVIDIA's Thor SoC, designed to handle complex tasks and ensure safe, natural interaction between humans and robots.
Despite concerns about job loss, many argue that humanoid robots, by handling hazardous and repetitive tasks, can aid humans and make their lives more comfortable rather than replace them.
Policy Recommendations
The Nvidia project highlights a significant development in AI robotics, presenting both immense potential and ethical challenges critical to the smooth assimilation of AI-driven tech in society. To ensure this, a comprehensive policy framework must be put in place. This includes:
- Human First Policy - Emphasis should be on augmentation rather than replacement. The authorities must focus on research and development (R&D) of applications that augment human capabilities, enhance working conditions, and contribute to societal growth.
- Proper Ethical Guidelines - Guidelines stressing human safety, autonomy and privacy should be established. These norms must include consent for data collection, fair use of AI in decision making and proper protocols for data security.
- Deployment of Inclusive Technology - Access to AI-driven robotics technology should be made available to diverse sectors of society. It is imperative to address potential algorithmic bias and design flaws to avoid discrimination and promote inclusivity.
- Proper Regulatory Frameworks - It is crucial to establish regulatory frameworks to govern the smooth deployment and operation of AI-driven tech. The framework must include certification for safety and standards, frequent audits and liability protocols to address accidents.
- Training Initiatives - Educational programs should be introduced to train the workforce in integrating and properly handling AI-driven robotics. Upskilling the workforce should be a top priority for corporations to ensure effective integration of AI robotics.
- Collaborative Research Initiatives - AI and emerging technologies have a profound impact on the trajectory of human development. It is imperative to foster collaboration among governments, industry and academia to drive innovation in AI robotics responsibly and undertake collaborative initiatives to mitigate and address technical, societal, legal and ethical issues posed by AI Robots.
Conclusion
On the whole, Project GR00T represents a quantum leap in the advancement of robotic technology and paves the way for a future where robots can integrate seamlessly into various aspects of human lives.
References
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-nvidias-project-gr00t-impact-robotics-9225089/
- https://medium.com/paper-explanation/understanding-nvidias-project-groot-762d4246b76d
- https://www.techradar.com/pro/nvidias-project-groot-brings-the-human-robot-future-a-significant-step-closer
- https://www.barrons.com/livecoverage/nvidia-gtc-ai-conference/card/nvidia-announces-ai-model-for-humanoid-robot-development-BwT9fewMyD6XbuBrEDSp

Introduction
Artificial Intelligence (AI) is fast transforming our digital future, reshaping healthcare, finance, education, and cybersecurity. But alongside this technology, bad actors are weaponising it. Increasingly, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI enables highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryear, these AI-generated messages are tailored to the victim's location, language, and professional background, increasing the attack's success rate considerably. Example: OpenAI and Microsoft have recently reported that Russian and North Korean APTs employed LLMs to craft customised phishing lures and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Large language models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While most AI tools contain safety mechanisms to guard against abuse, threat actors often resort to "jailbreaking" to evade these protections. Once such constraints are lifted, a model can be used to develop polymorphic malware that alters its code structure to avoid detection, or to obfuscate PowerShell or Python scripts so that conventional antivirus software struggles to identify them. LLMs have also been employed to propose techniques for backdoor installation, further facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation- State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially around elections, protests, and geopolitical disputes. With the help of LLMs, these actors can create massive amounts of fake news stories, deepfake interview transcripts, imitation social media posts, and bogus public comments on online forums and petitions. Content localisation makes this strategy especially perilous: messages are written with cultural and linguistic specificity, making them credible and harder to detect. The ultimate aim is to seed societal unrest, manipulate public sentiment, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report, "Disrupting Malicious Uses of AI", which, together with Microsoft's "Staying Ahead of Threat Actors in the Age of AI", outlined how state-affiliated actors had been testing and misusing its language models for malicious purposes. The report named a few advanced persistent threat (APT) groups, each attributed to a particular nation-state. OpenAI highlighted that the threat actors used the models mostly to enhance linguistic quality, generate social engineering content, and expand operations. Significantly, the report noted that the tools were not used to produce malware, but rather to support the preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, tricking them into generating prohibited content through adversarial input. Methods employed include:
- Roleplay: Asking the AI to act out a persona, such as a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Posing sensitive queries in less heavily moderated languages
- Prompt Injection: Embedding harmful requests within innocent-looking questions
These methods have enabled attackers to bypass moderation tools, turning otherwise benign tools into instruments of cybercrime.
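Keyword-based moderation illustrates why such bypasses work: a surface-level check matches literal strings, so rephrasing or switching languages (as listed above) slips past it. The sketch below is a deliberately simplistic toy, with an invented blocklist purely for illustration; real moderation stacks rely on trained classifiers and output-side checks, not string matching.

```python
# Toy blocklist, invented for this example only.
BANNED_PHRASES = {"write a phishing email", "generate malware"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt literally contains a banned phrase.

    Surface-level matching like this is exactly what the evasion
    techniques above defeat: the same request, rephrased or posed
    in another language, no longer matches any banned string.
    """
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

print(naive_filter("Please write a phishing email"))  # blocked: True
print(naive_filter("Rédige un e-mail de phishing"))   # language switching evades it: False
```

The failure of the second check shows why defence in depth, not keyword lists, is needed on moderated AI systems.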
Conclusion
As AI systems evolve and become more accessible, their use by state-sponsored cyber actors poses an unprecedented threat to global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a force multiplier for adversarial campaigns. AI tools such as ChatGPT, created for benevolent purposes, can be repurposed to amplify phishing, propaganda, and social engineering attacks. Cross-border governance, ethical development practices, and cyber hygiene must be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Introduction
The CID of Jharkhand Police has uncovered a network of around 8000 bank accounts engaged in cyber fraud across the state, with a surprising concentration in Deoghar district, which accounts for roughly 25% of the fraudulent accounts. In a recent meeting with bank officials, the CID shared the compiled data; 20% of the identified accounts were traced to State Bank of India branches. This revelation, surpassing even Jamtara's cyber fraud reputation, prompts questions about the extent of cybercrime in Jharkhand. Under Director General Anurag Gupta's leadership, the CID has registered 90 cases, apprehended 468 individuals, and seized 1635 SIM cards and 1107 mobile phones through the Prakharna portal to combat cybercrime.
Following this shocking revelation, Jharkhand Police's Criminal Investigation Department (CID) has built a comprehensive database of the roughly 8000 bank accounts tied to cyber fraud operations in the state. This vital information has aided the launch of investigations to identify the account holders implicated in these illegal activities, and the CID shared it with bank officials at a meeting on January 12 to speed up the identification process.
Background of the Investigation
A stunning 2000 of the 8000 bank accounts under investigation are in Deoghar district alone, with 20 per cent of these accounts connected to various State Bank of India branches. Surprisingly, Deoghar has exceeded even Jamtara, long notorious for cybercrime, accounting for around 25% of the bogus accounts discovered in the state.
As per information provided by the CID Crime Branch, most of the accounts are currently under investigation, and around 2000 have been blocked by the investigating agencies.
Recovery Process
During the investigation, it was found that most of these accounts were rented: cybercriminals opened them using phone numbers, Aadhaar cards, and identity documents obtained from people who, in return, receive a fixed amount every month as the account holders.
The CID has been unrelenting in its pursuit of cybercriminals. Police have recorded 90 cases and captured 468 people involved in cyber fraud using the Prakharna site. 1635 SIM Cards and 1107 mobile phones were confiscated by police officials during raids in various cities.
The Crime Branch has revealed the district-wise count of accounts opened:
- Deoghar: 2500
- Dhanbad: 1183
- Ranchi: 959
- Bokaro: 716
- Giridih: 707
- Jamshedpur: 584
- Hazaribagh: 526
- Dumka: 475
- Jamtara: 443
Impact on the Financial Institutions and Individuals
These cyber scams significantly influence financial organisations and individuals; let us investigate the implications.
- Victims: Cybercrime victims suffer significant financial setbacks, which can lead to long-term financial insecurity. In addition, people frequently suffer mental distress as a result of the breach of their personal information, which causes worry, fear, and a loss of faith in the digital financial system. One of the most difficult problems for victims is the recovery process, which includes retrieving lost funds and repairing the harm caused by the cyberattack. Individuals often find this process time-consuming and difficult; in many cases, people are unaware of where and how to seek help. Hence, awareness about cybercrime and a clear reporting mechanism are necessary to guide victims through the recovery process, aiding them in retrieving lost assets and repairing the harm inflicted.
- Financial Institutions: Financial institutions face direct consequences when they incur significant losses due to cyber financial fraud. Unauthorised account access, fraudulent transactions, and the compromise of client data result in immediate cash losses and costs associated with investigating and mitigating the breach's impact. Such assaults degrade the reputation of financial organisations, undermine trust, erode customer confidence, and result in the loss of potential clients.
- Future Implications and Solutions: Recently, the CID discovered a sophisticated cyber fraud network in Jharkhand. As a result, it is critical to assess the possible long-term repercussions of such discoveries and propose proactive ways to improve cybersecurity. The CID's findings are expected to increase awareness of the ongoing threat of cyber fraud to both people and organisations. Given the current state of cyber dangers, it is critical to implement rigorous safeguards and impose heavy punishments on cyber offenders. Government organisations and regulatory bodies should also adapt their present cybersecurity strategies to address the problems posed by modern cybercrime.
Solution and Preventive Measures
Several solutions can help combat the growing nature of cybercrime. The first and foremost step is to enhance cybersecurity education at all levels, including:
- Individual Level: To improve cybersecurity for individuals, raising awareness across all age groups is crucial. This can be done by understanding potential threats, following best online practices, maintaining cyber hygiene, and educating people to safeguard themselves against financial frauds such as phishing and smishing.
- Multi-Layered Authentication: Encouraging individuals to enable MFA for their online accounts adds an extra layer of security by requiring additional verification beyond passwords.
- Continuous Monitoring and Incident Response: Continuously monitor your financial transactions and regularly review online statements and transaction history to ensure that everyday transactions align with your expenditures, and set up account alerts for transactions exceeding a specified amount or for unusual activity.
- Report Suspicious Activity: If you notice any fraudulent transactions or activity, contact your bank or financial institution immediately; they will guide you through investigating and resolving the problem. Supply the necessary paperwork to support your claim.
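The monitoring advice above — compare everyday transactions against your expected spending and alert on amounts over a set limit — can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Transaction` type and the threshold value are invented for the example; in practice such alerts are configured with your bank, which runs them server-side):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    description: str
    amount: float  # transaction value in rupees (illustrative)

def flag_transactions(transactions, threshold):
    """Return the transactions whose amount exceeds the alert threshold,
    mimicking the 'alert me for transactions above a specified amount'
    feature described above."""
    return [t for t in transactions if t.amount > threshold]

history = [
    Transaction("groceries", 1200.0),
    Transaction("electronics purchase", 45000.0),
    Transaction("fuel", 900.0),
]
flagged = flag_transactions(history, threshold=10000.0)
print([t.description for t in flagged])  # → ['electronics purchase']
```

The design point is simply that a fixed threshold surfaces outliers automatically, so a fraudulent high-value transaction is noticed the day it happens rather than at month-end statement review.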
How to reduce the risks
- Freeze compromised accounts: If you think any of your accounts have been compromised, call the bank immediately and request that the account be frozen or temporarily suspended, preventing further unauthorised transactions.
- Update passwords: Regularly update your passwords for all financial accounts, email, and online banking. If you suspect any unauthorised access, report it immediately, and always enable MFA, which adds an extra layer of protection to your accounts.
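The MFA codes generated by authenticator apps are typically time-based one-time passwords (TOTP, RFC 6238): each code is derived from a shared secret and the current time, so a stolen password alone is not enough to log in. A minimal sketch of the standard algorithm, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6, now=None) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59, the 8-digit SHA-1 code is 94287082.
print(totp(b"12345678901234567890", digits=8, now=59))  # → 94287082
```

Both the server and the authenticator app run this same computation; a login succeeds only when the submitted code matches the server's, which is why enabling MFA blunts credential theft even after a password leak.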
Conclusion
The CID's discovery of a cyber fraud network in Jharkhand is a stark reminder of the ever-changing nature of cybersecurity threats. Strong cybersecurity measures are necessary to prevent such activities and protect individuals and institutions from cyber fraud. As the digital ecosystem continues to grow, it is important to stay vigilant and alert, both as individuals and as a society, and to actively participate in awareness activities to keep ourselves updated.
References
- https://avenuemail.in/cid-uncovers-alarming-cyber-fraud-network-8000-bank-accounts-in-jharkhand-involved/
- https://www.the420.in/jharkhand-cid-cyber-fraud-crackdown-8000-bank-accounts-involved/
- https://www.livehindustan.com/jharkhand/story-cyber-fraudsters-in-jharkhand-opened-more-than-8000-bank-accounts-cid-freezes-2000-accounts-investigating-9203292.html