#FactCheck - Viral ‘Army Jump Accident’ Video Is AI-Generated
Executive Summary
A video is being widely shared on social media showing a man in an army uniform jumping from a height, losing balance mid-air, and appearing to meet with an accident. The clip is being circulated as a real-life incident. However, research by CyberPeace found the claim to be false: the viral video is not real but AI-generated.
Claim
On social media platform Facebook, a user shared the video with a caption suggesting it shows a real accident, warning against risky stunts.
- https://archive.ph/BH6dl#selection-347.0-347.122
- https://www.facebook.com/ashok.yadav.9041083/posts/1593460528549619/

Fact Check
To verify the claim, we conducted a reverse image search using Google Lens but found no credible news reports or official sources mentioning such an incident. A closer look at the video revealed several inconsistencies commonly associated with AI-generated content. For instance, the person appears to disappear momentarily while falling, the head is not clearly visible after impact, and the background audio seems unnatural. We further analyzed the video using AI detection tools. On Hive Moderation, the video showed a 99.2% probability of being AI-generated.

Additionally, analysis using Sightengine indicated a 98% likelihood that the video was synthetically created.

Conclusion
The viral claim is false. The video does not depict a real incident but is an AI-generated clip. It has been shared with a misleading narrative, and there is no evidence to support the claim that it shows an actual accident.
As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. OpenAI recently released two new models, o3 and o4-mini, which differ from earlier versions as they focus more on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: the probability of hallucinations can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This trend has been especially prevalent in reasoning-based models, which generate answers step-by-step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
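The "predict the next word from statistical patterns" idea can be illustrated with a deliberately simplified toy bigram model. This sketch is only an analogy: real LLMs use neural networks over subword tokens trained on billions of documents, and the corpus here is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed frequency."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The model has no notion of truth: it only mirrors the statistics
# of its training text, which is why fluent output can still be wrong.
print(next_word("the"))  # one of: cat, mat, fish
```

The key point the sketch makes is that the model always produces *something* plausible-looking for any prompt, whether or not a factually correct continuation exists.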
Reports on TechCrunch mention that when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This was not limited to one particular Large Language Model: similar models such as DeepSeek showed the same behaviour. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other critical events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, it helps that developers are already aware of such instances and are actively charting out ways to reduce the probability of this error. Some of them are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data.
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
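The RAG approach described above can be sketched in a few lines. This is a minimal illustration only: the document store, the naive keyword-overlap scoring, and the prompt format are all invented placeholders, not any specific vendor's API, and production systems use vector embeddings rather than word overlap.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# Documents and scoring are illustrative, not a real retrieval API.
documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Python 3.12 was released in October 2023.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive keyword overlap with the query."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query):
    # Anchor the model's answer in retrieved text instead of relying
    # on its internal (possibly hallucinated) knowledge.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQ: {query}"

print(build_prompt("How tall is the Eiffel Tower?"))
```

The prompt that reaches the model now carries verifiable source text, which is what "anchoring responses in verifiable data" means in practice.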
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
The CID of Jharkhand Police has uncovered a network of around 8,000 bank accounts engaged in cyber fraud across the state, with a surprising concentration in Deoghar district, which accounts for about 25% of the fraudulent accounts. In a recent meeting with bank officials, the CID shared the compiled data, with 20% of the identified accounts traced to State Bank of India branches. This revelation, surpassing even Jamtara's cyber fraud reputation, prompts questions about the extent of cybercrime in Jharkhand. Under Director General Anurag Gupta's leadership, the CID has registered 90 cases, apprehended 468 individuals, and seized 1,635 SIM cards and 1,107 mobile phones through the Prakharna portal to combat cybercrime.
Following this revelation, the Jharkhand Police's Criminal Investigation Department (CID) has built a comprehensive database comprising information on the roughly 8,000 bank accounts tied to cyber fraud operations in the state. This vital information has aided the launch of investigations to identify the account holders implicated in these illegal activities.
Background of the Investigation
The CID shared the collated material with bank officials in a meeting on 12 January 2024 to expedite the identification process. Around 2,000 of the 8,000 bank accounts under investigation are in Deoghar district alone, with 20 per cent of these accounts connected to various State Bank of India branches. Surprisingly, Deoghar has exceeded even Jamtara, a district once synonymous with cybercrime, accounting for around 25% of the discovered bogus accounts in the state.
As per information provided by the CID Crime Branch, most of the accounts opened in banks are currently under investigation, and around 2,000 have been blocked by the investigating agencies.
Recovery Process
During the investigation, it was found that most of these accounts were "rented": cybercriminals opened them using phone numbers, Aadhaar cards, and identity documents obtained from people who, in return, receive a fixed amount every month as the nominal account holders.
The CID has been unrelenting in its pursuit of cybercriminals. Police have recorded 90 cases and captured 468 people involved in cyber fraud using the Prakharna site. 1635 SIM Cards and 1107 mobile phones were confiscated by police officials during raids in various cities.
The Crime Branch has revealed the cities where the accounts were opened:
- Deoghar: 2,500
- Dhanbad: 1,183
- Ranchi: 959
- Bokaro: 716
- Giridih: 707
- Jamshedpur: 584
- Hazaribagh: 526
- Dumka: 475
- Jamtara: 443
Impact on the Financial Institutions and Individuals
These cyber scams significantly affect financial organisations and individuals; let us examine the implications.
- Victims: Cybercrime victims suffer significant financial setbacks, which can lead to long-term financial insecurity. In addition, people frequently suffer mental distress as a result of the breach of personal information, which causes worry, fear, and a loss of faith in the digital financial system. One of the most difficult problems for victims is the recovery process, which includes retrieving lost funds and repairing the harm caused by the cyberattack. Individuals often find this process time-consuming and difficult, and in many cases people are unaware of where and how to seek help. Hence, awareness of cybercrimes and reporting mechanisms is necessary to guide victims through the recovery process, aiding them in retrieving lost assets and repairing the harm inflicted by cyberattacks.
- Financial Institutions: Financial institutions face direct consequences when they incur significant losses due to cyber financial fraud. Unauthorised account access, fraudulent transactions, and the compromise of client data result in immediate cash losses and costs associated with investigating and mitigating the breach's impact. Such assaults degrade the reputation of financial organisations, undermine trust, erode customer confidence, and result in the loss of potential clients.
- Future Implications and Solutions: Recently, the CID discovered a sophisticated cyber fraud network in Jharkhand. As a result, it is critical to assess the possible long-term repercussions of such discoveries and propose proactive ways to improve cybersecurity. The CID's findings are expected to increase awareness of the ongoing threat of cyber fraud to both people and organisations. Given the current state of cyber dangers, it is critical to implement rigorous safeguards and impose heavy punishments on cyber offenders. Government organisations and regulatory bodies should also adapt their present cybersecurity strategies to address the problems posed by modern cybercrime.
Solution and Preventive Measures
Several solutions can help combat the growing threat of cybercrime. The first and foremost step is to enhance cybersecurity education at all levels, including:
- Individual Level: To improve cybersecurity for individuals, raising awareness across all age groups is crucial. This involves understanding potential threats, following best online practices and cyber hygiene, and educating people to safeguard themselves against financial frauds such as phishing and smishing.
- Multi-Layered Authentication: Encouraging individuals to enable MFA for their online accounts adds an extra layer of security by requiring additional verification beyond passwords.
- Continuous Monitoring and Incident Response: Continuously monitor your financial transactions and regularly review online statements and transaction history to ensure everyday transactions align with your actual expenditure, and set up account alerts for transactions exceeding a specified amount or for unusual activity.
- Report Suspicious Activity: If you see any fraudulent transactions or activity, contact your bank or financial institution immediately; they will lead you through investigating and resolving the problem. The victim must supply the necessary paperwork to support their claim.
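The multi-factor authentication recommended above commonly relies on time-based one-time passwords (TOTP, as standardised in RFC 6238), the rotating 6-digit codes shown by authenticator apps. A minimal sketch of how such a code is derived, using only the Python standard library (the shared secret below is a well-known test value, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.digest(key, struct.pack(">Q", counter), "sha1")
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The bank and the user's phone share the secret once during setup;
# afterwards both derive the same short-lived code independently, so
# a stolen password alone is not enough to log in.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # RFC test secret
print(totp(secret, at=59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret that never travels over the network at login time, fraudsters who phish only the password are locked out.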
How to reduce the risks
- Freeze compromised accounts: If you think any of your accounts have been compromised, call the bank immediately and request that the account be frozen or temporarily suspended, preventing further unauthorised transactions.
- Update passwords: Regularly update your passwords for all financial accounts, email, and online banking. If you suspect any unauthorised access, report it immediately, and always enable MFA, which adds an extra layer of protection to your accounts.
Conclusion
The CID's discovery of a cyber fraud network in Jharkhand is a stark reminder of the ever-changing nature of cybersecurity threats. Strong cybersecurity measures are necessary to prevent such activities and protect individuals and institutions from cyber fraud. As the digital ecosystem continues to grow, it is important to stay vigilant and alert, both as individuals and as a society. We should actively participate in awareness activities to keep ourselves updated.
References
- https://avenuemail.in/cid-uncovers-alarming-cyber-fraud-network-8000-bank-accounts-in-jharkhand-involved/
- https://www.the420.in/jharkhand-cid-cyber-fraud-crackdown-8000-bank-accounts-involved/
- https://www.livehindustan.com/jharkhand/story-cyber-fraudsters-in-jharkhand-opened-more-than-8000-bank-accounts-cid-freezes-2000-accounts-investigating-9203292.html
Introduction
Autonomous transportation, smart cities, remote medical care, and immersive augmented reality are just a few of the revolutionary applications made possible by the global rollout of 5G technology. However, along with this revolution in connectivity, a record-breaking rise in vulnerabilities and threats has emerged, driven by software-defined networks, growing attack surfaces, and increasingly complex networks. As work on next-generation 6G networks accelerates, with commercialisation expected around 2030, security issues are piling up, including those related to AI-driven networks, terahertz communications, and quantum computing attacks. For a nation like India, poised to become a global technological leader, securing next-generation networks is not merely a technical necessity but a strategic imperative. Initiatives such as the India-UK collaboration on telecom security in recent years underscore how international alliances have become essential to addressing these challenges.
Why Cybersecurity in 5G and 6G Networks is Crucial
With the launch of global 5G services and the rapid introduction of 6G technologies, the telecom sector is seeing a fundamental transformation. Besides expanding connectivity, future networks are also creating the building blocks for networked and highly intelligent environments. With its ultra-high speed of 10 Gbps, network slicing, and ultra-low latency, 5G provides new capabilities that are perfectly suited for mission-critical applications such as telemedicine, autonomous vehicles, and industrial IoT. Sixth-generation wireless technology is still in development and is expected to be approximately one hundred times faster than fifth-generation networks. These advances, however, bring drawbacks and challenges:
- Decentralised Infrastructure (edge computing nodes): Increased number of entry points for attack.
- Virtual Network Functions (VNFs): Greater vulnerability to configuration issues and software exploitation.
- Billions of IoT devices with different security states, thus forming networks that are more difficult to secure.
Although these challenges are unparalleled, the advancement in technology also creates new opportunities.
Understanding the Cyber Threat Landscape for 5G and 6G
The move to 5G and the upgrade to 6G open great opportunities but also open doors to new cybersecurity risks. Open RAN adoption offers flexibility and vendor choice but exposes the supply chain to untested third-party components and attacks. Vulnerabilities in the Service-Based Architecture (SBA) can be exploited to disrupt vital network services, resulting in outages or data breaches. Similarly, widespread adoption of edge computing to reduce latency creates multiple entry points for attackers. Compounding the problem is the explosion of IoT device connections through 5G, which, if breached, can fuel botnets capable of conducting massive distributed denial-of-service (DDoS) attacks.
Challenges in 6G
- AI-Powered Cyberattacks: AI-native 6G networks are susceptible to adversarial machine learning attacks and data/model poisoning, affecting both security functions and traffic optimisation.
- Quantum Threats: Post-quantum cryptography may be required if quantum computing renders current encryption algorithms outdated.
- Privacy Concerns with Digital Twins: Digital twins in 6G, real-time virtual replicas of the physical world, may create enormous privacy and data protection issues alongside their benefits.
- Cross-Border Data Flow Risks: Secure interoperability frameworks and standardised data sovereignty are essential for the worldwide rollout of 6G.
A Critical Step Toward Secure Telecom: The India-UK Partnership
India's recent foray with the UK reflects its active role in shaping the future of telecom security. Major points of the UK-India Telecom Roundtable are:
- MoU between SONIC Labs and C-DOT: Dedicated to Open RAN and AI integration security in 4G/5G deployments. This will offer supply chain diversity without sacrificing resilience.
- Research Partnerships for 6G: Partnerships with UK institutions like CHEDDAR (Cloud & Distributed Computing Hub) and the University of Glasgow 6G Research Centre are focused on developing AI-driven network security solutions, green 6G, and quantum-resistant design.
- Telecom Cybersecurity Centres of Excellence: Constructing two-way CoEs for telecom cybersecurity, ethical AI, and digital twin security models.
- Standardisation Efforts: Joint contribution to the ITU for the creation of IMT-2030 standards, ensuring that cybersecurity-by-design principles are integrated into worldwide 6G specifications.
- Future Initiatives:
- Application of privacy-enhancing technologies (PETs) for cross-sectoral data usage.
- Secure quantum communications to be used for satellite and submarine cable connections.
- Encouragement of native telecommunication stacks for strategic independence.
Global Policy and Regulatory Aspects
- India's Bharat 6G Vision: India aims to lead the global standardisation process through the Bharat 6G Alliance, with a vision of inclusive, secure, and sustainable connectivity.
- International Harmonisation:
- 3GPP and ITU's joint effort towards standardisation of 6G security.
- Cross-border privacy and cybersecurity compliance system designs to enable secure flows of data.
- Cyber Diplomacy for Telecom Security: Cross-border sharing of information architectures, threat intelligence sharing, and coordinated incident response schemes are essential to 6G security resilience globally.
Building a Secure and Resilient Future for 5G and 6G
Establishing a safe and future-proof 5G and 6G environment should be an end-to-end effort involving governments, industry, and technology vendors. Security should be integrated into the underlying architecture of the networks and not an afterthought feature to be optionally provided. Active engagement in international bodies to establish homogeneous security and privacy standards across geographies is also required. Public-private partnerships, including academia partnerships, will be the driver for innovation and the creation of advanced protection mechanisms. Simultaneously, creating a competent talent pool to manage AI-based threat analysis, quantum-resistant cryptography, and next-generation cryptographic methods will be required to combat the advanced menace of new telecom technologies.
Conclusion
With 6G on the way and 5G technologies already changing global connections, cybersecurity needs to remain a key focus. The partnership between India and the UK serves as an example of why the safe rise of tomorrow's networks depends on global collaboration, AI-driven security measures, and quantum preparedness. The world can unleash the transformative potential of 5G and 6G by combining security by design, supporting international standards, and encouraging innovation through cooperation. The result will be an online future that is not only fast and egalitarian but also solid and trustworthy.
References:
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2105225
- https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx
- https://dot.gov.in/sites/default/files/Bharat%206G%20Vision%20Statement%20-%20full.pdf
- https://www.gsma.com/solutions-and-impact/technologies/security/wp-content/uploads/2024/07/FS.40-v3.0-002-19-July.pdf