#FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army"
Executive Summary:
A video has circulated virally on social media, claiming to show lawbreakers surrendering to the Indian Army. However, our verification shows that the footage depicts a group surrendering to the Bangladesh Army and has no connection to India. The claim linking it to the Indian Army is false and misleading.

Claims:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed the keyframes of the video through Google Lens search. The search directed us to credible news sources in Bangladesh, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.

We further verified the video by cross-referencing it with official military and news reports from India. None of these sources supported the claim that the video involved the Indian Army. Instead, the footage matched coverage of the same event by Bangladeshi media outlets.

No credible Indian news outlet was found to have covered the event shown in the video. The viral clip was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is in fact footage from Bangladesh. The CyberPeace Research Team confirms that the video has been falsely attributed to India, making the claim false and misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading

Introduction
In the face of escalating cybercrime in India, criminals are adopting increasingly inventive methods to deceive victims. Imagine opening your phone to a message from a stranger with a friendly introduction: a beginning that appears harmless, but marks the start of a financial nightmare. This is the "Pig Butchering" scam, an increasingly sophisticated form of deception that is gaining widespread traction. Unlike most scams, this one plays a long game, spinning a web of trust before it strikes: a modern-day financial thriller playing out in the real world, with real victims. The scam involves building trust through fake profiles and manipulating victims emotionally in order to extort money. The scale of such scams has raised serious concerns, emphasising the need for awareness and vigilance in the face of evolving cyber threats.
How does 'Pig Butchering' Scam Work?
At its core, the scam starts innocuously, often with a stranger reaching out via text, social media, or apps like WhatsApp or WeChat. The scammer, hiding behind a well-crafted and realistic online persona, seeks to forge a connection. This could be under the pretence of friendship or romance, employing fake photos and stories to seem authentic. Gradually, the scammer builds a rapport, engaging in personal and often non-financial conversations. They may portray themselves as a widow, single parent, or even a military member to evoke empathy and trust. Over time, this connection pivots to investment opportunities, with the scammer presenting lucrative tips or suggestions in stocks or cryptocurrencies. Initially, modest investments are encouraged, and falsified returns are shown to lure in larger sums. Often, the scammer claims affiliation with a profitable financial institution or success in cryptocurrency trading. They direct victims to specific, usually fraudulent, trading platforms under their control. The scam reaches its peak when significant investments are made, only for the scammer to manipulate the situation, block access to the trading platform, or vanish, leaving the victim with substantial losses.
Real-Life Examples and Global Reach
These scams are not confined to one region. In India, for instance, scammers use emotional manipulation, often starting with a WhatsApp message from an unknown, attractive individual. They pose as professionals offering part-time jobs, leading victims through tasks that escalate in investment and complexity. These usually culminate in cryptocurrency investments, with victims unable to withdraw their funds; the money is often traced to accounts in Dubai.
In the West, several cases highlight the scam's emotional and financial toll:
- A Michigan woman was lured by an online boyfriend who claimed to make money from gold trading. She invested through a fake brokerage, losing money while remaining emotionally entangled.
- A Canadian man named Sajid Ikram lost nearly $400,000 in a similar scam, initially misled by a small successful withdrawal.
- In California, a man lost $440,000 after succumbing to pressure to invest more, including retirement savings and borrowed money.
- A Maryland victim faced continuous demands from scammers, losing almost $1.4 million in the hope of recovering previous losses.
- In a notable case, US authorities seized about $9 million in cryptocurrency linked to a global pig butchering operation, showcasing its extensive reach.
Safeguarding Against Such Scams
Vigilance is crucial to avoid falling victim to these scams. Be sceptical of unsolicited contacts and wary of investment advice from strangers. Conduct thorough research before any financial engagement, particularly on unfamiliar platforms. The Indian Cyber Crime Coordination Centre warns of red flags such as sudden large virtual currency transactions, interest in high-return investments mentioned by new online contacts, and atypical customer behaviour.
Victims should report incidents through official Indian and foreign reporting portals, and in the US to the Securities and Exchange Commission. Financial institutions are advised to report suspicious activities related to these scams. In essence, the pig butchering scam is a cunning blend of emotional manipulation and financial fraud, and staying informed and cautious is key to avoiding these sophisticated traps.
Conclusion
Pig butchering scams are one of many new breeds of emerging cyber scams that pose a serious challenge for cybersecurity organisations. It is imperative for netizens to stay vigilant and well-informed about the dynamics of cyberspace and emerging cybercrimes.
References
- https://www.sentinelassam.com/more-news/national-news/from-impersonating-cbi-officers-to-pig-butchering-cyber-criminals-get-creative
- https://hiindia.com/from-impersonating-cbi-officers-to-pig-butchering-cyber-criminals-get-creative/
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images had appeared on the dark web. The UK's National Crime Agency records 800 arrests a month for online threats to children and estimates that 840,000 adults are potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will form part of the Crime and Policing Bill when it comes before Parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The UK's proposed law criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, as amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report

Introduction
Airline hoax threats are fabricated alarms targeting airlines and airports, intended to disrupt normal day-to-day operations and create public panic. Public settings demand the highest levels of security, which also makes them vulnerable targets. The consequences of such threats include financial losses for the parties concerned, heightened security protocols both immediately after an incident and in preparation for future ones, flight delays and diversions, emergency landings, and passenger inconvenience and emotional distress. The motivation behind such threats is malicious intent of varying degrees, breaching national security, integrity and safety. However, alongside the government, airline operators and social media authorities, which already have certain measures in place to tackle such issues, the public has an equal role in preventing the spread of misinformation and panic, through responsible consumption and verified sharing of information.
Hoax Airline Threats
The recent spate of bomb hoax threats to Indian airlines has seen false threat reports against over 500 flights since 14 October 2024, the majority traced to social media handles that are either anonymous or unverified. Recent incidents include a hoax threat posted on X on 30 October 2024 against an Air India flight from Delhi to Mumbai via Indore, and another on 2 November 2024 against a flight from Kathmandu, Nepal, to Delhi.
As per reports by the Indian Express, steps are being taken to address such incidents by tweaking the assessment criteria for bomb threats, and authorities such as the Bomb Threat Assessment Committees (BTAC) are being selective in categorising threats as specific or non-specific. Other factors considered include whether a VIP is on board and whether the threat was posted from an anonymous account or one with a history of similar posts.
CyberPeace Recommendations
- For Public
- Question sensational information: The public should scrutinise the information they are consuming, not only to keep themselves safe but also to act responsibly towards other citizens. Exercise caution before sharing alarming messages, posts and other pieces of information.
- Recognising credible sources: Rely only on trustworthy, verified sources when sharing information, especially when it comes to topics as serious as airline safety.
- Avoiding Reactionary Sharing: Sharing in a state of panic can contribute to the chaos created upon receiving unverified news, hence, it is suggested to refrain from reactionary sharing.
- For the Authorities & Agencies
- After a series of hoax bomb threats, the Government of India issued an advisory to social media platforms directing them to make efforts to remove such malicious content. Obligations such as the prompt removal of harmful content, or disabling access to such unlawful information, are specified under the IT Rules, 2021. Platforms are also obligated under the Bharatiya Nagarik Suraksha Sanhita, 2023 to report certain offences occurring on their services. The Ministry of Civil Aviation's action plan includes proposals to designate hoax bomb threats as a cognisable offence and to place offenders on a no-fly list, among other measures.
These plans also include steps such as :
- Introduction of other corrective measures to be taken against bad actors (similar to the no-fly list).
- Introduction of a reporting mechanism which is specific to such threats.
- Focus on promoting awareness, digital literacy, critical thinking and fact-checking resources, as well as encouraging the public to report such hoaxes.
Conclusion
Preventing the spread of airline threat hoaxes is a collective responsibility, involving public engagement and ownership to strengthen safety measures and build trust in the overall safety ecosystem (here: airline agencies, government authorities and the public). As the government and agencies take measures to prevent such incidents, the public should continue to share information only from, and on, verified and trusted portals. The public is encouraged to remain vigilant and responsible while consuming and sharing information.
References
- https://indianexpress.com/article/business/flight-bomb-threats-assessment-criteria-serious-9646397/
- https://www.wionews.com/world/indian-airline-flight-bound-for-new-delhi-from-nepal-receives-hoax-bomb-threat-amid-rise-in-similar-incidents-772795
- https://www.newindianexpress.com/nation/2024/Oct/26/centre-cautions-social-media-platforms-to-tackle-misinformation-after-hoax-bomb-threat-to-multiple-airlines
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/amid-rising-hoax-bomb-threats-to-indian-airlines-centre-issues-advisory-to-social-media-companies/articleshow/114624187.cms