#FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army”
Executive Summary:
A viral video circulating on social media falsely claims to show lawbreakers surrendering to the Indian Army. Verification shows that the video actually depicts a group surrendering to the Bangladesh Army and has no connection to India. The claim linking it to the Indian Army is false and misleading.

Claims:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed keyframes from the video using a Google Lens search. The search led us to credible Bangladeshi news sources, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.
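For readers interested in the mechanics, this kind of keyframe analysis can be scripted before uploading frames to a reverse image search tool such as Google Lens. Below is a minimal sketch in Python using OpenCV; it illustrates the general technique rather than our exact tooling, and the filename `viral_video.mp4` and the one-frame-per-second sampling rate are assumptions.

```python
# Minimal sketch: sample periodic keyframes from a video so each frame can be
# fed to a reverse image search tool (e.g. Google Lens). Illustrative only.
# Assumes OpenCV is installed (pip install opencv-python); filename is a placeholder.
import cv2

def extract_keyframes(video_path: str, out_prefix: str = "frame", every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))  # number of frames between samples
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                             # end of video (or read error)
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_video.mp4"), "keyframes saved")
```

Each saved frame can then be submitted to a reverse image search, which is how footage of this kind is traced back to its original broadcast.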

We further verified the video by cross-referencing it with official military and news reports from India. None of these sources supported the claim that the video involved the Indian Army. Instead, the footage was traced to coverage of the same surrender event by other Bangladeshi media outlets.

No credible Indian news outlet was found to have covered any such event involving the Indian Army. The viral video was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is actually footage from Bangladesh. The CyberPeace Research Team confirms that the video has been falsely attributed to India, making the claim false and misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading

Executive Summary:
A video has gone viral claiming to show the Hon'ble Minister of Home Affairs, Shri Amit Shah, stating that the BJP-led Central Government intends to end quotas for Scheduled Castes (SCs), Scheduled Tribes (STs), and Other Backward Classes (OBCs). On investigation, this claim turns out to be false: we found the original clip from an official source, and in the speech, delivered in Telangana, Shah spoke about religion-based reservations, with specific reference to Muslim reservations. The viral clip is a digitally altered video, and the claim is therefore false.

Claims:
A video alleging that the Hon'ble Minister of Home Affairs, Shri Amit Shah, declared that the reservation quotas for Scheduled Castes (SCs), Scheduled Tribes (STs), and Other Backward Classes (OBCs) would be terminated if a BJP government were formed again has gone viral on social media platforms.

English Translation: "If the BJP government is formed again, we will cancel ST, SC reservations" - Hon'ble Minister of Home Affairs, Shri Amit Shah


Fact Check:
When we received the video, we closely observed the name of the news channel shown in it: V6 News. We divided the video into keyframes and reverse-searched the images. For one of the keyframes, we found a similar video with the caption “Union Minister Amit Shah Comments Muslim Reservations | V6 Weekend Teenmaar”, uploaded by V6 News Telugu’s verified YouTube channel on April 23, 2023. Taking a cue from this, we also ran keyword searches for other relevant sources. In that video, at the 2:38 timestamp, the Hon'ble Minister of Home Affairs, Shri Amit Shah, speaks about religion-based reservations, calling the Muslim reservation unconstitutional and saying that the Government will remove it.

Further, he says that SC, ST, and OBC reservations will retain their full quota rights, but that the Muslim reservation will not.
While doing the reverse image search, we found many other videos uploaded by media outlets such as ANI, Hindustan Times, and The Economic Times about ending Muslim reservations in Telangana state, but no evidence supporting the viral claim of removing the SC, ST, and OBC quota system. On further analysis for signs of alteration, we found that the viral video had been edited and that the original conveys different information; one way to verify a match against original footage is the frame comparison sketched below. Hence, the video is misleading and false.
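To illustrate the kind of frame comparison referenced above (a sketch of the general technique, not the exact tooling we used), frames from a suspect clip can be checked against frames from the verified original using perceptual hashing: visually matching frames yield near-identical hashes, while edited or substituted segments diverge sharply. The example below uses the Python `Pillow` and `imagehash` libraries; the filenames and the distance threshold are illustrative assumptions.

```python
# Minimal sketch: compare a frame from a viral clip against a frame from the
# verified original using a perceptual hash (pHash). A small Hamming distance
# suggests the frames match; a large one suggests edited or substituted footage.
# Assumes: pip install pillow imagehash. Filenames are placeholders.
from PIL import Image
import imagehash

THRESHOLD = 8  # illustrative cut-off for a 64-bit pHash; tune per use case

def frames_match(viral_frame: str, original_frame: str) -> bool:
    h1 = imagehash.phash(Image.open(viral_frame))
    h2 = imagehash.phash(Image.open(original_frame))
    distance = h1 - h2  # Hamming distance between the two 64-bit hashes
    print(f"Hamming distance: {distance}")
    return distance <= THRESHOLD

if __name__ == "__main__":
    frames_match("viral_frame_0001.jpg", "original_frame_0001.jpg")
```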
Conclusion:
The video featuring the Hon'ble Minister of Home Affairs, Shri Amit Shah, announcing that the reservation quota system for SCs, STs, and OBCs would be removed if a new BJP government were formed in the ongoing Lok Sabha election stands debunked. On careful analysis, the video was found to be fake, created to misrepresent his actual statement. The original footage surfaced on the V6 News Telugu YouTube channel, in which he was speaking about religion-based reservations, particularly Muslim reservations in Telangana. Shri Amit Shah did not mention ending SC, ST, and OBC reservations.
- Claim: The viral video shows the Hon'ble Minister of Home Affairs, Shri Amit Shah, asserting that the BJP government will soon remove reservation quotas for Scheduled Castes (SCs), Scheduled Tribes (STs), and Other Backward Classes (OBCs).
- Claimed on: X (formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Over the past few years, the virtual space has become an irreplaceable livelihood platform for content creators and influencers, particularly on major social media platforms like YouTube and Instagram. Yet this growth in digital entrepreneurship has been accompanied by a worrying trend: a steep surge in account takeover (ATO) attacks against these creators. In recent years, cybercriminals have increased both the volume and the sophistication of such attacks, breaking into accounts, jeopardising follower bases, and inflicting economic and reputational damage. They don't just take over accounts to cause disruption; they use hijacked accounts to run scams such as fake livestreams and cryptocurrency fraud, spreading them while impersonating the original account owner. This type of cybercrime is no longer a nuisance; it now poses a serious threat to the creator economy, digital trust, and the wider social media ecosystem.
Why Are Content Creators Prime Targets?
Content creators hold a special place on the web. They are prominent users whose livelihoods depend on visibility, public trust, and ongoing engagement with their followers. Their social media footprint tends to extend across several interrelated platforms, e.g., YouTube, Instagram, and X (formerly Twitter), with many of these accounts sharing similar login credentials or being managed from the same email account. This interconnected presence streamlines their workflow but makes them appealing targets for hackers: one entry point can expose a whole chain of vulnerabilities. Once attackers control an account, they can wield its influence and reach to push scams, direct followers to phishing sites, or spread malware, all under the cover of a trusted name.
Popular Tactics Used by Attackers
- Malicious Livestream Takeovers and Rebranding - Cybercriminals hijack high-subscriber channels and rebrand them to mimic official channels. Original videos are hidden or deleted and replaced with scam streams that use deepfake personas to promote crypto schemes.
- Fake Sponsorship Offers - Creators receive emails from supposed sponsors that contain malware-infected attachments or malicious download links, leading to credential theft.
- Malvertising Campaigns - These involve fake ads on social platforms promoting exclusive software like AI tools or unreleased games. Victims download malware that searches for stored login credentials.
- Phishing and Social Engineering on Instagram - Hackers impersonate Meta support teams via DMs and emails. They direct creators to login pages that are cloned versions of Instagram's site. Others pose as fans to request phone numbers and trick victims into revealing password reset codes.
- Timely Exploits and Event Hijacking - During major public or official events, attackers often escalate their activity. Hijacked accounts are used to promote fake giveaways or exclusive live streams, luring users to malicious websites designed to steal personal information or financial data.
Real-World Impact and Case Examples
The impact of account takeover attacks on content creators is far-reaching and profound. A 2024 Bitdefender report identified over 9,000 malicious livestreams on YouTube in a single year, many broadcast from hijacked creator accounts repurposed to promote scams and fake content. In perhaps the most high-profile incident, a channel with more than 28 million subscribers and 12.4 billion total views was completely taken over and used to livestream a crypto fraud scheme. Bitdefender research also found that cybercriminals used over 350 scam domains, linked directly from hijacked social media accounts, to lure followers into phishing scams and bogus investment opportunities. Much of this content featured AI-generated deepfakes impersonating recognisable personalities such as Elon Musk, lending an illusion of authenticity to fake endorsements (CCN, 2024). Attackers have also exploited popular digital events, such as Counter-Strike 2 (CS2) esports tournaments, by hijacking YouTube gaming channels and livestreaming fake giveaways or directing viewers to imitation betting sites.
Protective Measures for Creators
- Enable Multi-Factor Authentication (MFA)
Adds an essential layer of defence: even if a password is compromised, attackers can't log in without the second factor. Prefer app-based or hardware-token authentication; a minimal TOTP sketch appears after this list.
- Scrutinise Sponsorships
Verify sender domains and avoid opening suspicious attachments. Use sandbox environments to test files. In case of doubt, verify collaboration opportunities through official company sources or verified contacts.
- Monitor Account Activity
Keep tabs on login history, new uploads, and connected apps. Configure alerts for suspicious login attempts or spikes in activity to detect breaches early.
- Educate Your Team
If your account is managed by editors or third parties, train them on common phishing and malware tactics. Employ regular refresher sessions and send mock phishing tests to reinforce awareness.
- Use Purpose-Built Security Tools
Specialised security solutions offer features like account monitoring, scam detection, guided recovery, and protection for team members. These tools can also help identify suspicious activity early and support a quick response to potential threats.
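To make the MFA recommendation above concrete, here is a minimal sketch of how app-based time-based one-time passwords (TOTP), the mechanism behind authenticator apps, work. It uses the Python `pyotp` library; the account name, issuer, and on-the-spot secret are illustrative assumptions (in practice the secret is provisioned once, e.g. via a QR code, and stored server-side).

```python
# Minimal sketch of app-based MFA using time-based one-time passwords (TOTP),
# the mechanism behind authenticator apps such as Google Authenticator.
# Assumes: pip install pyotp. Account name and issuer are hypothetical.
import pyotp

secret = pyotp.random_base32()  # per-account shared secret (normally stored server-side)
totp = pyotp.TOTP(secret)

# Provisioning URI a creator would scan into an authenticator app as a QR code:
print(totp.provisioning_uri(name="creator@example.com", issuer_name="ExampleStudio"))

code = totp.now()               # 6-digit code that rotates every 30 seconds
print("Current code:", code)

# At login, a stolen password alone is not enough; the current code must also verify:
print("Verified:", totp.verify(code))  # True only within the short validity window
```

The design point is that the secret never travels with the password: even a phished password fails without the rotating code from the creator's device.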
Conclusion
Account takeover attacks are no longer random events; they are systemic risks that compromise the financial well-being and personal safety of creators all over the world. As cybercriminals grow increasingly sophisticated and their scams more realistic, the only viable response is a security-first approach: a mix of technical controls, platform-level collaboration, education, and investment in creator-centric cybersecurity technologies. In today's fast-paced digital landscape, creators need to think not only about content but also about defending their digital identity. As digital platforms continue to grow, so do the threats targeting creators. With the right awareness, tools, and safeguards in place, however, a secure and thriving digital environment for creators is entirely achievable.
References
- https://www.bitdefender.com/en-au/blog/hotforsecurity/account-takeover-attacks-on-social-media-a-rising-threat-for-content-creators-and-influencers
- https://www.arkoselabs.com/account-takeover/social-media-account-takeover/
- https://www.imperva.com/learn/application-security/account-takeover-ato/
- https://www.security.org/digital-safety/account-takeover-annual-report/
- https://www.niceactimize.com/glossary/account-takeover/

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand: a technological advancement with the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that revealed the extent to which personal data could be harvested and used to influence electoral outcomes. Yet despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. Even the limited use of generative AI in disinformation campaigns so far has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad, run for AT&T, which boasted an astounding 44% click-through rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The incentives at stake are substantial: the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even to influence significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when the manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards to manage the possible threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users that they are automated and what purpose they serve. Such transparency ensures that people know they are interacting with an automated system.
Second, obtaining user consent is crucial. Before collecting user data for any purpose, including advertising or political profiling, chatbots should seek informed consent. Simple opt-in and opt-out mechanisms give consumers control over their data; a minimal sketch of such a disclosure-and-consent gate appears at the end of this section.
Furthermore, ethical use is essential. An ethics code for chatbot interactions should forbid manipulation, the dissemination of false information, and attempts to sway users' political opinions. This helps ensure that chatbots adhere to moral guidelines.
To preserve transparency and accountability, independent audits need to be carried out. Users can feel more confident knowing that chatbot behavior and data collection procedures are regularly audited by impartial third parties for compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should never impersonate real people or political figures, as this can result in manipulation and misinformation.
Impartiality is essential. Bots should not advocate for or take part in political activities that give preference to one political party over another; interactions must remain impartial and fair.
Finally, invasive advertising techniques should be avoided. Chatbots should not display political advertisements or messaging without explicit user consent, ensuring their advertising tactics comply with legal norms.
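To ground the transparency and consent points above, here is a minimal, purely illustrative sketch of a chatbot wrapper that discloses its automated nature up front and retains conversation data only after an explicit opt-in. All names and the in-memory storage are hypothetical assumptions, not a description of any real platform's behaviour.

```python
# Illustrative sketch only: a chatbot wrapper enforcing up-front disclosure and
# opt-in consent before any conversation data is retained. Names are hypothetical.
from dataclasses import dataclass, field

DISCLOSURE = (
    "Notice: you are chatting with an automated system. Conversations are "
    "retained and used for personalisation ONLY if you explicitly opt in."
)

@dataclass
class ConsentAwareChatbot:
    user_id: str
    consented: bool = False
    _log: list = field(default_factory=list)  # retained messages (post-opt-in only)

    def greet(self) -> str:
        return DISCLOSURE                     # transparency: disclose automation first

    def opt_in(self) -> None:
        self.consented = True                 # explicit, user-initiated consent

    def opt_out(self) -> None:
        self.consented = False
        self._log.clear()                     # revoking consent purges stored data

    def reply(self, message: str) -> str:
        if self.consented:
            self._log.append(message)         # store only with consent
        return f"(automated reply to: {message!r})"

bot = ConsentAwareChatbot(user_id="u123")
print(bot.greet())
print(bot.reply("Hello"))         # nothing retained: the user has not opted in
bot.opt_in()
print(bot.reply("Tell me more"))  # retained, with the user's explicit consent
```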
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI, covering safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india