#FactCheck: Viral video claims BSF personnel thrashed a person selling the Bangladesh national flag in West Bengal
Executive Summary:
A video circulating online claims to show a man being assaulted by BSF personnel in India for selling Bangladesh flags at a football stadium. The footage has stirred strong reactions and cross-border concerns. However, our research confirms that the video was not filmed in India and is unrelated to any incident there. The content has been wrongly framed and shared with misleading claims, misrepresenting the actual incident.
Claim:
It is being claimed in a viral post on social media that a Border Security Force (BSF) soldier physically attacked a man in West Bengal, India, for allegedly selling the national flag of Bangladesh. The viral video further implies that the incident reflects political hostility towards Bangladesh within Indian territory.

Fact Check:
After conducting thorough research, including visual verification, reverse image searches, and checks of elements visible in the video's background, we determined that the video was filmed outside Bangabandhu National Stadium in Dhaka, Bangladesh, during the crowd build-up before an AFC Asian Cup match between Bangladesh and Singapore.

A second layer of research confirmed that the man seen being assaulted is a local flag-seller named Hannan. Eyewitness accounts and local news sources indicate that Bangladesh Army officials were present to manage the crowd that day. During the crowd-control effort, a soldier used excessive force against the vendor. The incident sparked outrage, and the Army responded by identifying the soldier responsible and taking disciplinary measures. The victim was reportedly offered compensation for the misconduct.

Conclusion:
Our research confirms that the viral video does not depict any incident in India. The claim that a BSF officer assaulted a man for selling Bangladesh flags is completely false and misleading. The real incident occurred in Bangladesh and involved a Bangladesh Army soldier during crowd control at a football event. This case highlights the importance of verifying viral content before sharing, as misinformation can lead to unnecessary panic, tension, and international misunderstanding.
- Claim: BSF personnel thrashed a person selling the Bangladesh national flag in West Bengal
- Claimed On: Social Media
- Fact Check: False and Misleading

What Is a VPN and Its Significance
A Virtual Private Network (VPN) creates a secure and reliable network connection between a device and the internet. It hides your IP address by rerouting your traffic through the VPN's host servers. For example, if you connect to a US server, you appear to be browsing from the US, even if you're in India. It also encrypts the data being transferred in real time so that it cannot be deciphered by third parties such as ad companies, governments, cybercriminals, or others.
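To make the IP-masking effect concrete, here is a minimal, illustrative Python sketch that prints the address remote websites see for your machine. It is an assumption for demonstration, not part of any specific VPN product: it queries the public ipify echo service, and running it once on a normal connection and once with a VPN tunnel active should show two different addresses.

```python
# Minimal sketch: check the public IP address that remote servers see.
# Assumption: the public echo service https://api.ipify.org is reachable;
# any similar IP-echo endpoint would work the same way.
import urllib.request

def public_ip() -> str:
    """Return the IP address this machine appears to have on the internet."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Run once without the VPN and once with it connected:
    # the printed address should change to the VPN server's exit IP.
    print("Public IP as seen by websites:", public_ip())
```

A website logging your visit records whichever address this prints, which is why connecting through a US server makes you appear to browse from the US.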
All online activity leaves a digital footprint that is tracked for data collection and surveillance, increasingly jeopardizing user privacy. VPNs are thus a powerful tool for enhancing the privacy and security of users, businesses, governments and critical sectors. They also help protect users on public Wi-Fi networks (for example, at airports and hotels), journalists, activists and whistleblowers, remote workers and businesses, citizens in high-surveillance states, and researchers by affording them a degree of anonymity.
What VPNs Do and Don’t
- What VPNs Can Do:
- Mask your IP address to enhance privacy.
- Encrypt data to protect against hackers, especially on public Wi-Fi.
- Bypass geo-restrictions (e.g., access streaming content blocked in India).
- What VPNs Cannot Do:
- Make you completely anonymous and protect your identity (websites can still track you via cookies, browser fingerprinting, etc.).
- Protect against malware or phishing.
- Prevent law enforcement from tracing you if they have access to VPN logs.
- Guarantee that no logs are kept (free VPNs, in particular, often share logs with third parties).
VPNs in the Context of India’s Privacy Policy Landscape
In April 2022, CERT-In (the Indian Computer Emergency Response Team) released Directions under Section 70B(6) of the Information Technology (“IT”) Act, 2000, mandating VPN service providers to store customer data such as “validated names of subscribers/customers hiring the services, period of hire including dates, IPs allotted to / being used by the members, email address and IP address and time stamp used at the time of registration/onboarding, the purpose for hiring services, validated address and contact numbers, and the ownership pattern of the subscribers/customers hiring services” collected as part of their KYC (Know Your Customer) requirements, for a period of five years, even after the subscription has been cancelled. While this directive was issued to aid cybersecurity investigations, it undermines the core purpose of VPNs: anonymity and privacy. It also gave operators very little time to carry out compliance measures.
Following this, operators such as NordVPN, ExpressVPN, ProtonVPN, and others pulled their physical servers out of India, and now use virtual servers hosted abroad (e.g., in Singapore) with Indian IP addresses. While the CERT-In Directions have extra-territorial applicability, virtual servers are able to bypass them since they physically operate from a foreign jurisdiction. This means that these operators are effectively not liable to provide user information to Indian investigative agencies, defeating the whole purpose of the directive. To counter this, the Indian government could potentially block non-compliant VPN services in the future. Further, there are concerns about overreach, since the Directions are unclear about how long CERT-In can retain the data it acquires from VPN operators, how that data will be used and safeguarded, and the procedure for holding VPN operators accountable for compliance.
Conclusion: The Need for a Privacy-Conscious Framework
The CERT-In Directions reflect a governance model which, by prioritizing security over privacy, compromises on safeguards like independent oversight or judicial review to balance the two. The policy design creates a lose-lose situation: virtual VPN services are still available, while the government loses oversight. If anything, this can make it harder for the government to track suspicious activity. It also violates the principle of proportionality established in the landmark privacy judgment, Puttaswamy v. Union of India (II), by giving government agencies the power to collect excessive VPN data on any user. These issues underscore the need for a national-level, privacy-conscious cybersecurity framework that informs other policies on data protection and cybercrime investigations. In the meantime, VPN users are advised to choose reputable providers, ensure strong encryption, and follow best practices to maintain online privacy and security.
References
- https://www.kaspersky.com/resource-center/definitions/what-is-a-vpn
- https://internetfreedom.in/top-secret-one-year-on-cert-in-refuses-to-reveal-information-about-compliance-notices-issued-under-its-2022-directions-on-cybersecurity/
- https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/

Introduction
The sexual harassment of minors in cyberspace has become a matter of grave concern that needs to be addressed. Sextortion is the practice of extorting individuals into sharing explicit and sexual content under the threat of exposure. This grim activity has evolved into a pervasive issue on several social media platforms, particularly Instagram. To combat this illicit act, major platforms such as Meta have deployed a comprehensive ‘nudity protection’ feature, leveraging AI (Artificial Intelligence) algorithms to detect and address the rapid distribution of unsolicited explicit content.
The Meta Initiative presented a multifaceted approach to improve user safety, especially for young people online, who are more vulnerable to predatory behavior.
The Salient Feature
Instagram’s use of advanced AI algorithms to automatically identify and blur out explicit images shared within direct messages is the driving force behind this initiative. This new safety measure serves two essential purposes.
- Preventing dissemination of sensitive content - When enabled, the feature obscures the visibility of sensitive personal pictures and also limits their further dissemination.
- Empowering minors to exercise more control over their social media - This cutting-edge feature can be disabled at the user's discretion, allowing users, including minors, to regulate their exposure to age-inappropriate and harmful material online. The nudity protection feature is enabled by default for all users under 18 on Instagram globally, guaranteeing a baseline standard of security for the most vulnerable demographic of users. Adults are able to exercise more autonomy over the feature, receiving periodic prompts for its voluntary activation. When the feature detects an explicit image, it automatically blurs it with a cautionary overlay, enabling recipients to make an informed decision about whether or not they wish to view the flagged content (a simplified sketch of this detect-and-blur flow follows below). The decision to introduce this feature is an interesting and sensitive approach to balancing individual agency with institutionalising online protection.
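To illustrate the general detect-and-blur pattern described above, here is a hedged conceptual sketch in Python. It is not Meta's actual implementation: the nudity_score() classifier is a hypothetical placeholder for whatever on-device model Instagram uses, and Pillow's Gaussian blur stands in for the cautionary overlay.

```python
# Conceptual sketch of a detect-and-blur flow for incoming DM images.
# nudity_score() is a hypothetical placeholder; Instagram's real on-device
# model and threshold are not public.
from PIL import Image, ImageFilter

NUDITY_THRESHOLD = 0.8  # illustrative cut-off, not Meta's value

def nudity_score(image: Image.Image) -> float:
    """Placeholder for an ML classifier returning P(image contains nudity)."""
    raise NotImplementedError("Plug in any NSFW-detection model here.")

def prepare_for_display(path: str) -> Image.Image:
    """Blur an incoming image if it is flagged, otherwise return it unchanged."""
    image = Image.open(path)
    if nudity_score(image) >= NUDITY_THRESHOLD:
        # Heavy blur plays the role of the cautionary overlay; the recipient
        # can still choose to reveal the original behind it.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

The key design point this sketch captures is that the check runs before the image is rendered, so the recipient sees a blurred preview first and opts in to viewing the original.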
Comprehensive Safety Measures Beyond Nudity Detection
The cutting-edge nudity protection feature is a crucial element of Instagram’s new strategy and is supported by a comprehensive set of measures devised to tackle sextortion and ensure a safe cyber environment for its users:
Awareness Drives and Safety Tips - Users sending or receiving sexually explicit content are directed to a screen with curated safety tips to ensure complete user awareness and encourage due diligence. These safety tips are critical in raising awareness about the risks of sharing sensitive content and inculcating responsible online behaviour.
New Technology to Identify Sextortionists - Meta's platforms are constantly evolving, and new, more sophisticated algorithms are being introduced to better detect malicious accounts engaged in possible sextortion. These proactive measures check for predatory behaviour so that such threats can be neutralised before they escalate and cause grave harm.
Superior Reporting and Support Mechanisms - Instagram is implementing new technology to bolster its reporting mechanisms so that users reporting concerns pertaining to nudity, sexual exploitation and threats are instantaneously directed to local child safety authorities for necessary support and assistance.
This sophisticated new approach highlights Instagram's commitment to creating a safer environment for users by addressing various aspects of this grim issue through the three-pronged strategy of detection, prevention and support.
User’s Safety and Accountability
The implementation of the nudity protection feature and various associated safety measures is Meta’s way of tackling the growing concern about user safety in a more proactive manner, especially when it concerns minors. Instagram’s experience with this feature will likely be the sandbox in which Meta tests its new user protection strategy and refines it before extending it to other platforms like Facebook and WhatsApp.
Critical Reception and Future Outlook
The nudity protection feature has been met with positive feedback from experts and online safety advocates, commending Instagram for taking a proactive stance against sextortion and exploitation. However, critics also emphasise the need for continued innovation, transparency, and accountability to effectively address evolving threats and ensure comprehensive protection for all users.
Conclusion
As digital spaces continue to evolve, Meta Platforms must demonstrate an ongoing commitment to adapting its safety measures and collaborating with relevant stakeholders to stay ahead of emerging challenges. Ongoing investment in advanced technology, user education, and robust support systems will be crucial in maintaining a secure and responsible online environment. Ultimately, Instagram's nudity protection feature represents a significant step forward in the fight against online sexual exploitation and abuse. By leveraging cutting-edge technology, fostering user awareness, and implementing comprehensive safety protocols, Meta Platforms is setting a positive example for other social media platforms to prioritise user safety and combat predatory behaviour in digital spaces.
References
- https://www.nbcnews.com/tech/tech-news/instagram-testing-blurring-nudity-messages-protect-teens-sextortion-rcna147402
- https://techcrunch.com/2024/04/11/meta-will-auto-blur-nudity-in-instagram-dms-in-latest-teen-safety-step/
- https://hypebeast.com/2024/4/instagram-dm-nudity-blurring-feature-teen-safety-info

Introduction
AI has transformed the way we look at advanced technologies. As the use of AI evolves, it also raises concerns about AI-based deepfake scams, in which scammers use AI tools to create deepfake videos, images and audio to deceive people and commit crimes. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of one of the victim's known friends using AI-based deepfake technology. There is a need for awareness and vigilance to safeguard ourselves from such incidents.
Unveiling the Kerala Deepfake Video Call Scam
The man in Kerala received a WhatsApp video call from a person claiming to be a former colleague of his from Andhra Pradesh; in reality, the caller was a scammer. He asked the Kerala man for help of Rs 40,000 via Google Pay. To gain the victim's trust, the scammer even mentioned some friends they had in common, and said that he was at the Dubai airport and urgently needed the money for his sister's medical emergency.
AI is capable of analysing and processing data such as facial images, videos and audio to create realistic deepfakes that closely resemble the real thing. In the Kerala deepfake video call scam, the scammer made a video call featuring a facial appearance and voice convincingly similar to those of the colleague he was impersonating. Believing that he was genuinely communicating with his colleague, the Kerala man transferred the money without hesitation. He then called his former colleague on the number saved in his contact list, and the colleague said that he had not made any such call. The Kerala man realised that he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish between authentic and fabricated audio, video and images. Deepfake technology is capable of creating entirely fictional photos and videos from scratch. In fact, audio can be deepfaked too, to create “voice clones” of anyone.
However, there are some red flags which can indicate the authenticity of the content:
- Video quality: Deepfake videos often have compromised or poor video quality and unusual blurring, which can call their genuineness into question.
- Looping videos: Deepfake videos may loop, freeze unusually, or repeat footage, indicating that the content might be fabricated.
- Verify separately: Whenever you receive a request such as an appeal for financial help, verify the situation by directly contacting the person through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, leaving the victim no time to think and pressuring them into a quick decision. Be vigilant and cautious when a sudden emergency demands financial support from you on an urgent basis.
- Report suspicious activity: If you encounter such activity on your social media accounts or through such calls, report it to the platform or to the relevant authority.
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combatting AI-based cybercrime. The Kerala man's case of falling victim to an AI-based deepfake video call and losing Rs 40,000 is an alarming reminder of the need to remain extra vigilant and cautious in the digital age: the caller appeared to be his former colleague but was in fact a scammer exploiting AI-based deepfake technology to trick him. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from falling victim to AI-based cybercrime and stay protected from malicious scammers who exploit these technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.