#FactCheck - Viral Image of Broken Mahatma Gandhi Statue Is Not from Bangladesh
An image showing a damaged statue of Mahatma Gandhi, broken into two pieces, is being widely shared on social media. The image shows Gandhi’s statue with its head separated from the body, prompting strong reactions online.
Social media users are claiming that the incident occurred in Bangladesh, alleging that Mahatma Gandhi’s statue was deliberately vandalised there. The image is being described as a recent incident and is being circulated across platforms with provocative and inflammatory captions.
Cyber Peace Foundation’s research and verification found that the claim being shared online is misleading. Our research revealed that the viral image is not from Bangladesh; it is actually from Chakulia in the Uttar Dinajpur district of West Bengal, India.
Claim:
Social media users claim that Mahatma Gandhi’s statue was vandalised in Bangladesh, and that the viral image shows a recent incident from the country. One Facebook user shared the video on 19 January 2026, making derogatory remarks and falsely linking the incident to Bangladesh. The post has since been widely shared on social media platforms. (Archived links and screenshots are available.)

Fact Check:
Our research revealed that the viral image is not from Bangladesh; it is actually from Chakulia in the Uttar Dinajpur district of West Bengal, India. To verify the claim, we conducted a reverse image search using Google Lens on key frames from the viral video. This led us to a report published by ABP Live Bangla on 16 January 2026, which featured the same visuals.

According to ABP Live Bangla, the statue of Mahatma Gandhi was damaged during a protest in Chakulia. The statue’s head was found separated from the body. While a portion of the broken statue remained at the site on Thursday night, it was reported missing by Friday morning. The report further stated that extensive damage was observed at BDO Office No. 2 in Golpokhar. Gandhi’s statue, located at the entrance of the administrative building, was found broken, and ashes were discovered near the premises. Government staff were seen clearing scattered debris from the site.
The incident reportedly occurred during a SIR (Special Intensive Revision) hearing at the BDO office, which was disrupted by the vandalism. In connection with the violence and damage to government property, 21 people have been arrested so far. In the next stage of verification, we found the same footage in a 16 January 2026 report by the local Bengali news channel K TV, which also showed clear visuals of the damaged Mahatma Gandhi statue.

Conclusion:
The viral image of Mahatma Gandhi’s broken statue does not depict an incident from Bangladesh. The image is from Chakulia in West Bengal’s Uttar Dinajpur district, where the statue was damaged during a protest.
Related Blogs

The rapid innovation of technology and its resultant proliferation in India has integrated businesses that market technology-based products with commerce. Consumer habits have shifted from traditional to technology-based products, with many consumers opting for smart devices, online transactions and online services. This migration has increased the potential for data breaches, product defects, misleading advertisements and unfair trade practices.
The need to regulate the technology-based commercial industry arises against the backdrop of the various threats that technologies pose, particularly to data. Many devices track consumer behaviour without the consumer's authorisation. Additionally, products are often defective or complex to use, and the configuration process may prove lengthy, with only a vague warranty.
It is noted that consumers also face difficulties in the technology service sector, even while attempting to purchase a product. These include vendor lock-in (whereby a consumer finds it difficult to migrate from one vendor to another), dark patterns (deceptive strategies and design practices that mislead users and violate consumer rights), and other ethical concerns.
Against this backdrop, consumer laws are now playing catch-up to adequately cater to the new consumer rights that come with technology. Consumer laws must evolve to become complementary with other legislation that governs and safeguards individual rights. This includes emphasising compliance with data privacy regulations, creating rules for ancillary activities such as advertising standards, and setting guidelines for both products and product sellers/manufacturers.
The Legal Framework in India
Currently, consumer laws in India, while not tech-targeted, are somewhat adequate. The Consumer Protection Act 2019 (“Act”) protects the rights of consumers in India. It places liability on manufacturers, sellers and service providers for any harm caused to a consumer by faulty or defective products. As a result, manufacturers and sellers of Internet- and technology-based products are brought under the ambit of the Act. The Consumer Protection Act 2019 may also be viewed in light of the Digital Personal Data Protection Act 2023 (DPDP Act), which mandates the security of an individual's digital personal data. Provisions envisioned by the DPDP Act, such as those pertaining to mandatory consent, purpose limitation, data minimisation, mandatory security measures by organisations, data localisation, accountability and compliance, can be applied to information generated by and for consumers.
Multiple regulatory authorities and departments have also tasked themselves with issuing guidelines that imbibe the principle of caveat venditor. To this effect, the Networks & Technologies (NT) wing of the Department of Telecommunications (DoT), on 2 March 2023, issued the Advisory Guidelines to M2M/IoT stakeholders for securing consumer IoT (“Guidelines”), aimed at bringing M2M/IoT (Machine-to-Machine/Internet of Things) devices into compliance with safety and security standards in order to protect users and the networks that connect these devices. The comprehensive Guidelines suggest the removal of universal default passwords and usernames, such as “admin”, that come preprogrammed with new devices, and mandate that password resets be performed only after user authentication. Web services associated with the product are required to use Multi-Factor Authentication and must not expose any unnecessary user information prior to authentication. Further, M2M/IoT stakeholders are required to provide a public point of contact for reporting vulnerabilities and security issues. Such stakeholders must also ensure that software components can be updated in a secure and timely manner. An end-of-life policy is to be published for end-point devices, stating the assured duration for which a device will receive software updates.
The involvement of regulatory authorities depends on the nature of the technology product; a single product or technical consumer threat may attract multiple sets of guidelines. The Advertising Standards Council of India (ASCI) noted that cryptocurrency and related products were the category most prone to violative, fraudulent advertising. In an attempt to protect consumers, it introduced guidelines regulating the advertising and promotion of virtual digital asset (VDA) exchanges, trading platforms and associated services as a necessary interim measure in February 2022. These mandate that all VDA ads carry the stipulated disclaimer, “Crypto products and NFTs are unregulated and can be highly risky. There may be no regulatory recourse for any loss from such transactions.”, in a prominent and unmissable manner.
Further, authorities such as the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI) also issue cautionary notes to consumers and investors against crypto trading and ancillary activities. Even bodies like the Bureau of Indian Standards (BIS) act as complementary authorities, emphasising product quality, including for electronic products, by mandating compliance with prescribed standards.
It is worth noting that ASCI has proactively responded to new-age, technology-induced threats to consumers by attempting to tackle “dark patterns” through its existing Code on Misleading Ads (“Code”), which applies across media, including online advertising on websites and social media handles. ASCI noted that 29% of advertisements examined were disguised ads by influencers, which is itself a form of dark pattern. Although the existing Code addressed some issues, a need was felt to encompass other dark patterns.
Perhaps in response, the Central Consumer Protection Authority in November 2023 released guidelines addressing “dark patterns” under the Consumer Protection Act 2019 (“Guidelines”). The Guidelines define dark patterns as deceptive strategies and design practices that mislead users and violate consumer rights. These include creating false urgency, scarcity or popularity of a product; basket sneaking (whereby additional services are automatically added on purchase of a product or service); and confirm shaming (statements such as “I will stay unsecured” shown when opting out of travel insurance while booking transportation tickets). The Guidelines also cater to several data privacy considerations: for example, they bar the use of difficult language and complex privacy settings to nudge consumers into divulging more personal information while making purchases, thereby pushing technology product sellers and e-commerce platforms/vendors towards compliance with data privacy laws in India. Notably, the Guidelines apply to all platforms that systematically offer goods and services in India, as well as to advertisers and sellers.
Conclusion
Consumer laws for technology-based products in India play a pivotal role in safeguarding the rights and interests of individuals in an era marked by rapid technological advancement. These legislative frameworks, spanning data protection, electronic transactions and product liability, establish a regulatory equilibrium that addresses the nuanced challenges of the digital age. As the digital landscape evolves, regulatory frameworks must adapt so that consumers remain protected from the risks associated with emerging technologies. Striking a balance between innovation and consumer safety requires ongoing collaboration between policymakers, businesses and consumers. By staying attuned to the evolving needs of the digital age, Indian consumer laws can provide a robust foundation for secure and equitable relationships between consumers and technology-based products.
References:
- https://dot.gov.in/circulars/advisory-guidelines-m2miot-stakeholders-securing-consumer-iot
- https://www.mondaq.com/india/advertising-marketing--branding/1169236/asci-releases-guidelines-to-govern-ads-for-cryptocurrency
- https://www.ascionline.in/the-asci-code/#:~:text=Chapter%20I%20(4)%20of%20the,nor%20deceived%20by%20means%20of
- https://www.ascionline.in/wp-content/uploads/2022/11/dark-patterns.pdf

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work; their probability can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
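The "statistical next-word prediction" described above can be illustrated with a deliberately tiny toy: a bigram model that simply returns the continuation it saw most often in its training text, with no notion of whether that continuation is true. (This is nothing like a real LLM in scale or architecture; the corpus and function names below are invented for illustration.)

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: an n-gram model picks the statistically
# most likely next word, with no notion of factual truth.
corpus = ("the capital of france is paris . "
          "the capital of mars is unknown .").split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often after `word`; None if unseen.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("capital"))  # → 'of' (a pure frequency judgement)
```

The model will confidently emit whatever was frequent in its data, whether the resulting sentence is factual or not, which is the root of the hallucination problem at scale.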
Reports on TechCrunch mention a study finding that when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions; this was not limited to that particular Large Language Model, but extended to similar ones like DeepSeek. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises and other sensitive moments.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. Developers are, however, aware of the problem and are actively charting out ways to reduce the probability of this error. Some of these include:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data.
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
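The RAG pattern described in the first point can be sketched as a small pipeline: retrieve the most relevant passage from a trusted store, then condition the answer on it. The keyword-overlap retriever and canned "generator" below are toy stand-ins for embedding search and an actual LLM; all document text and names are invented for illustration.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): ground the answer
# in retrieved text rather than relying on model memory alone.
documents = {
    "doc1": "The Eiffel Tower is located in Paris, France.",
    "doc2": "Photosynthesis converts sunlight into chemical energy.",
}

def retrieve(query, docs, top_k=1):
    # Naive keyword-overlap scoring; real systems use vector embeddings.
    query_words = set(query.lower().split())
    def score(text):
        return len(query_words & set(text.lower().split()))
    ranked = sorted(docs.values(), key=score, reverse=True)
    return ranked[:top_k]

def generate_answer(query):
    # Condition the response on retrieved text to reduce hallucination risk.
    context = retrieve(query, documents)[0]
    return f"Based on the retrieved source: {context}"

print(generate_answer("Where is the Eiffel Tower located?"))
```

Because the response is anchored to a concrete retrieved passage, a user (or a downstream check) can verify the claim against the cited source, which is exactly the transparency benefit discussed below.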
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Executive Summary:
One of the most complex threats to emerge in network security is the packet rate attack, which challenges traditional approaches to DDoS defence. In 2024, OVHcloud, the French company that is among Europe's largest cloud providers, was hit by a record, unprecedented DDoS attack reaching 840 million packets per second. Attacks above 1 Tbps have been observed more regularly since 2023, becoming a near-daily occurrence in 2024. The largest attack, on May 25, 2024, reached 2.5 Tbps, pointing towards even larger and more complex attacks of up to 5 Tbps. Many of these attacks involve critical equipment, such as MikroTik routers, within core network environments; detecting and containing these threats is proving a real test for cloud security measures.
Modus Operandi of a Packet Rate Attack:
A packet rate attack, also known as a packet flood or network flood attack and classed as a volumetric DDoS attack, is a cyberattack in which an attacker sends a large volume of packets at a network device in a short period of time. As opposed to deliberately bandwidth-heavy attacks, these raids target the computation time linked to packet processing.
Key technical characteristics include:
- Packet Size: Usually compact, in many cases less than 100 bytes
- Protocol: Predominantly UDP, though TCP SYN and other protocol floods are also involved
- Rate: Exceeding 100 million packets per second (Mpps), with recent attacks exceeding 840 Mpps
- Source IP Diversity: Usually originating from a small number of sources, each issuing a large number of requests, which suggests the use of amplification techniques
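A back-of-envelope calculation ties the figures above together: because the packets are small, an enormous packet rate does not require a proportionally enormous bandwidth, yet every packet still costs processing time. The helper below is a simple unit conversion; Layer 2 framing overhead is ignored as a simplifying assumption.

```python
# Relate packet rate to wire bandwidth for small-packet floods.
def pps_to_gbps(packets_per_second, packet_size_bytes):
    # bits per second = packets/s * bytes/packet * 8 bits/byte
    return packets_per_second * packet_size_bytes * 8 / 1e9

# 840 million 100-byte packets per second is only ~672 Gbps on the wire,
# far below the multi-Tbps bandwidth records, yet every one of those
# 840 million packets per second still demands a lookup to process.
print(pps_to_gbps(840e6, 100))  # → 672.0
```

This is why packet rate (pps) and bandwidth (bps) are tracked as separate attack metrics: a pps-record attack can fly under bandwidth-based alarms.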
Attack on the Network Stack: To understand the impact, let's examine how these attacks affect different layers of the network stack:
1. Layer 3 (Network Layer):
- Each packet requires a routing table lookup, so routers and L3 switches suffer high CPU usage.
- These lookup mechanisms can be saturated, allowing the attacker to degrade network communication.
2. Layer 4 (Transport Layer):
- Stateful devices (e.g. firewalls, load balancers) struggle as their connection tables fill up
- TCP SYN floods can exhaust all connection slots, so no genuine incoming connection can be made.
3. Layer 7 (Application Layer):
- Web servers and application firewalls may be forced to process a large number of requests, slowing their responses
- Session management systems can become saturated, degrading the quality of service perceived by end users.
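The Layer 4 pressure described above can be shown with a toy model: each half-open TCP connection occupies a slot in a stateful device's table until it times out, so a SYN flood fills the table and genuine clients get refused too. The class name, table capacity and addresses below are illustrative assumptions, not any vendor's implementation.

```python
# Toy model of connection-table exhaustion under a TCP SYN flood.
class ConnectionTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.half_open = set()  # half-open connections awaiting the final ACK

    def on_syn(self, src):
        # Accept the SYN only while a slot is free.
        if len(self.half_open) >= self.capacity:
            return False
        self.half_open.add(src)
        return True

table = ConnectionTable(capacity=1000)
# An attacker floods SYNs from 1,500 spoofed source addresses...
dropped = sum(not table.on_syn(f"10.0.{i // 256}.{i % 256}") for i in range(1500))
print(dropped)  # → 500 (SYNs dropped once the table fills)
# ...after which a legitimate client can no longer connect either:
print(table.on_syn("192.0.2.1"))  # → False
```

Real devices add timeouts and defences such as SYN cookies, but the core failure mode, shared finite state exhausted by cheap packets, is the one sketched here.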
Technical Analysis of Attack Vectors
Recent studies have identified several key vectors exploited in high-volume packet rate attacks:
1. MikroTik RouterOS Exploitation:
- Vulnerability: CVE-2023-4967
- Impact: Allows remote attackers to generate massive packet floods
- Technical detail: Exploits a flaw in the FastTrack implementation
2. DNS Amplification:
- Amplification factor: Up to 54x
- Technique: Exploits open DNS resolvers to generate large responses to small queries
- Challenge: Difficult to distinguish from legitimate DNS traffic
3. NTP Reflection:
- Command: monlist
- Amplification factor: Up to 556.9x
- Mitigation: Requires NTP server updates and network-level filtering
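The amplification factors quoted above are simply the ratio of response size to request size: how many bytes the victim receives for every byte the attacker sends with a spoofed source address. The concrete packet sizes below are hypothetical choices picked to reproduce the ~54x DNS figure, not measured values.

```python
# Amplification factor = bytes reflected at the victim per byte sent
# by the attacker (both measured at the reflector).
def amplification_factor(response_bytes, request_bytes):
    return response_bytes / request_bytes

# Hypothetical example: a ~60-byte DNS query eliciting a ~3240-byte
# response yields the ~54x factor cited for DNS amplification.
print(amplification_factor(3240, 60))  # → 54.0
```

The same arithmetic explains why NTP `monlist` reflection is so dangerous: a response hundreds of times larger than the query multiplies the attacker's bandwidth accordingly.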
Mitigation Strategies: A Technical Perspective
Combating packet rate attacks requires a multi-layered approach:
1. Hardware-based Mitigation:
- Implementation: FPGA-based packet processing
- Advantage: Can handle millions of packets per second with minimal latency
- Challenge: High cost and specialized programming requirements
2. Anycast Network Distribution:
- Technique: Distributing traffic across multiple global nodes
- Benefit: Dilutes attack traffic, preventing single-point failures
- Consideration: Requires careful BGP routing configuration
3. Stateless Packet Filtering:
- Method: Applying filtering rules without maintaining connection state
- Advantage: Lower computational overhead compared to stateful inspection
- Trade-off: Less granular control over traffic
4. Machine Learning-based Detection:
- Approach: Using ML models to identify attack patterns in real-time
- Key metrics: Packet size distribution, inter-arrival times, protocol anomalies
- Challenge: Requires continuous model training to adapt to new attack patterns
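The stateless filtering idea in item 3 can be sketched as a fixed-window, per-source packet-rate threshold: the filter keeps only lightweight counters, never per-connection state, so its cost per packet stays low. The class name, thresholds and addresses below are illustrative assumptions, not a production design (real deployments push this into hardware or kernel fast paths).

```python
# Sketch of a stateless-style filter: per-source packet-rate threshold
# over a fixed time window, with no connection tracking.
class RateFilter:
    def __init__(self, max_pps, window=1.0):
        self.max_pps = max_pps      # allowed packets per source per window
        self.window = window        # window length in seconds
        self.counts = {}            # src_ip -> packets seen this window
        self.window_start = 0.0

    def allow(self, src_ip, now):
        # Roll over to a fresh window once the current one has elapsed.
        if now - self.window_start >= self.window:
            self.counts.clear()
            self.window_start = now
        self.counts[src_ip] = self.counts.get(src_ip, 0) + 1
        return self.counts[src_ip] <= self.max_pps

f = RateFilter(max_pps=3)
verdicts = [f.allow("203.0.113.7", now=0.5) for _ in range(5)]
print(verdicts)  # → [True, True, True, False, False]
```

The trade-off noted above is visible here: the filter cannot distinguish a bursty legitimate client from an attacker at the same source address, which is where the ML-based detection of item 4 would complement it.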
Performance Metrics and Benchmarking
When evaluating DDoS mitigation solutions for packet rate attacks, consider these key performance indicators:
- Flows per second (fps) or packets per second (pps) capacity
- Latency introduced by the mitigation system itself
- False positive rate in attack detection
- Time from the onset of an attack to the start of mitigation
Way Forward
Packet rate attacks are constantly evolving, and credible defenses cannot stand still. The next step entails extending mitigation to edge computing and 5G networks, distributing it closer to attack origins. Further, AI-based proactive analysis tools that predict such threats will help strengthen the protection of critical infrastructure in advance.
To stay one step ahead, it is necessary to constantly conduct research, advance new technologies, and collaborate with other cybersecurity professionals. There is a standing need to develop robust defenses that safeguard these networks.
References:
- https://blog.ovhcloud.com/the-rise-of-packet-rate-attacks-when-core-routers-turn-evil/
- https://cybersecuritynews.com/record-breaking-ddos-attack-840-mpps/
- https://www.cloudflare.com/learning/ddos/famous-ddos-attacks/