#FactCheck - Viral Video Falsely Claimed as Evidence of Attacks in Bangladesh
Executive Summary:
A video of a child covered in ash is circulating as alleged evidence of attacks against Hindu minorities in Bangladesh. However, our investigation revealed that the video is actually from Gaza, Palestine, and was filmed in the aftermath of an Israeli airstrike in July 2024. The claim linking the video to Bangladesh is false and misleading.

Claims:
A viral video claims to show a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video, which led us to an X post by the Quds News Network. The post identified the video as footage from Gaza, Palestine, specifically capturing the aftermath of an Israeli airstrike on the Nuseirat refugee camp in July 2024.
The caption of the post reads, “Journalist Hani Mahmoud reports on the deadly Israeli attack yesterday which targeted a UN school in Nuseirat, killing at least 17 people who were sheltering inside and injuring many more.”

To verify further, we examined the footage, which carries the watermark of Al Jazeera. We also found the same video posted on Instagram on 14 July 2024, where it was confirmed that the child in the video had survived the Israeli airstrike on a school shelter in Gaza.

Additionally, we found the same video uploaded to CBS News' YouTube channel, where it was clearly captioned as "Video captures aftermath of Israeli airstrike in Gaza", further confirming its true origin.

We found no credible reports or evidence linking this video to any incident in Bangladesh. This clearly shows that the viral video was falsely attributed to Bangladesh.
Conclusion:
The video circulating on social media, which shows a child covered in ash as evidence of attacks against Hindu minorities in Bangladesh, is false and misleading. The investigation shows that the video originated in Gaza, Palestine, and documents the aftermath of an Israeli airstrike in July 2024.
- Claims: A video shows a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.
- Claimed by: Facebook
- Fact Check: False & Misleading

Introduction
Have you ever wondered how the internet works? Yes, there are screens and wires, but what’s going on beneath the surface? Every time you open a website, send an email, chat on messaging apps, or stream movies, you’re relying on something you probably don’t think about: the TCP/IP protocol suite. Without it, the internet as we know it wouldn’t exist. Let’s take a look at why this unassuming set of rules allows us to connect to anyone anywhere in the world.
The Problem: Networks That Couldn't Talk to Each Other
The internet is widely called a network of networks. A network is a group of devices that are connected and can share data with each other.
Researchers and governments began building early computer networks in the 1960s and 70s. But as the Cold War intensified, the U.S. military felt the need to establish a robust data-sharing infrastructure through interconnected networks that could withstand attacks. At the time, each network had different standards and protocols, which meant getting networks to communicate wasn’t easy or efficient. One network would have to be subsumed into another. This would lead to major problems in the reliability of data relay, flexibility of including more nodes, scalability of the interconnected network, and innovation.
The Breakthrough: Open Architecture Networking
This changed in the 1970s, when Bob Kahn proposed the concept of open architecture networking. It was a simple but revolutionary idea. He envisioned a system where all networks could talk to each other as equals. In this conceptualisation, all networks, even though unique in design and interface, could connect as peers to facilitate end-to-end communication. End-to-end communication helps deliver data between the source and destination without relying on intermediate nodes to control or modify it. This helps to make data relay more reliable and less prone to errors.
Along with Vint Cerf, he developed a network protocol, the TCP/IP suite, that would go on to enable different networks across satellite, wired, and non-wired domains to communicate with one another.
What Is TCP/IP?
TCP/IP stands for Transmission Control Protocol / Internet Protocol. It’s a set of communication rules that allow computers and devices to exchange information across different networks.
It’s powerful because:
- Layered and open architecture: Each function (like data delivery or routing) is handled by a specific layer. This modular design makes it easy to build new technologies like the World Wide Web or streaming services on top of it.
- Decentralisation: There's no single point of control. Any device can connect to another across the internet, making it scalable and resilient.
- Standardisation: TCP/IP works across all kinds of hardware and operating systems, making it truly universal.
The Core Components
- TCP (Transmission Control Protocol): Ensures that data is delivered accurately and in order. If any piece is lost or duplicated, TCP handles it.
- IP (Internet Protocol): Handles addressing and routing. It decides where each packet of data should go and how it gets there.
- UDP (User Datagram Protocol): A lightweight alternative to TCP, used when speed matters more than guaranteed delivery, such as for video calls or online gaming.
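The contrast between TCP and UDP above can be seen directly in code. Below is a minimal sketch using Python's standard socket module; the loopback setup, message text, and OS-assigned ports are purely illustrative. TCP requires a connection before any data flows, while a UDP datagram is simply fired at an address with no handshake.

```python
# A minimal sketch contrasting TCP and UDP with Python's standard socket
# module. Runs entirely over the loopback interface; the messages and the
# OS-assigned ports are illustrative, not a real protocol implementation.
import socket
import threading

HOST = "127.0.0.1"

def tcp_echo_once(server):
    # Accept one connection and echo whatever arrives back to the sender.
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# TCP: connection-oriented; delivery is reliable and in order.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, 0))                  # port 0: let the OS pick a free port
tcp_srv.listen(1)
threading.Thread(target=tcp_echo_once, args=(tcp_srv,), daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(tcp_srv.getsockname())     # three-way handshake happens here
    c.sendall(b"hello via TCP")
    tcp_reply = c.recv(1024)
tcp_srv.close()

# UDP: connectionless; each datagram is sent with no handshake and no
# delivery guarantee (loopback just happens to be dependable).
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind((HOST, 0))

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as u:
    u.sendto(b"hello via UDP", udp_srv.getsockname())
    udp_reply, _ = udp_srv.recvfrom(1024)
udp_srv.close()

print(tcp_reply, udp_reply)
```

Over a real network, the UDP datagram could be lost, duplicated, or reordered and neither side would be told; that is the trade-off applications like video calls accept in exchange for lower latency.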
Why It Matters
The TCP/IP protocol suite introduced a set of standardised guidelines that enable networks to communicate, laying the foundation of the Internet. It has made the Internet global, open, reliable, interoperable, scalable, and resilient: features that have made it the backbone of modern communication systems. So the next time you open a browser or send a message, remember: it's TCP/IP quietly making it all possible.
References
- https://www.techtarget.com/searchnetworking/definition/ARPANET
- https://www.internetsociety.org/internet/history-internet/brief-history-internet/
- https://www.geeksforgeeks.org/tcp-ip-model/
- https://www.oreilly.com/library/view/tcpip-network-administration/0596002971/ch01.html
Introduction
In the fast-paced digital age, misinformation spreads faster than actual news. This was seen recently when inaccurate information spread on social media claiming that the Election Commission of India (ECI) had taken down e-voter rolls for some states from its website overnight. The rumour caused public confusion and political debate in states like Maharashtra, Madhya Pradesh, Bihar, Uttar Pradesh, and Haryana. The ECI quickly called the viral information "fake news" and confirmed that the electoral rolls of all States and Union Territories remain accessible at voters.eci.gov.in. The incident shows how misinformation can undermine trust in electoral information and how important it is to verify authenticity before sharing.
The Incident and Allegations
On August 7, 2025, social media posts on platforms like X and WhatsApp claimed that the Election Commission of India had removed e-voter lists from its website. The posts appeared after public allegations about irregularities in certain constituencies. However, the claims about the removal of voter lists were unverified.
The Election Commission’s Response
In a post on X, the Election Commission stated categorically:
“This is a fake news. Anyone can download the Electoral Roll for any of 36 States/UTs through this link: https://voters.eci.gov.in/download-eroll.”
The Commission clarified that no deletion had taken place and that all voter rolls remain intact and accessible to the public. In keeping with the spirit of transparency, the ECI reaffirmed its practice of making electoral information available for public inspection.
Importance of Timely Clarifications
By countering factually incorrect information the moment it was spread on a large scale, the ECI stopped possible harm to public trust. Election officials rely upon being trusted, and any speculation concerning their honesty can prove harmful to democracy. Such prompt action stops false information from becoming a standard in public discourse.
Misinformation in the Electoral Space
- How False Narratives Gain Traction
Election misinformation thrives in charged political environments. Social media, confirmation bias, and heightened emotions during elections help rumours spread. On this occasion, the unfounded report struck a chord with widespread political distrust, so people readily believed and shared it without verifying it.
- Risks to Democratic Integrity
When misinformation impacts election procedures, the consequences can be profound:
- Erosion of Trust: People can lose faith in the neutrality of election administrators quite easily.
- Polarization: Untrue assertions tend to reinforce political divides, rendering constructive communication more difficult.
- The Role of Media Literacy
Combating such mis- and disinformation requires more than official statements. Media literacy training can equip individuals to recognise warning signs in suspect messages. Even basic actions, like checking official sources before sharing, can go a long way in keeping untruths from spreading.
Strategies to Counter Electoral Misinformation
Multi-Stakeholder Action
Effectively countering electoral disinformation requires coordination among election officials, fact-checkers, media, and platforms. Suggested actions include:
- Rapid Response Protocols: Institutions should maintain dedicated monitoring teams for quick rebuttals.
- Confirmed Channels of Communication: Providing official sites and pages for actual electoral news.
- Proactive Transparency: Regular publication of electoral process updates can anticipate rumours.
- Platform Accountability: Social media sites must label or limit the visibility of information found to be false by credentialed fact-checkers.
Conclusion
The recent allegations of e-voter roll deletion underscore the susceptibility of contemporary democracies to mis- and disinformation. Although the ECI's swift and unambiguous rebuttal restored order, the incident emphasises the need for preventive steps to maintain faith in elections. Fact-checking alone may not suffice in an information space that is growing more polarised and algorithm-driven; the long-term solution is to build a resilient democratic culture in which individuals, organisations, and platforms value truth over clickbait. The lesson is clear: in the age of instant news, accurate communication is not a luxury but a necessity for maintaining democratic integrity.
References
- https://www.newsonair.gov.in/election-commission-dismisses-fake-news-on-removal-of-e-voter-rolls/
- https://economictimes.indiatimes.com/news/india/eci-dismisses-claims-of-removing-e-voter-rolls-from-its-website-calls-it-fake-news/articleshow/123190662.cms
- https://www.thehindu.com/news/national/vote-theft-claim-of-congress-factually-incorrect-election-commission/article69921742.ece
- https://www.thehindu.com/opinion/editorial/a-crisis-of-trust-on-the-election-commission-of-india/article69893682.ece

Introduction
As technology develops, scammers keep pace: their methods and plans for deceiving people have evolved with AI, and voice cloning schemes are one such issue that has recently come to light. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. In this blog, we look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one can protect oneself.
What is Deepfake?
A "deepfake" is fake or altered audio, video, or imagery produced with artificial intelligence (AI) that can pass for the real thing. The name combines "deep learning" and "fake". Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never did; widely circulated voice impersonations of the American President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative; the false sense of familiarity and urgency aims to earn the victim's trust and raise the likelihood that they will fall for the hoax. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities and mislead users into providing private information, money, or system access. With it, criminals can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, deepfake voice technology can also be used to create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, falsified audio evidence can be produced to support false claims or accusations. This is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be informed of the risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the "deepfake voice scam", has surfaced: the con artist assumes another person's identity and uses a faked voice to trick the victim into handing over money or private information. How can you protect yourself? Here are some guidelines to help you spot these scams and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- If anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that doesn't seem right can be a sign of deep voice fraud.
- Verify the caller’s identity
- It's crucial to verify the caller's identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign that they are a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deep voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from being a victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as voice frauds and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research is also needed to develop efficient techniques for identifying and controlling the risks related to this technology. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.