#FactCheck - "Viral Video Falsely Claimed as Evidence of Attacks in Bangladesh is False & Misleading”
Executive Summary:
A video of a child covered in ash is circulating as alleged evidence of attacks against Hindu minorities in Bangladesh. However, our investigation revealed that the video is actually from Gaza, Palestine, and was filmed in the aftermath of an Israeli airstrike in July 2024. The claim linking the video to Bangladesh is false and misleading.

Claims:
A viral video claims to show a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.

Fact Check:
Upon receiving the viral posts, we ran a Google Lens search on keyframes of the video, which led us to an X post by Quds News Network. The post identified the video as footage from Gaza, Palestine, specifically capturing the aftermath of an Israeli airstrike on the Nuseirat refugee camp in July 2024.
The caption of the post reads, “Journalist Hani Mahmoud reports on the deadly Israeli attack yesterday which targeted a UN school in Nuseirat, killing at least 17 people who were sheltering inside and injuring many more.”
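
For readers curious about the mechanics, keyframes can be pulled from a clip with a few lines of code before the stills are run through a reverse image search such as Google Lens. The sketch below is a minimal illustration using the OpenCV library, not the exact workflow used in this fact-check; the filenames and sampling interval are arbitrary assumptions.

```python
# Minimal keyframe-extraction sketch (illustrative, not our exact workflow):
# save one frame every couple of seconds so the stills can be uploaded to a
# reverse image search.
import cv2  # pip install opencv-python

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % step == 0:
            name = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Usage: extract_keyframes("viral_clip.mp4") writes JPEG stills to the
# working directory, ready for a reverse image search.
```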

To verify further, we examined the footage and spotted the watermark of the Al Jazeera news network. We found the same video posted on Instagram on 14 July 2024, where we confirmed that the child in the video had survived a massacre caused by the Israeli airstrike on a school shelter in Gaza.

Additionally, we found the same video uploaded to CBS News' YouTube channel, where it was clearly captioned as "Video captures aftermath of Israeli airstrike in Gaza", further confirming its true origin.

We found no credible reports or evidence linking this video to any incident in Bangladesh. The viral video has clearly been falsely attributed to Bangladesh.
Conclusion:
The video circulating on social media, which shows a child covered in ash as evidence of attacks against Hindu minorities, is false and misleading. Our investigation shows that the video originated in Gaza, Palestine, and documents the aftermath of an Israeli airstrike in July 2024.
- Claims: A video shows a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.
- Claimed by: Facebook
- Fact Check: False & Misleading
Related Blogs

India’s online gaming industry has grown at lightning speed, drawing millions of users across age groups. From casual games and e-sports to fantasy leagues and online poker, digital entertainment has become both a social and economic phenomenon. But with this growth came rising concerns about addiction, financial loss, misleading ads, and even criminal misuse of gaming platforms for illegal betting. To address these concerns, the Government of India enacted the Promotion and Regulation of Online Gaming Act, 2025, and released draft Rules in October 2025. While the Act represents a crucial step toward accountability and user protection, it also raises difficult questions about freedom, innovation, and investor confidence.
The Current Legal Framework
The 2025 Act, along with corresponding changes in the Information Technology and GST laws, aims to create a safer and more transparent gaming environment.
1. Ban on real-money games:
Any online game where money is involved, whether as entry fees, bets, or prizes, is now banned, regardless of whether it is based on skill or chance. As a result, previously permitted formats such as fantasy sports, rummy, and poker, once defended as “games of skill”, now fall within the category of banned activities.
2. Promotion of e-sports and social gaming:
Not all gaming is banned. Casual games, e-sports, and social games that don’t involve money are fully allowed. The government is encouraging these as part of India’s growing digital economy.
3. Advertising and financial restrictions: Banks, payment gateways, and advertisers cannot facilitate or promote real-money games. Any platform offering deposits or prize pools can be blocked.
4. Central regulatory authority: The law establishes a national body to classify games, monitor compliance, and address complaints. It has the power to order the blocking of violative content and websites.
Why Regulation Was Needed
The push for regulation came after a surge in online betting scams, debt-related suicides, and disputes about whether certain apps were skill-based or chance-based. State governments had taken conflicting positions, some banning, others licensing such games. Meanwhile, offshore gaming apps operated freely in India’s grey market.
The 2025 Act thus attempts to impose uniformity, protect minors, and bring moral and fiscal discipline to a rapidly expanding digital frontier. Its underlying philosophy resembles that of the Digital Personal Data Protection Act, encouraging responsible use of technology rather than an unregulated free-for-all.
Key Challenges and Gaps
(a) Clarity of Definitions
The Act bans all real-money games, ignoring the difference between skill-based and chance-based games. This could lead to legal challenges under Article 19(1)(g), which protects the right to carry on a trade or business. Games like rummy or fantasy cricket, which require real skill, arguably should not be banned outright.
(b) Weak Consumer and Child Protection
Although age verification and KYC are mandated, compliance at the user-end remains uncertain. India needs a Responsible Gaming Code covering:
- Spending limits and cooling-off periods;
- Self-exclusion options;
- Transparent disclosure of odds; and
- Algorithmic fairness audits.
These measures can help mitigate addiction and prevent exploitation of minors.
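
To make the proposal concrete, here is a minimal sketch of how a platform might encode such limits. The field names, amounts, and logic are hypothetical illustrations, not drawn from the Act or any existing code of practice.

```python
# Hypothetical encoding of responsible-gaming safeguards (illustrative only;
# the limits and field names are assumptions, not statutory requirements).
from dataclasses import dataclass

@dataclass
class ResponsibleGamingLimits:
    daily_deposit_cap_inr: int = 5_000   # hard spending ceiling per day
    cooling_off_hours: int = 24          # mandatory pause after a limit is hit
    self_exclusion_days: int = 90        # user-initiated account freeze
    disclose_odds: bool = True           # odds shown before every paid entry

def can_deposit(spent_today_inr: int, amount_inr: int,
                limits: ResponsibleGamingLimits) -> bool:
    """Reject any deposit that would breach the daily spending cap."""
    return spent_today_inr + amount_inr <= limits.daily_deposit_cap_inr

# Example: a user who has spent Rs 4,800 today can add Rs 200 but not Rs 500.
assert can_deposit(4_800, 200, ResponsibleGamingLimits()) is True
assert can_deposit(4_800, 500, ResponsibleGamingLimits()) is False
```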
(c) Federal Conflicts
“Betting and gambling” fall within the State List under India’s Constitution, yet the 2025 Act seeks national uniformity. States like Tamil Nadu and Karnataka already have independent bans. Without harmonisation, legal disputes between state and central authorities could multiply. A cooperative federal framework allowing states to adopt central norms voluntarily could offer flexibility without fragmentation.
(d) Regulatory Transparency
The gaming regulator wields broad powers, such as deciding which games are allowed and ordering websites blocked. But it is not clear who appoints its members or how its decisions can be challenged. Court oversight, public consultation, and regular reporting would make the regulator fairer and more accountable.
What’s Next for India’s Online Gaming
India’s online gaming scene is at a turning point. Banning all money-based games might reduce risks, but it also slows innovation and limits opportunities. A better approach could be to license skill-based or low-risk games with proper KYC and audits, set up a Responsible Gaming Charter with input from government, industry, and civil society, and create rules for offshore platforms targeting Indian players. Player data should be protected under the Digital Personal Data Protection Act, 2023, and the law should be reviewed every few years to keep up with new tech like the metaverse, NFTs, and AI-powered games.
Conclusion
CyberPeace submitted its detailed feedback to MeitY on 30th October 2025 and hopes the finalised rules, when released, will acknowledge the challenges discussed here. The Promotion and Regulation of Online Gaming Act, 2025, marks an important turning point: it is India’s first serious attempt to bring order to a chaotic digital arena. The goal is to keep players safe, stop crime, and hold platforms accountable. But the tricky part is moving beyond blanket bans. We need rules that let new ideas grow, respect people’s rights, and keep players safe. With a few smart changes and fair enforcement, India could have a gaming industry that is safe, responsible, and ready to compete globally.
References
- https://ssrana.in/articles/indias-online-gaming-bill-2025-regulation-prohibition-and-the-future-of-digital-play/
- https://www.google.com/amp/s/m.economictimes.com/news/economy/policy/new-online-gaming-law-takes-effect-money-games-banned-from-today/amp_articleshow/124255401.cms
- https://www.google.com/amp/s/timesofindia.indiatimes.com/technology/tech-news/government-proposes-to-make-violation-of-online-money-game-rules-non-bailable-draft-rules-ban-/amp_articleshow/124277740.cms
- https://www.egf.org.in/
- https://www.pib.gov.in/PressNoteDetails.aspx?NoteId=155075&ModuleId=3

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and services ranging from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for creating, sharing, or threatening to create or share sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews of victims of deepfake porn last year, the study said 63 percent of participants talked about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, we encountered platforms hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and celebrities are not the only targets; ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? Financial institutions and banks, it seems, are not paying much attention to this legal question. During the investigation, many such websites were found accepting payments through services like VISA, Mastercard, and Stripe.
Those who have failed to register/partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees/stakeholders, which potentially violates the platform's terms of use that ban the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that can involve several countries in the process.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023, aims to protect the digital personal data of users. The Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw should be able to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Over The Top (OTT)
OTT messaging platforms have taken the world by storm; people across the globe rely on them daily, and they have changed the dynamics of accessibility and information speed forever. WhatsApp, acquired by tech giant Meta (then Facebook) in 2014, is one of the leading OTT messaging platforms. All tasks, whether personal or professional, can be performed over WhatsApp, and as of today it has 2.44 billion users worldwide, with 487.5 million users in India alone[1]. With such a vast user base, it is pertinent to have proper safety and security mechanisms on these platforms and active reporting options for users. The growth of OTT platforms has been exponential over the previous decade. As internet penetration increased during the Covid-19 pandemic, the following factors contributed to their growth:
- Urbanisation and Westernisation
- Access to Digital Services
- Media Democratization
- Convenience
- Increased Internet Penetration
These factors have been influential in providing exceptional content and services to consumers, and extensive internet connectivity has allowed people from the remotest parts of the country to use OTT messaging platforms. But platforms must maintain user safety and security and abide by policies and regulations to ensure accountability and transparency.
New Safety Features
Keeping in mind the safety requirements and threats that come with emerging technologies, WhatsApp has been proactive in rolling out new technology- and policy-based security measures. A number of new security features have been added to WhatsApp to make it more difficult for attackers to take control of other people’s accounts. The app’s privacy- and security-focused features go beyond its assertion that online chats and discussions should be as private and secure as in-person interactions. Numerous technological advancements towards that goal have focused on message security, such as adding end-to-end encryption to conversations. The new features are said to further increase user security on the app.
WhatsApp announced that three new security features are now available to all users on Android and iOS devices. The new security features are called Account Protect, Device Verification, and Automatic Security Codes.
- For instance, a new feature named “Account Protect” will kick in when users migrate an account from an old device to a new one. Users may see an alert on their previous handset asking them to confirm that they are truly switching away from it; an unexpected alert may be a sign that someone is trying to access their account without their knowledge.
- Another function, called “Device Verification”, works in the background to make sure that attackers cannot use malware to access other people’s messages. The feature authenticates devices without requiring any action from the user. In particular, WhatsApp says it is concerned about unofficial WhatsApp apps that contain malware made explicitly for this purpose; the company’s new background checks help authenticate user accounts to prevent it.
- The final feature, dubbed “Automatic Security Codes”, builds on an existing service that lets users verify that they are speaking with the person they believe they are. That check is still available manually, but by default an automated version will now run, with a tool that determines whether the connection is secure.
While users can already view the code by visiting a contact’s profile, WhatsApp will develop a concept called “Key Transparency” to make it easier for users to verify the validity of the code. If you use WhatsApp on Android, update to the most recent build, as these features have already been rolled out; on iOS the features have not yet been released, although an update is anticipated soon.
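
For context, here is a simplified sketch of the idea behind security-code verification in end-to-end encrypted messengers generally. This is not WhatsApp’s actual algorithm (WhatsApp derives a 60-digit code from both parties’ identity keys); the key values and digit count below are illustrative assumptions.

```python
# Simplified illustration of "security code" verification in E2E messengers.
# NOT WhatsApp's real algorithm: both parties derive the same code from the
# pair of public identity keys; if the displayed codes match, no attacker has
# swapped a key in between.
import hashlib

def security_code(pub_key_a: bytes, pub_key_b: bytes) -> str:
    material = b"".join(sorted([pub_key_a, pub_key_b]))  # order-independent
    digest = hashlib.sha256(material).digest()
    digits = "".join(str(byte % 10) for byte in digest)[:30]
    # Render as groups of five digits, the familiar "safety number" look.
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# Both devices compute the code locally and display it; users (or, with
# "Automatic Security Codes", the app itself) compare the two displays.
alice_view = security_code(b"alice-public-key", b"bob-public-key")
bob_view = security_code(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # matching codes imply matching keys
```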
Conclusion
Digital safety is a crucial matter for netizens across the world. Platforms like WhatsApp, which enjoy a massive user base, should lead the way in OTT platform cybersecurity by incorporating emerging technologies, user reporting, and transparency into their principles, and by encouraging other platforms to replicate their security mechanisms to keep bad actors at bay. Account Protect, Device Verification, and Automatic Security Codes will go a long way in protecting users’ interests while maintaining convenience, showing us that the future with such platforms is bright and secure.
[1] https://verloop.io/blog/whatsapp-statistics-2023/#:~:text=1.,over%202.44%20billion%20users%20worldwide.