#FactCheck-Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, contributing to growing communal tensions and misinformation. However, a detailed fact-check revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.
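Reverse image search engines typically work by perceptual hashing: visually similar frames produce nearly identical hashes, so a viral clip can be matched against earlier uploads even after re-encoding. The sketch below illustrates the idea with a minimal average-hash implementation; the 4x4 pixel grids are illustrative stand-ins for downscaled video frames, not actual data from this case.

```python
# Minimal sketch of perceptual ("average") hashing, the technique that
# underlies reverse image search: each pixel becomes 1 if it is brighter
# than the frame's mean, and similar frames yield similar bit patterns.

def average_hash(pixels):
    """Hash a grayscale pixel grid: 1 if a pixel exceeds the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 190, 40, 30],
            [210, 180, 50, 20],
            [60, 70, 150, 160],
            [50, 80, 170, 155]]
# A re-encoded copy: uniformly brighter, but structurally identical.
reencoded = [[p + 5 for p in row] for row in original]

# Uniform brightness shifts leave the hash unchanged, so the copies match.
assert hamming(average_hash(original), average_hash(reencoded)) == 0
```

Production systems (e.g. Google Lens or TinEye) use far more robust descriptors, but the matching principle is the same: index hashes of known footage and look up near-neighbours of the suspect clip.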

Conclusion: The viral claim that a mosque was set on fire in India is not true. The video is actually from Indonesia and has been deliberately misrepresented to circulate false information. This incident underscores the need to verify information before sharing it: misinformation spreads quickly and can cause real harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate thread. This promising technology has the potential either to enrich our society or to unravel it entirely. The latest frontier is generative AI, a realm teeming with both potential and peril, where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
To manage the potential threats posed by manipulative chatbots in the context of the 2024 general elections, it is crucial to establish clear rules and safeguards.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, ethical use is essential. A code of ethics for chatbot interactions should forbid manipulation, the dissemination of false information, and attempts to sway users' political opinions, ensuring that chatbots are held to clear moral guidelines.
To preserve transparency and accountability, independent audits need to be carried out. Users can feel more confident knowing that chatbot behaviour and data collection practices are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be prohibited outright: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch for is unlawful data collection. Businesses must obtain consumers' express consent before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, as doing so can result in manipulation and false information.
Impartiality is essential. Bots should not advocate for, or take part in, political activities that favour one political party over another; fairness and neutrality are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india
The concept of web accessibility (i.e., access to the internet) stems from the recognition of internet access as an inalienable right. In 2016, the United Nations Human Rights Council (UNHRC) recognised access to the Internet as an essential human right. The Supreme Court of India has likewise declared internet access a fundamental right under the Constitution of India. Various international instruments to which India is a signatory, such as the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), mandate access to information. The heavy reliance on the internet and websites necessitates making the web space inclusive, navigable and accessible to all individuals, including persons with disabilities.
Various laws mandate web accessibility:
- Rights of Persons with Disabilities Act, 2016: The Rights of Persons with Disabilities Act, 2016 is the primary legislation for the protection of the rights of persons with disabilities and for ensuring their full participation. The Act provides several direct and indirect provisions (such as Section 2(y) on "Reasonable Accommodation", Section 40 on "Accessibility", and Section 42 on "Access to Information and Communication Technology") to ensure that technology products and services are accessible to persons with disabilities.
- Rights of Persons with Disabilities Rules 2017: The 2017 rules under Rule 15 (2) task the respective Ministries and Departments to ensure compliance with accessibility standards.
- Guidelines for Indian Government Websites (GIGW): The GIGW provide a framework for websites to be designed in accordance with Web Content Accessibility Guidelines (WCAG) 2.0 standards. Under the GIGW, websites can obtain certification from the Standardisation Testing and Quality Certification (STQC) Directorate after an audit.
Various other policies include:
- National Policy on Universal Electronic Accessibility, 2013: The National Policy ("Policy") on Electronic Accessibility recognizes the need to eliminate discrimination on the basis of disabilities and to facilitate equal access to Electronics & ICTs. The National Policy also recognizes the diversity of differently-abled persons and provides for their specific needs. The Policy covers accessibility requirements in the area of Electronics & ICT by different stakeholders. It recognizes the need to ensure that accessibility standards, guidelines and universal design concepts are adopted and adhered to.
- Web Content Accessibility Guidelines (WCAG): The WCAG define how to make web content more accessible to persons with disabilities. Adherence to the guidelines is voluntary, and several versions have been issued over the years. The guidelines rest on four principles: content must be perceivable, operable, understandable and robust. Following them provides a path to ensuring compliance and demonstrating reasonable accommodation for persons with disabilities.
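Automated tooling can catch a small subset of WCAG failures. As an illustration only (real audits rely on dedicated tools such as axe-core or WAVE, plus manual testing), the following sketch flags one common violation of the "perceivable" principle: images that lack alternative text.

```python
# Minimal sketch of an automated check for one WCAG "perceivable" failure:
# <img> elements with a missing or empty alt attribute. Uses only the
# Python standard library; real audits cover far more success criteria.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.violations.append(attr_map.get("src", "<no src>"))

page = '<img src="logo.png" alt="Site logo"><img src="banner.jpg">'
checker = AltTextChecker()
checker.feed(page)
print(checker.violations)  # prints ['banner.jpg']
```

A check like this, run across a site's pages, is one way the "accessibility barriers" reported in government website audits are counted, though screen-reader and keyboard-navigation testing still requires human review.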
However, despite these laws, web accessibility remains a challenge. A vast majority of Indian websites, especially e-commerce platforms and several government websites, remain inaccessible to persons with disabilities and often do not conform to international accessibility standards. A report by the Centre for Internet and Society states that of the 7,800 websites of the Government of India it examined, 5,815 had accessibility barriers and 1,985 failed to open. The report also notes that more than half of the websites had no navigation markup and only 52 offered the option to change colours.

The Ministry of Electronics and Information Technology (MeitY), during the 258th Session of the Rajya Sabha on 9 December 2022, noted that 95 Central Government websites were made accessible to persons with disabilities during the COVID-19 pandemic, but only 45 Central Government websites had been certified as compliant under the Guidelines for Indian Government Websites (GIGW). As of that date, certification of the remaining government websites remained incomplete due to the pandemic. MeitY also stated that the Department of Empowerment of Persons with Disabilities had, in 2017, sanctioned a project to be implemented by ERNET India for making 917 websites of States and Union Territories accessible; under the project, a total of 647 websites had been made accessible as of that date.
Conclusion
While India has established a robust legal framework and policies emphasizing the importance of web accessibility as a fundamental right, the existing gap between legislation and effective implementation poses a significant challenge. The reported accessibility barriers on numerous government and e-commerce websites indicate a pressing need for heightened efforts in enforcing and enhancing accessibility standards.
In addressing these challenges, continued collaboration between government agencies, private entities and advocacy groups can play a crucial role. Ongoing monitoring, regular audits and public awareness campaigns may contribute to improving accessibility for persons with disabilities to ensure an inclusive environment and compliance with fundamental laws.
References:
- https://www.legalserviceindia.com/legal/article-2967-right-to-internet-and-fundamental-rights.html
- https://www.indiacode.nic.in/bitstream/123456789/15939/1/the_rights_of_persons_with_disabilities_act%2C_2016.pdf
- https://www.meity.gov.in/writereaddata/files/National%20Policy%20on%20Universal%20Electronics%281%29_0.pdf
- https://www.w3.org/TR/WCAG21/#:~:text=Web%20Content%20Accessibility%20Guidelines%20(WCAG)%202.1%20defines%20how%20to%20make,%2C%20learning%2C%20and%20neurological%20disabilities.
- https://www.boia.org/blog/india-digital-accessibility-laws-an-overview
- https://cis-india.org/accessibility/accessibility-of-govt-websites.pdf/view
- https://sansad.in/rs/questions/questions-and-answers

Executive Summary
A video circulating widely on social media claims to show a pilot of the Indian Air Force (IAF) crying and expressing fear about flying fighter jets, allegedly citing poor maintenance and frequent crashes. The clip is being linked to the crash of an IAF Sukhoi-30 fighter jet in Assam on March 5, in which two pilots lost their lives. In the viral video, a man dressed like a pilot is seen speaking emotionally, saying that flying fighter jets has become frightening due to lack of maintenance and repeated accidents. Several users are sharing the clip claiming that the man in the video is an IAF pilot revealing the reality behind aircraft crashes. However, research by CyberPeace found the claim to be false. The video does not depict a real pilot or an actual incident; it appears to be an AI-generated clip created and circulated with the intent to spread misinformation.
Claim:
An Instagram user, ‘samacharsaar0’, shared the viral video on March 10, 2026, with the English caption: “2300 aircraft crashes, 1300 pilots dead: A major challenge before the IAF.”
- Source: https://www.instagram.com/reel/DVqa4lNiYJQ
- Archived link: https://perma.cc/EUZ8-DHE3

Fact Check:
The claim was also debunked by PIB Fact Check. While verifying the viral video, PIB clarified that the clip is artificially generated and not related to any real IAF personnel.
To further verify the authenticity of the video, we analyzed it using AI detection tools. The tool Hive Moderation indicated a 99.9% probability that the video was generated using artificial intelligence.

We also examined the clip using another AI detection platform, Undetectable. The analysis suggested an 82% likelihood that the video was created with AI tools. The tool also indicated the possibility that the footage may have been generated using the Sora AI video generation tool.
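When multiple detectors are used, their scores must be combined into a single working verdict. The sketch below shows one simple way to do that with the two probabilities reported above; the averaging scheme, thresholds and labels are illustrative assumptions for demonstration and do not reflect how Hive Moderation or Undetectable actually work internally.

```python
# Illustrative sketch of combining scores from multiple AI-content
# detectors into a coarse verdict. Thresholds and labels are assumptions
# chosen for demonstration, not any tool's documented methodology.

def verdict(scores, likely=0.8, uncertain=0.5):
    """Average detector probabilities and map the result to a label."""
    avg = sum(scores) / len(scores)
    if avg >= likely:
        return "likely AI-generated", avg
    if avg >= uncertain:
        return "uncertain", avg
    return "likely authentic", avg

# Scores reported for the viral clip: Hive (99.9%) and Undetectable (82%).
label, avg = verdict([0.999, 0.82])
print(label)  # prints: likely AI-generated
```

In practice, fact-checkers treat such scores as one signal among several, alongside provenance checks and official statements such as the PIB clarification cited above, rather than as conclusive proof on their own.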

Conclusion
Our research concludes that the viral video of a crying “pilot” is not authentic. The clip has been created using artificial intelligence and is being misleadingly shared as a real Indian Air Force pilot speaking about aircraft crashes. The government has also denied the claim associated with the video.