#FactCheck - Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, contributing to growing communal tension and misinformation. However, a detailed fact-check revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.

Conclusion: The viral claim that a mosque was set on fire in India is not true. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This incident underscores the need to verify information before spreading it. Misinformation can spread quickly and cause harm; by taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
A zero-click cyber attack relies solely on software and hardware flaws, bypassing any human interaction, to infect a device and take control of its data. Such an attack is almost impossible to discover unless someone is closely monitoring the device's network traffic.
At Kaspersky, security analysts used their SIEM solution KUMA to monitor their corporate WiFi network traffic and discovered this mysterious attack. They took necessary actions to investigate it and even went a step further to dive right into the action and uncover the entire attack chain.
A few months ago, Kaspersky shared their findings about this attack on iOS devices. They shared how these zero-click vulnerabilities were being exploited by the attackers and called this attack ‘Operation Triangulation’.
A zero-click exploit in the network
Kaspersky detected a zero-click attack on the iPhones of their colleagues while monitoring their corporate WiFi network traffic. They managed to get detailed information on all the stages of the attack simply by identifying a pattern in the domain names flowing through their network. Although the attackers were quite experienced, their mistakes helped Kaspersky uncover critical vulnerabilities affecting iOS devices.
The name-pattern
These previously unsuspected domains followed a similar naming style: two common words joined together and ending in ‘.com’, such as ‘backuprabbit.com’ and ‘cloudsponcer.com’. They were used in pairs, one serving the exploitation process and the other acting as a command-and-control server. These domains showed high outbound traffic, were registered with Namecheap, and were protected by Cloudflare.
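To make the idea concrete, here is a minimal sketch of how such a naming pattern could be flagged in proxy or DNS logs. This is an illustration, not Kaspersky's actual tooling; the word list and domains are invented:

```python
# Hypothetical detector for ".com" domains built from two common words,
# e.g. "backuprabbit.com". The tiny word list stands in for a dictionary.
WORDS = {"backup", "rabbit", "cloud", "sponcer", "news", "mail"}

def splits_into_two_words(label: str) -> bool:
    """True if the domain label is a concatenation of two known words."""
    return any(label[:i] in WORDS and label[i:] in WORDS
               for i in range(1, len(label)))

def is_suspicious(domain: str) -> bool:
    """Flag '.com' domains whose label splits into exactly two words."""
    if not domain.endswith(".com"):
        return False
    return splits_into_two_words(domain[:-len(".com")])

for d in ("backuprabbit.com", "cloudsponcer.com", "example.com"):
    print(d, is_suspicious(d))  # True, True, False
```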
The network pattern
Each connection to these suspicious domains was preceded by an iMessage connection, indicating that the domains were being accessed by iOS devices. The devices connected to these domains, downloaded attachments, performed a few requests to the first domain, an exploitation framework server, and then made regular connections to the second domain, the command-and-control server operated by the attackers.
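As a hedged illustration of this sequence detection, the sketch below correlates suspicious-domain connections with a preceding iMessage connection; the event names and 30-second window are assumptions, not Kaspersky's actual rule:

```python
from datetime import datetime, timedelta

# Toy connection log: (timestamp, event_type, destination).
events = [
    (datetime(2023, 6, 1, 10, 0, 0), "imessage", "apple-imessage-server"),
    (datetime(2023, 6, 1, 10, 0, 4), "https", "backuprabbit.com"),
    (datetime(2023, 6, 1, 11, 0, 0), "https", "example.com"),
]

SUSPICIOUS = {"backuprabbit.com", "cloudsponcer.com"}
WINDOW = timedelta(seconds=30)  # assumed correlation window

last_imessage = None
for ts, kind, dest in sorted(events):
    if kind == "imessage":
        last_imessage = ts
    elif (dest in SUSPICIOUS and last_imessage is not None
          and ts - last_imessage <= WINDOW):
        delta = (ts - last_imessage).total_seconds()
        print(f"ALERT: {dest} contacted {delta:.0f}s after an iMessage connection")
```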
Getting more information
To get more information about the attack, all infected devices were collected and backed up after carefully informing the device owners. Although the attackers had managed to clean up their artefacts, the backed-up data was used to perform digital forensic procedures and find traces of the attack. This helped Kaspersky figure out how the infection might be taking place.
The attacker’s mistakes
The attackers deleted all the attachment files and exploits but did not delete the modified SMS attachment folder, which was left empty. They removed evidence from other databases as well, such as ‘SMS.db’; however, another database, ‘datausage.sqlite’, was not sanitised.
The ‘datausage.sqlite’ database is one of the most important databases in iOS forensics, as its contents can be used to track application and network usage. Upon examination of this database, a process logged as ‘BackupAgent’ was found to be making network connections at the same times the device was connecting to the suspicious domains.
The indicator of compromise
‘BackupAgent’ stood out in this scenario because, although it is a legitimate binary, it has been deprecated since iOS 4 and should not have been making any network connections. This identified the ‘BackupAgent’ process as the first solid indicator of compromise in Operation Triangulation. The indicator, termed ‘data usage by process BackupAgent’, was used to determine whether a specific device was infected.
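A minimal sketch of how this check might look against a copy of ‘datausage.sqlite’ extracted from a backup. The table and column names (ZPROCESS, ZLIVEUSAGE, and so on) follow commonly documented layouts of this database, but treat them as assumptions and verify them against your own copy:

```python
import sqlite3

# Path to a copy of the database extracted from a device backup.
conn = sqlite3.connect("datausage.sqlite")

rows = conn.execute(
    """
    SELECT p.ZPROCNAME, l.ZTIMESTAMP, l.ZWIFIIN, l.ZWIFIOUT
    FROM ZLIVEUSAGE AS l
    JOIN ZPROCESS AS p ON p.Z_PK = l.ZHASPROCESS
    WHERE p.ZPROCNAME = 'BackupAgent'
    ORDER BY l.ZTIMESTAMP
    """
).fetchall()

# Any hit is suspicious: BackupAgent is deprecated and should not be
# generating network-usage records at all.
for name, ts, wifi_in, wifi_out in rows:
    print(f"{name}: t={ts} wifi_in={wifi_in} wifi_out={wifi_out}")
```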
Taking it a step ahead
The team at Kaspersky successfully identified the indicator of compromise and determined which devices were infected. But since the attackers were experienced enough to delete their payloads, the team decided to set a trap and perform a man-in-the-middle attack of their own, which the attackers failed to detect.
The man-in-the-middle attack
Kaspersky prepared a server with ‘WireGuard’ and ‘mitmproxy’, installed root certificates on devices that could be used as targets by the attackers, and routed all network traffic from those devices through that server. They also developed a ‘Telegram’ bot to notify them about new infections as they decrypted the network traffic.
Setting up the bot proved to be an effective way of monitoring in real time while modifying all the network packets on the fly with ‘mitmproxy’, giving the team full control over the intercepted traffic. The trap succeeded in capturing a payload sent by the attackers, which was then analysed in detail.
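A heavily simplified sketch of what such a monitoring add-on could look like; the suspicious-domain list, bot token, and chat ID are placeholders, and the real setup was far more involved:

```python
# Minimal mitmproxy add-on sketch (run with: mitmproxy -s notify_addon.py).
import urllib.parse
import urllib.request

from mitmproxy import http

SUSPICIOUS = {"backuprabbit.com", "cloudsponcer.com"}
BOT_TOKEN = "<telegram-bot-token>"  # placeholder
CHAT_ID = "<chat-id>"               # placeholder

def notify(text: str) -> None:
    """Send a message through the Telegram Bot API."""
    data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
    urllib.request.urlopen(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage", data=data)

class InfectionMonitor:
    def response(self, flow: http.HTTPFlow) -> None:
        # Alert on any decrypted response coming from a suspicious domain.
        if flow.request.host in SUSPICIOUS:
            notify(f"New payload from {flow.request.host} "
                   f"({len(flow.response.content)} bytes)")

addons = [InfectionMonitor()]
```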
The name was in the payload
The payload was an HTML page with obfuscated JavaScript that performed various code checks and canvas fingerprinting. It rendered a yellow triangle and calculated its hash value, which is why the operation was named Operation Triangulation.
The team at Kaspersky then worked through the payload's layers of encryption. Using regular expressions, they patched the stages one by one on the fly, moving the logic of each stage into ‘mitmproxy’, and finally implemented a 400-line ‘mitmproxy’ add-on. This add-on decrypted all the validators, exploits, spyware and additional modules.
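The sketch below shows only the patch-on-the-fly idea in miniature; it is hypothetical, and the 'send(encrypt(...))' pattern and the domain are invented stand-ins rather than the actual payload code:

```python
# Hypothetical mitmproxy add-on that rewrites a JavaScript stage in transit
# so that an intermediate value is leaked before it is encrypted and sent.
import re

from mitmproxy import http

class StagePatcher:
    def response(self, flow: http.HTTPFlow) -> None:
        if flow.request.host != "cloudsponcer.com":  # placeholder domain
            return
        body = flow.response.text or ""
        # Insert a call to an assumed leak() helper before the original send.
        flow.response.text = re.sub(
            r"send\(encrypt\((\w+)\)\)",
            r"leak(\1); send(encrypt(\1))",
            body)

addons = [StagePatcher()]
```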
The mystery
It is remarkable how Kaspersky detected the attack, identified multiple vulnerabilities, set up a trap to capture a payload, and decrypted it completely. They shared all their findings with the device manufacturer, and Apple responded by shipping security patch updates addressing four zero-day vulnerabilities.
A zero-click vulnerability
Traditionally, spyware relies on the user to click on a compromised link or file to initiate the infection. A zero-click vulnerability, however, is a flaw in the device's software or hardware that the attacker can use to infect the device without the need for a click or tap from the user.
The vulnerabilities identified
- Tricky Font Flaw (CVE-2023-41990): A clandestine method involving the manipulation of font rendering on iPhones, akin to a secret code deciphered by the attackers. Apple swiftly addressed this vulnerability in iOS 15.7.8 and iOS 16.3.
- Kernel Trick (CVE-2023-32434): Exploiting a hidden language understood only by the iPhone's core, the attackers successfully compromised the kernel's integrity. Apple responded with fixes implemented in iOS 15.7.7, iOS 15.8, and iOS 16.5.1.
- Web Sneakiness (CVE-2023-32435): Leveraging a clever ploy in the interpretation of web content by iPhones, the attackers manipulated the device's behaviour. Apple addressed this vulnerability in iOS 15.7.7 and iOS 16.5.1.
- Kernel Key (CVE-2023-38606): The pinnacle of the operation, the attackers discovered a covert method to tamper with the iPhone's core, the kernel. Apple responded with a fix introduced in iOS 16.6, thwarting the intrusion into the most secure facets of the iPhone.
How the attackers were able to find these critical vulnerabilities in a device that stands out for its security features, however, remains unknown.
CyberPeace Advisory
Zero-click attacks are a real threat, but you can defend yourself. Being aware of the risks and taking proactive steps can significantly reduce vulnerability. Regularly installing the latest updates for your operating system, apps, and firmware helps patch vulnerabilities before attackers can exploit them.
- Keep your software updated, as updates contain crucial security patches that plug vulnerabilities before attackers can exploit them.
- Use security software to actively scan for suspicious activity and malicious code, acting as a first line of defence against zero-click intrusions.
- Be cautious with unsolicited messages: if an offer seems too good to be true or a link appears suspicious, it may carry malware that can infect your device.
- Disable automatic previews, as they can trigger malicious code hidden within message content.
- Be mindful of what you install and avoid unverified apps and pirated software, as they can be Trojan horses laden with malware.
- Stay informed about the latest threats by following reliable news sources and security blogs, so you can stay ahead of the curve, recognise potential zero-click scams, and adjust your behaviour accordingly.
Check out our [advisory report](add report link) for in-depth information.
Conclusion
Operation Triangulation stands as a testament to the continuous cat-and-mouse game between cybercriminals and tech giants. While the covert spy mission showcased the vulnerabilities present in earlier iPhone versions, Apple's prompt response underscores the commitment to user security. As the digital landscape evolves, vigilance, timely updates, and collaborative efforts remain essential in safeguarding against unforeseen cyber threats.
References:
- Operation Triangulation: iOS devices targeted with previously unknown malware | Securelist, 1 June 2023
- Operation Triangulation: The last (hardware) mystery | Securelist, 27 December 2023
- 37C3 - Operation Triangulation: What You Get When Attack iPhones of Researchers (youtube.com), 29 December 2023

Executive Summary:
A video featuring a hologram of Lord Ram on a clock tower, purportedly from Lal Chowk in Srinagar, has gone viral on the internet. The CyberPeace Research Team discovered that the footage is from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip being shared across the internet, mostly on X and Facebook, shows a car passing a clock tower bearing an image of Lord Ram. As the car moves forward, a screen by the side of the road showcasing songs about Lord Ram comes into view.

The claim is that the video is from Srinagar, Kashmir.

Fact Check:
The CyberPeace Research Team found that the information is false. First, we ran keyword searches based on the caption and found that the clock tower in Srinagar does not resemble the one in the video.

We found an NDTV article mentioning Srinagar Lal Chowk's clock tower, which is the only clock tower there and stands in the middle of the road. This made us fairly confident that the video is not from Srinagar. We then broke the video into frames and ran a reverse image search.
We found another video showing a similar tower in Dehradun.

Taking a cue from this, we searched for the tower in Dehradun and compared it with the video. This confirmed that it is the clock tower in Paltan Bazar, Dehradun, and that the video is indeed from Dehradun, not Srinagar.
Conclusion:
After a thorough fact-check investigation into the video and its origin, we found that the visual of Lord Ram on the clock tower is from Dehradun, not Srinagar. The claim circulating among internet users that the visual is from Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new ChatGPT models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: the probability of hallucinations can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming, and seemingly counterintuitive, is that newer, more advanced models are producing more hallucinations, not fewer. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for errors at each step, especially when no factual retrieval or grounding is involved.
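To make the "statistical patterns" point concrete, here is a toy sketch of next-token selection; the vocabulary and scores are invented:

```python
import math

# Toy next-token scores after the prompt "The capital of France is".
# The model only knows which continuation is *likely*, not which is *true*.
logits = {"Paris": 9.1, "Lyon": 5.3, "Berlin": 4.8, "pizza": 0.2}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding picks the most probable token; a plausible-but-wrong
# token can win whenever the training data pointed that way.
best = max(probs, key=probs.get)
print(best, f"{probs[best]:.3f}")
```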
Reports on TechCrunch note that when users ask AI models for short answers, hallucinations increase by up to 30%. A study covered by eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This was not limited to that particular Large Language Model; similar ones, such as DeepSeek, showed the same behaviour. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only looks real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other critical events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, developers are already aware of such instances and are actively charting out ways to reduce the probability of this error. Some of these are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model's internal knowledge, RAG allows the model to "look up" information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (a minimal sketch follows this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and their training data better curated.
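As promised above, a minimal sketch of the RAG idea. The corpus, the naive overlap-based retriever, and the generate() placeholder are stand-ins, not any specific vendor's API:

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the prompt.
CORPUS = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres tall.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an API client)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

query = "How tall is the Eiffel Tower?"
context = "\n".join(retrieve(query))
prompt = ("Answer using ONLY the context below. If the context is "
          f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {query}")
print(generate(prompt))
```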
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations