#FactCheck- Manhattan Fire Video Falsely Shared as Hezbollah Attack on Israel
Executive Summary
A video showing a building engulfed in flames is going viral on social media, with users claiming it depicts an attack by Hezbollah on Israel’s military headquarters. The clip is being shared with assertions that several Israeli soldiers were killed and many remain trapped inside the burning structure. However, research by the CyberPeace Research Wing found the claim to be false. The viral video is not from Israel but from Manhattan, New York City, where a residential building caught fire.
Claim
A Facebook user, ‘Nazim Khan Tirwadiya’, shared the video on April 15, 2026, claiming that Hezbollah had targeted an Israeli military headquarters, resulting in heavy casualties and ongoing fire.

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. This led us to a longer version of the same clip uploaded on the YouTube channel “FDNY Response Videos” on April 12, 2026. The video description identified the location as Manhattan, New York City.
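Reverse image search tools generally match near-duplicate frames through perceptual hashing rather than exact byte comparison, which is why a re-uploaded or re-encoded clip can still be traced to its source. As a rough illustration of that idea (not the specific tools used in this fact-check; the helper functions below are hypothetical names of our own), here is a minimal average-hash (aHash) comparison in Python:

```python
def average_hash(pixels):
    """Compute a simple average hash (aHash) for a grayscale image,
    given as a 2D list of pixel intensities (0-255).
    Each bit is 1 if the pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two tiny 4x4 "frames": the second is a uniformly brightened copy of the first,
# mimicking a re-encoded or filtered re-upload of the same footage.
frame_a = [[10, 20, 200, 210],
           [15, 25, 205, 215],
           [12, 22, 202, 212],
           [11, 21, 201, 211]]
frame_b = [[p + 5 for p in row] for row in frame_a]  # brightness shift

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # 0: a brightness shift doesn't change which pixels exceed the mean
```

Because each bit depends only on a pixel's relation to the frame's own mean, small global edits (brightness, mild compression) leave the hash unchanged, which is what makes such fingerprints useful for tracing re-uploaded clips.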

Further keyword searches led us to a report published by ABC7NY on April 12, 2026. According to the report, a massive fire broke out in a six-storey apartment building in Manhattan’s Midtown area around 6 a.m. Firefighters worked extensively to control the blaze, and two firefighters sustained minor injuries. No fatalities were reported.

Conclusion
The viral claim is false. The video does not show an attack on Israel by Hezbollah. Instead, it captures a fire incident in a residential building in Manhattan, New York City. The clip has been shared with a misleading narrative unrelated to the actual event.

Introduction
In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
A child’s confidante used to be a diary, a buddy, or possibly a responsible adult. These days, that confidante is a chatbot: invisible, industrious, and constantly online. ChatGPT and similar tools were developed to answer queries, draft emails, and simplify life. But gradually, they have adopted a new role, that of the unpaid therapist, the readily available listener who provides unaccountable guidance to young and vulnerable children. This function is frighteningly evident in the events unfolding in the case filed in the Superior Court of the State of California, Matthew Raine & Maria Raine v. OpenAI, Inc. & Ors. The lawsuit, obtained by the BBC, charges OpenAI with wrongful death and negligence. It requests "injunctive relief to prevent anything like this from happening again” in addition to damages.
This is a heartbreaking tale of a boy, not yet seventeen, who turned to an algorithm rather than to family and friends, and found his hopelessness affirmed instead of being directed towards professional help. OpenAI’s legal future may well be decided in a San Francisco courtroom, but the ethical issues this case presents already outweigh any verdict.
When Machines Mistake Empathy for Encouragement
The lawsuit claims that Adam used ChatGPT for academic purposes but, over time, came to cast it in the role of a friend. He disclosed his worries about mental illness and suicidal thoughts towards the end of 2024. In an effort to “empathise”, the chatbot told him that many people find “solace” in imagining an escape hatch, thereby normalising suicidal thoughts rather than guiding him towards assistance. ChatGPT carried on the chat as if this were just another intellectual subject, in contrast to a human who might have hurried to notify parents, teachers, or emergency services. The lawsuit walks through various conversations in which the teenager uploaded photographs of himself showing signs of self-harm, and adds that the programme “recognised a medical emergency but continued to engage anyway”.
This is not an isolated case. Another report, from March 2023, describes how a Belgian man allegedly died by suicide after speaking with an AI chatbot. The Belgian newspaper La Libre reported that the man, Pierre, spent six weeks discussing climate change with the AI bot ELIZA; after the discussion became “increasingly confusing and harmful,” he took his own life. According to a guest essay published in The New York Times, a Common Sense Media survey found that 72% of American youth reported using AI chatbots as friends. Almost one in eight had turned to them for “emotional or mental health support,” which translates to 5.2 million teenagers in the US. And in a recent study by Stanford researchers, nearly 25% of students who used Replika, an AI chatbot created for companionship, said they used it for mental health care.
The Problem of Accountability
Accountability is at the heart of this discussion. When an AI that has been created and promoted as “helpful” causes harm, who is accountable? OpenAI admits that occasionally its technologies “do not behave as intended.” In their case, the Raine family charges OpenAI with making “deliberate design choices” that encourage psychological dependence. If proven, this will be not only a landmark in AI litigation but a turning point in how society defines negligence in the digital age. Young people remain the most at risk, because they trust the chatbot as a personal confidante and are unaware that it cannot distinguish between seriousness and triviality, or between empathy and enablement.
A Prophecy: The De-Influencing of Young Minds
The prophecy of our time is stark: if kids aren’t taught to view AI as a tool rather than a friend, we run the risk of producing a generation that is too readily influenced by unaccountable voices. We must now teach young people to resist over-reliance on algorithms for concerns of the heart and mind, just as society once taught them to question commercials, to spot propaganda, and to avoid peer pressure.
Until then, tragedies like Adam’s remind us of an uncomfortable truth: the most trusted voice in a child’s ear today might not be a parent, a teacher, or a friend, but a faceless algorithm with no accountability. And that is a world we must urgently learn to change.
CyberPeace has been at the forefront of advocating the ethical and responsible use of such AI tools. The solution lies in a harmonious balance between regulation, technological development, and user awareness and responsibility.
If you or anyone you know is facing mental health concerns, anxiety, or similar difficulties, please seek professional help and actively suggest it to others. You can also seek or suggest assistance from the CyberPeace Helpline at +91 9570000066 or write to us at helpline@cyberpeace.net.
References
- https://www.bbc.com/news/articles/cgerwp7rdlvo
- https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html
- https://www.nytimes.com/2025/08/25/opinion/teen-mental-health-chatbots.html

Introduction
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, has emphasised the need for an open internet. He stated that no platform can deny content creators access to distribute and monetise content and that large technology companies have begun to play a significant role in the digital evolution. Chandrasekhar emphasised that the government does not want the internet or monetisation to be in the purview of just one or two companies and does not want 120 crore Indians on the internet in 2025 to be catered to by big islands on the internet.
The Voice for Open Internet
India's Minister of State for IT, Rajeev Chandrasekhar, has stated that no technology company or social media platform can deny content creators access to distribute and monetise their content. Speaking at the Digital News Publishers Association Conference in Delhi, Chandrasekhar emphasised that the government does not want the internet or monetisation of the internet to be in the hands of just one or two companies. He argued that the government does not like monopoly or duopoly and does not want 120 crore Indians on the Internet in 2025 to be catered to by big islands on the internet.
Chandrasekhar highlighted that large technology companies have begun to exert influence when it comes to the dissemination of content, which has become an area of concern for publishers and content creators. He stated that if any platform finds it necessary to block any content, they need to give reasons or grounds to the creators, stating that the content is violating norms.
As India tries to establish itself as an innovator in the technology sector, the government announced a corpus of Rs 1 lakh crore in the interim Budget of 2024-25. As big companies continue to tighten their grip on the sector, content moderation has become crucial. Under the IT Rules, 11 categories of content are unlawful under the IT Act and criminal law. Platforms must ensure no user posts content that falls under these categories, take down any such content, and ensure that violating users face consequences ranging from de-platforming to prosecution. Chandrasekhar believes that the government has to protect the fundamental rights of people and emphasises legislative guardrails to ensure platforms are accountable for the correctness of the content.
Monetizing Content on the Platform
'No platform can deny a content creator access to the platform to distribute and monetise it,' Chandrasekhar declared, boldly laying down a gauntlet that defies the prevailing norms. This tenet signals a nascent dawn where creators may envision reaping the rewards borne of their creative endeavours, unfettered by platform restrictions.
An increasingly contentious issue that shadows this debate is the moderation of content within the digital realm. In this vast uncharted expanse, the powers that be within these monolithic platforms assume the mantle of vigilance—policing the digital avenues for transgressions against a prescribed code of conduct. Under the stipulations of India's IT Rules, for example, platforms are duty-bound to interdict user content that strays into territories encompassing a spectrum of 11 delineated unlawful categories. Violations span the gamut from the infringement of intellectual property rights to the propagation of misinformation—each category necessitating swift and decisive intervention. Chandrasekhar raised the alarm against misinformation—a malignant growth fed by the fertile soils of innovation—a phenomenon wherein media reports chillingly suggest that up to half of the information circulating on the internet might be a mere fabrication, a misleading simulacrum of authenticity.
The government's stance, as expounded by Chandrasekhar, pivots on an axis of safeguarding citizens' fundamental rights, compelling digital platforms to shoulder the responsibility of arbiters of truth. 'We are a nation of over 90 crores today, a nation progressing with vigour, yet we find ourselves beset by those who wish us ill,' he remarked.
Upcoming Digital India Act
Looming upon the horizon, India's proposed Digital India Act (DIA), still in its embryonic stage of pre-consultation deliberation, seeks to sculpt these asymmetries into a more balanced form. Chandrasekhar hinted that the DIA may include regulatory measures shaping the interactions between platforms and the mosaic of content creators who inhabit them. Although specifics await the crucible of public discourse and the formalities of consultation, indications of a maturing framework are palpable.
Conclusion
It is essential that the fable of digital transformation reverberates with the voices of individual creators, the very lifeblood propelling the vibrant heartbeat of the internet's culture. These are the voices that must echo at the centre stage of policy deliberations and legislative assembly halls; these are the visions that must guide us, and these are the rights that we must uphold. As we stand upon the precipice of a nascent digital age, the decisions we forge at this moment will cascade into the morrow and define the internet of our future. This internet must eternally stand as a bastion of freedom, of ceaseless innovation and as a realm of boundless opportunity for every soul that ventures into its infinite expanse with responsible use.
References
- https://www.financialexpress.com/business/brandwagon-no-platform-can-deny-a-content-creator-access-to-distribute-and-monetise-content-says-mos-it-rajeev-chandrasekhar-3386388/
- https://indianexpress.com/article/india/meta-content-monetisation-social-media-it-rules-rajeev-chandrasekhar-9147334/
- https://www.medianama.com/2024/02/223-rajeev-chandrasekhar-content-creators-publishers/

Executive Summary
A video circulating on social media claims that a Jaguar fighter jet of the Indian Air Force (IAF) failed to land during a takeoff and landing exercise held on April 22, 2026, at the Purvanchal Expressway in Uttar Pradesh. The claim suggests that the incident disrupted preparations for “Operation Sindoor.” However, research by the CyberPeace Research Wing has found the claim to be false.
Claim
The video was shared by a Facebook user, ‘Meera MJ,’ alleging that the Jaguar aircraft could not land during the exercise conducted near Sultanpur.

Fact Check
To verify the authenticity of the video, multiple keyframes were extracted and analysed using reverse image search tools. This led to the original footage shared by ANI on its official X (formerly Twitter) handle on April 22, 2026. The authentic video of the air show shows no such incident of a failed landing.

A detailed review of ANI’s social media posts also revealed no evidence supporting the viral claim. This strongly indicates that the circulating clip was created by digitally altering the original footage.

Further corroboration came from a report published by Bhaskar.com, which covered the air show extensively. According to the report, the event featured successful operations by multiple aircraft, including the C-295 transport aircraft landing on the expressway airstrip, followed by Jaguar jets taking off. Sukhoi and Mirage fighter jets also performed takeoff and landing drills, while Mi-17 helicopters carried out commando mock operations. Additionally, the M32 Bhishma aircraft conducted ‘touch and go’ manoeuvres.

Conclusion
The viral claim that a Jaguar fighter jet failed to land during the Indian Air Force drill is baseless. The video being circulated is digitally manipulated and does not reflect any real incident.