# FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.
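Matching a viral clip against earlier uploads is often automated with perceptual hashing: visually identical frames produce nearly identical hashes even after re-encoding or recompression. The sketch below is a minimal pure-Python illustration of average hashing, not the tooling used in this fact-check; it assumes frames have already been decoded into 2D grayscale pixel grids, which a real pipeline would obtain from a video decoder such as ffmpeg or OpenCV.

```python
def average_hash(pixels, size=8):
    """Compute a simple 64-bit average hash of a grayscale image.

    pixels: 2D list of grayscale values (0-255), at least size x size.
    Each bit marks whether a downsampled cell is brighter than the mean.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    # Box-downsample to size x size by averaging each cell's pixels.
    for r in range(size):
        for c in range(size):
            rows = range(r * h // size, (r + 1) * h // size)
            cols = range(c * w // size, (c + 1) * w // size)
            vals = [pixels[i][j] for i in rows for j in cols]
            cells.append(sum(vals) / len(vals))
    mean = sum(cells) / len(cells)
    bits = 0
    for v in cells:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")
```

Near-duplicate frames typically differ by only a few of the 64 bits, while unrelated frames differ in roughly half of them; production fact-checking workflows use mature libraries (e.g. pHash or Python's imagehash) and reverse-image-search services rather than hand-rolled code like this.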

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The ongoing armed conflict between Israel and Hamas is in the news all across the world. The latest escalation was triggered by unprecedented attacks against Israel by Hamas militants on October 7, which killed thousands of people; Israel has since launched a massive counter-offensive against the militant group. Amid the war, false information and propaganda have spread across social media platforms: tech researchers have detected a network of 67 accounts that posted false content about the war and received millions of views. The European Commission has sent a letter to Elon Musk directing X to remove illegal content and disinformation, warning that penalties can otherwise be imposed, and has formally requested information from several social media giants on their handling of content related to the Israel-Hamas war. Such widespread disinformation shapes perceptions of the war, erodes public goodwill, and allows bad actors to weaponise information, fuelling online hate, terrorism, extremism, and political polarisation. The online information environment surrounding the conflict is flooded with misinformation, disinformation, and fake narratives and videos that amplify the impact of the war.
Response of social media platforms
The proliferation of online misinformation and violent content surrounding the war raises questions for social media companies about content moderation and policy. Notably, Instagram, Facebook and X (formerly Twitter) all have features that give users the ability to decide what content they want to view and to limit potentially sensitive content from being displayed in search results.
Experts say it is of paramount importance to establish control in this regard and to define what is permissible online and what is not. Doing so requires expertise to assess each situation and, most importantly, robust content moderation policies.
During wartime, aggrieved or provoked people are often targeted by internet disinformation that blends ideological beliefs with conspiracy theories and hatred. This is not a new phenomenon: disinformation-spreading groups routinely become active during wars and emergencies, pushing propaganda-based ideologies and influencing society at large through misrepresented facts and planted stories. Social media has made it easier to post user-generated content without proper moderation. Fighting disinformation and misinformation is therefore a shared responsibility: tech companies, users, and governments must collectively define and follow effective mechanisms through guidelines and policies.
Digital Services Act (DSA)
The newly enacted EU law, the Digital Services Act (DSA), requires large online platforms to prevent posts containing illegal content and puts limits on targeted advertising. The DSA enables users to challenge illegal online content, imposes requirements to counter misinformation and disinformation, and ensures more transparency over what users see on platforms. Its rules cover everything from content moderation and user privacy to transparency in operations. As landmark EU legislation moderating online platforms, the DSA subjects large tech platforms to content-related regulation, requiring them to curb the spread of misinformation and disinformation and to ensure a safer online environment overall.
Indian Scenario
The Indian government introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, updated in 2023, which provide for the establishment of a "fact check unit" to identify false or misleading online content. The Digital Personal Data Protection Act, 2023 has also been enacted to protect personal data. The proposed Digital India Act, expected to be tabled in Parliament, will replace the current Information Technology Act, 2000. It can be seen as future-ready legislation to strengthen India's cybersecurity posture, comprehensively addressing privacy, data protection, and the fight against growing cybercrime in the evolving digital landscape to ensure a safe digital environment. Other entities, including civil society organisations, are also actively engaged in fighting misinformation and spreading awareness about safe and responsible use of the Internet.
Conclusion:
The widespread disinformation and misinformation amid the Israel-Hamas war shows how user-generated content on social media can create an illusion of reality. Misleading posts are rampant, and the misuse of advanced AI technologies makes it even easier for bad actors to create synthetic media. At the same time, social media has connected us like never before: with billions of active users around the globe, it offers real conveniences and opportunities to individuals and businesses, and only certain aspects require our collective attention to prevent abuse. Platforms and regulatory authorities need to be vigilant and active in defining and improving policies for content regulation and for the safe, responsible use of social media, so that bad actors can be effectively prevented from misusing it. Users, for their part, should exercise care, flag and report misleading content, and always verify claims against authentic sources. With the increasing penetration of social media and the internet, misinformation remains a global issue that must be addressed through strict policies and best practices. Doing so will help create a safer Internet environment for everyone.
References:
- https://abcnews.go.com/US/experts-fear-hate-extremism-social-media-israel-hamas-war/story?id=104221215
- https://edition.cnn.com/2023/10/14/tech/social-media-misinformation-israel-hamas/index.html
- https://www.nytimes.com/2023/10/13/business/israel-hamas-misinformation-social-media-x.html
- https://www.africanews.com/2023/10/24/fact-check-misinformation-about-the-israel-hamas-war-is-flooding-social-media-here-are-the//
- https://www.theverge.com/23845672/eu-digital-services-act-explained

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of the decision-making process in various sectors, including governance, the medical field, education, security, and the economy, worldwide. On the one hand, there are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose to the world. There are also doubts about the technology’s capacity to provide effective solutions, especially when threats such as misinformation, cybercrime, and deepfakes are becoming more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Offering unbridled access to innovators while also preventing harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, making access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only people in the country where they are created but also everyone elsewhere who does business or communicates with them.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
Misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate lies, and create false narratives. Cybercriminals now use AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social engineering tactics. In the hands of extremist groups, AI can sharpen propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To counter these dangers, a global AI system must be built on principles of safety-by-design: moral safeguards incorporated from the development phase rather than added after the damage is done. Human control is just as vital. AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or auditability can produce black-box systems in which the assignment of responsibility is unclear.
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is unequal access. High-end computing capability, data infrastructure, and AI research resources remain highly concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will spur new ideas and improvements in local markets, preventing a digital divide and ensuring that the AI future is not determined exclusively by wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will reshape workplaces worldwide, so societies must equip their people not only with existing job skills but also with future technology-based skills. Massive AI literacy programmes, enhanced digital competencies, and cross-disciplinary education are essential. Preparing human resources for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting growth that is genuinely inclusive.
- Responsible and Human-Centric Deployment
Responsible AI adoption ensures that technology is used for social good and not just for profit. Human-centred AI directs applications toward sectors such as healthcare, agriculture, education, disaster management, and public services, especially in underserved regions of the world that most need these innovations. This approach ensures that technological progress improves human life rather than worsening conditions for the poor or removing responsibility from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Different national regulations lead to the creation of loopholes that allow bad actors to operate in different countries. Hence, global coordination and harmonisation of safety frameworks is of utmost importance. A single AI governance framework should stipulate:
- Clear prohibitions on AI misuse for terrorism, deepfakes, and cybercrime.
- Transparency and algorithm audits as a compulsory requirement.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this makes it clear that AI will be shaped by common values rather than by the influence of competing interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, then global mobility of talent must be made easier. The flow of innovation takes place when the interaction between researchers, engineers, and policymakers is not limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Most developing countries face poor infrastructure and a lack of education and digital resources; treating them only as technology markets, rather than as partners in innovation, isolates them further from the mainstream of development. A human-centred, technology-driven approach to AI must therefore include participation from the whole world. The COVID-19 pandemic already demonstrated how technology can be a major factor in building healthcare and crisis resilience; used fairly, AI likewise has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial junction. It can either enhance human progress or amplify digital risks. Making AI a global good requires more than sophisticated technology; it demands moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through openness, human supervision, and responsible policies will be vital to keeping public trust. Properly guided, AI can make society more resilient, accelerate development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse.’ CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms

Executive Summary:
A video is widely circulating on social media in which Israel’s Prime Minister Benjamin Netanyahu appears to praise India’s Prime Minister Narendra Modi. The viral clip is being shared with the claim that during a speech delivered on February 25, 2026, Netanyahu announced a special aid package for Afghanistan at the request of PM Modi. However, research by CyberPeace found the claim to be false. The research revealed that the circulating video was generated using artificial intelligence. The probe also confirmed that Netanyahu did not make any announcement related to Afghanistan or the Taliban during the speech.
Claim
On March 1, 2026, a social media user shared the viral video on Facebook claiming that Netanyahu praised PM Modi and announced a special assistance package for Afghanistan following India’s request. The links to the post and its archive are provided below, along with a screenshot.

Fact Check:
To verify the claim, we first searched Google using relevant keywords. However, we did not find any credible media reports supporting the claim that Israel had announced such an aid package for Afghanistan. Next, we extracted key frames from the viral video and performed a reverse image search using Google Lens. During this process, we found the original video on the YouTube channel of VERTEX, which had been uploaded on February 25, 2026.

A detailed review of the original video revealed that the viral clip circulating on social media is not part of the original footage. This indicates that the circulating clip has been manipulated and shared with a misleading claim. In the original video, Netanyahu was addressing a special parliamentary session in Jerusalem, where he spoke about the growing trade, strategic cooperation, and strengthening diplomatic relations between India and Israel. He described the partnership between the two democracies as a significant and historic milestone in bilateral relations. Upon carefully listening to the viral clip, we noticed irregularities in the voice and tone, which raised suspicions that it might be AI-generated. We then analyzed the video using the AI detection tool TruthScan. The results indicated that the viral video has approximately a 75% probability of being AI-generated.

Conclusion
Our research found that the viral video was created using artificial intelligence. Moreover, Israel’s Prime Minister Benjamin Netanyahu did not make any announcement regarding Afghanistan or the Taliban during the speech being referenced. The claim circulating on social media is therefore false.