#FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
26th November 2024 marked a historic milestone for India as TakeMe2Space, a Hyderabad-based space technology firm, announced the forthcoming launch of MOI-TD (My Orbital Infrastructure - Technology Demonstrator), India's first AI lab in space. According to the company, the mission will demonstrate real-time data processing in orbit, making space research more affordable and accessible. The launch is scheduled for mid-December 2024 aboard ISRO's PSLV-C60 launch vehicle, and it represents a transformative phase for innovation and exploration at the intersection of India's AI and space technology sectors.
The Vision Behind the Initiative
The AI Laboratory in orbit is designed to enable autonomous decision-making, revolutionising satellite exploration and advancing cutting-edge space research. It signifies a major step toward establishing space-based data centres, paving the way for computing capabilities that will support a variety of applications.
While space-based data centres currently cost 10–15 times more than terrestrial alternatives, this initiative leverages high-intensity solar power in orbit to significantly reduce energy consumption. Training AI models in space could lower energy costs by up to 95% and reduce carbon emissions at least tenfold, even when factoring in launch emissions. This positions MOI-TD as an eco-friendly and cost-efficient solution.
Technological Innovations and Future Impact of AI in Space
The MOI-TD laboratory comprises control software and hardware components, including reaction wheels, magnetometers, an advanced onboard computer, and an AI accelerator. The satellite also features flexible solar cells that could power future satellites. It will enable real-time processing of space data, pattern recognition, and autonomous decision-making, addressing latency issues to ensure faster and more efficient data analysis, while robust hardware designs tackle the challenges posed by radiation and extreme space environments. Advanced sensor integration will further enhance data collection, facilitating AI model training and validation.
These innovations drive key applications with transformative potential. Users will access the satellite platform through OrbitLaw, a web-based console that lets them upload AI models for climate monitoring, disaster prediction, urban growth analysis and custom Earth observation use cases. TakeMe2Space has already partnered with a leading Malaysian university and an Indian school (grades 9 and 10) to showcase the satellite’s potential for democratising space research.
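To make the workflow concrete, below is a purely hypothetical sketch of what submitting a trained model to a web-based satellite console might look like. TakeMe2Space has not published a public API specification in the sources cited here, so the endpoint URL, token, and field names are invented placeholders and not the actual OrbitLaw interface.

```python
# Hypothetical illustration only: the endpoint, token, and field names are
# invented placeholders, not TakeMe2Space's actual OrbitLaw interface.
import requests

CONSOLE_URL = "https://orbitlaw.example/api/v1/models"  # placeholder endpoint
API_TOKEN = "YOUR_ACCESS_TOKEN"                         # placeholder credential


def upload_model(model_path: str, task: str) -> None:
    """Upload a trained AI model for on-orbit execution (illustrative only)."""
    with open(model_path, "rb") as model_file:
        response = requests.post(
            CONSOLE_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"model": model_file},
            data={"task": task},  # e.g. "climate-monitoring"
            timeout=60,
        )
    response.raise_for_status()
    print("Upload accepted:", response.json())


if __name__ == "__main__":
    upload_model("flood_detector.onnx", "disaster-prediction")
```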
Future Prospects and India’s Global Leadership in AI and Space Research
As per Stanford’s HAI Global AI Vibrancy rankings, India secured 4th place owing to its R&D leadership, vibrant AI ecosystem, and public engagement with AI. This AI laboratory is a further step in advancing India’s role in developing regulatory frameworks for ethical AI use, fostering robust public-private partnerships, and promoting international cooperation to establish global standards for AI applications.
Cost-effectiveness and technological expertise are among India’s unique strengths; they could position the country as a key player in the global AI and space research arena and draw favourable comparisons with initiatives by NASA, ESA, and private entities like SpaceX. By prioritising ethical and sustainable practices and fostering collaboration, India can lead in shaping the future of AI-driven space exploration.
Conclusion
India’s first AI laboratory in space, MOI-TD, represents a transformative milestone in integrating AI with space technology. This ambitious project promises to advance autonomous decision-making, enhance satellite exploration, and democratise space research. At the same time, factors such as data security, international collaboration and sustainability should be taken into account while pursuing such innovations. With this, India can establish itself as a leader in space research and AI innovation, setting new global standards while inspiring a future where technology expands humanity’s frontiers and enriches life on Earth.
References
- https://www.ptinews.com/story/national/start-up-to-launch-ai-lab-in-space-in-december/2017534
- https://economictimes.indiatimes.com/tech/startups/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/articleshow/115701888.cms?from=mdr
- https://www.ibm.com/think/news/data-centers-space
- https://cio.economictimes.indiatimes.com/amp/news/next-gen-technologies/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/115718230

Introduction:
Welcome to the second edition of our Digital Forensics blog series. In the previous blog, we discussed what digital forensics is, the process it follows, the tools involved, and the challenges faced in the field, and we looked at what the future holds for digital forensics. Today, we will explore the differences between three similar-sounding terms that differ significantly in practice: copying, cloning and imaging.
In digital forensics, the preservation and analysis of electronic evidence are central to investigations and legal proceedings. One of the fundamental tasks in this domain is replicating data and devices without compromising the integrity of the original evidence.
Three primary techniques -- copying, cloning, and imaging -- are used for this purpose. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will discuss the importance of each technique, its applications, and why imaging is considered the best option for forensic investigations.
Copying
Copying means duplicating data or files from one location to another, typically using standard copy commands. When dealing with evidence, however, a standard copy alone is rarely sufficient: it can alter metadata (such as timestamps) and does not capture hidden or deleted data. A short sketch illustrating this follows the characteristics below.
The characteristics of copying include:
- Speed: Copying is simpler and faster compared to cloning or imaging.
- Risk: The metadata might be altered, and not all the data might be captured.
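To make the risk concrete, here is a minimal Python sketch (the file names are placeholders) showing how an ordinary copy changes metadata such as the modification timestamp, while a metadata-preserving copy retains it; neither variant captures deleted files or slack space.

```python
# Minimal sketch: an ordinary copy vs. a metadata-preserving copy.
# The file names below are placeholders for illustration.
import os
import shutil

SOURCE = "evidence.docx"        # placeholder: original file
PLAIN_COPY = "copy_plain.docx"
META_COPY = "copy_meta.docx"

shutil.copy(SOURCE, PLAIN_COPY)   # copies content and permissions, not timestamps
shutil.copy2(SOURCE, META_COPY)   # also attempts to preserve timestamps

for path in (SOURCE, PLAIN_COPY, META_COPY):
    info = os.stat(path)
    print(f"{path}: size={info.st_size} bytes, modified={info.st_mtime}")

# The plain copy's modification time reflects when the copy was made, not when
# the original was last modified -- one way copying silently alters metadata.
# Neither copy captures deleted files, unallocated space, or slack space.
```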
Cloning
Cloning is the process of transferring the entire contents of a hard drive or other storage device onto another storage device. It captures the active data as well as the unallocated space and hidden partitions, thereby preserving the whole structure of the original device. Cloning is generally performed at the sector level, and clones can be used as the working copy of a device. A simplified cloning sketch follows the characteristics below.
Characteristics of cloning:
- Bit-for-bit replication: Cloning keeps the exact content and the whole structure of the original device.
- Use cases: Cloning is used when the original device must be kept intact for further examination or legal proceedings.
- Time consuming: Cloning usually takes longer than simple copying since it replicates the entire device, though the time depends on factors such as the size of the storage device, the speed of the devices involved, and the cloning method.
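For illustration only, the sketch below shows the idea of sector-level, bit-for-bit duplication, assuming Linux block-device paths (placeholders here, and requiring elevated privileges); real investigations use hardware write blockers and dedicated cloning tools rather than a hand-rolled script.

```python
# Simplified sketch of bit-for-bit cloning between two block devices.
# Device paths are placeholders; running this requires root privileges and a
# destination at least as large as the source.
SOURCE_DEVICE = "/dev/sdb"     # placeholder: original evidence drive
TARGET_DEVICE = "/dev/sdc"     # placeholder: destination drive
BLOCK_SIZE = 4 * 1024 * 1024   # copy in 4 MiB chunks


def clone_device(source: str, target: str) -> int:
    """Copy every byte from source to target and return the byte count."""
    total = 0
    with open(source, "rb") as src, open(target, "wb") as dst:
        while True:
            chunk = src.read(BLOCK_SIZE)
            if not chunk:          # end of device reached
                break
            dst.write(chunk)
            total += len(chunk)
    return total


if __name__ == "__main__":
    written = clone_device(SOURCE_DEVICE, TARGET_DEVICE)
    print(f"Cloned {written} bytes, including unallocated and hidden areas.")
```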
Imaging:
Imaging is the process of creating a forensic image of a storage device. A forensic image is a bit-for-bit copy of every piece of data on the source device, including allocated space, unallocated space, and slack space.
The image is then used for analysis and investigation while the original evidence is left untouched. Unlike cloning, which produces working copies, forensic images are not intended for regular use as working copies. A minimal imaging sketch follows the characteristics below.
Characteristics of Imaging:
- Integrity: Imaging ensures the integrity and authenticity of the evidence produced.
- Flexibility: A forensic image can be mounted as a virtual drive, allowing the data to be analysed without affecting the original evidence.
- Metadata: Imaging captures the metadata associated with the data, supporting forensic analysis.
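As an illustration, the following sketch (with placeholder paths, again assuming a Linux block device and elevated privileges) writes a raw image of a drive while computing a SHA-256 hash on the fly; in practice, examiners typically use established imagers such as dd or dcfldd that also record hashes and logs.

```python
# Minimal sketch of forensic imaging: copy a device into a raw image file and
# hash the data as it is read, so the image's integrity can be verified later.
import hashlib

SOURCE_DEVICE = "/dev/sdb"   # placeholder: evidence drive
IMAGE_FILE = "evidence.dd"   # placeholder: raw image file
BLOCK_SIZE = 4 * 1024 * 1024


def create_image(source: str, image_path: str) -> str:
    """Write a raw image of the source device and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(source, "rb") as src, open(image_path, "wb") as img:
        while True:
            chunk = src.read(BLOCK_SIZE)
            if not chunk:
                break
            img.write(chunk)
            digest.update(chunk)   # hash computed while imaging
    return digest.hexdigest()


if __name__ == "__main__":
    image_hash = create_image(SOURCE_DEVICE, IMAGE_FILE)
    print("Image SHA-256:", image_hash)  # record this value for later verification
```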
Key Differences
- Purpose: Copying is suited to everyday use but not to forensic investigations that require data integrity, whereas cloning and imaging are designed for forensic preservation.
- Depth of Replication: Cloning and imaging capture the entire storage device, including hidden, unallocated, and deleted data, whereas copying may miss crucial forensic data.
- Data Integrity: Imaging and cloning preserve the integrity of the original evidence, a critical aspect of forensic investigations, making them suitable for legal and forensic use (see the verification sketch after this list).
- Forensic Soundness: Imaging is considered the best in digital forensics due to its comprehensive and non-invasive nature.
- Output: Cloning generally writes from one hard disk directly to another, whereas imaging creates a compressed file that contains a snapshot of the entire hard drive or specific partitions.
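To show how that integrity is typically demonstrated, here is a short, illustrative sketch that hashes both the original device and the forensic image and compares the digests; matching hashes indicate the image is an exact duplicate. The paths are placeholders.

```python
# Illustrative integrity check: compare the hash of the original device with
# the hash of its forensic image. Matching digests show an exact duplicate.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024


def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file or block device."""
    digest = hashlib.sha256()
    with open(path, "rb") as stream:
        while True:
            chunk = stream.read(BLOCK_SIZE)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    original_hash = sha256_of("/dev/sdb")    # placeholder: evidence drive
    image_hash = sha256_of("evidence.dd")    # placeholder: forensic image
    print("Hashes match:", original_hash == image_hash)
```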
Conclusion
In summary, copying, cloning, and imaging all involve duplicating data or storage devices, but they differ in ways that matter greatly in digital forensics. For forensic investigations, imaging is the preferred approach because it accurately preserves the state of the evidence for analysis and legal use. It is therefore essential for forensic investigators to understand these differences so that they can rely on authentic, uncontaminated digital evidence for their investigations and legal arguments.

AI-generated content has been taking up space in the ever-changing dynamics of today's tech landscape. Generative AI has emerged as a powerful tool that has enabled the creation of hyper-realistic audio, video, and images. While advantageous, this ability has some downsides, too, particularly in content authenticity and manipulation.
The ethical, psychological and social harms of such content have become evident over the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of an individual’s privacy; it can have severe consequences for their professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges that this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). The technology is used to alter a person’s identity, usually in the form of a “face swap”, where the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source. In videos, identities can be substituted by way of replacement or reenactment.
Such superimposition can produce highly realistic content, including fake nudes. Creating a deepfake is no longer a costly endeavour: it requires a Graphics Processing Unit (GPU), software that is free, open-source, and easy to download, and basic graphics-editing and audio-dubbing skills. Common tools include DeepFaceLab and FaceSwap, both public and open source, supported by thousands of users who actively contribute to the evolution of these software packages and models.
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate definitions governing AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still at an early stage.
- Current consent-based and harassment laws do not adequately cover AI-generated nudes.
- Proving intent and identifying perpetrators in digital crimes remains a challenge that is yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has the Online Safety Bill, the EU has the AI Act, the US has federal laws such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and related issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating for clear and precise definitions for the consent frameworks and instituting high penalties for AI-based offences, particularly those which are aimed at sexually explicit material.
- Advocate for global cooperation and collaborations by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Strengthen platform accountability by requiring stricter responsibility for the detection and removal of harmful AI-generated content. Platforms should introduce strong screening mechanisms to counter the large influx of harmful content.
- Run public campaigns that spread awareness and educate users about their rights and the resources available to them if they are targeted by such an act.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/