#FactCheck - 2013 Aircraft Video Misleadingly Shared as Ajit Pawar’s Plane Accident
Executive Summary:
A video showing poor runway visibility from inside an aircraft cockpit is being widely shared on social media, linked to an alleged aircraft accident involving Maharashtra Deputy Chief Minister Ajit Pawar in Baramati on January 28, 2025. Users claim that the footage captures the final moments before the crash, with the runway visibility disappearing just seconds before landing. However, research conducted by CyberPeace found the viral claim to be misleading: the video has no connection to any aircraft accident involving Deputy Chief Minister Ajit Pawar. In reality, the video dates back to 2013 and shows a pilot attempting to land an aircraft amid heavy rain. During the approach, the runway briefly disappears from the pilot’s view, prompting the pilot to abort the landing and execute a go-around. The aircraft later lands safely after weather conditions improve.
Claim
An Instagram user shared the viral video on January 29, 2025, claiming: “Baramati plane crash: video of the aircraft accident surfaces. Runway disappears just three seconds before landing.” (The link to the post, its archived version, and screenshots are provided below.)

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. The search led us to the same video uploaded on a YouTube channel named douglesso, which was published on June 12, 2013. (Footage link and screenshot available below.)
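Reverse image search engines match frames by comparing compact perceptual fingerprints rather than raw pixels. As a toy illustration of that idea (not the actual mechanism behind Google Lens, and unrelated to any tooling used in this fact check), here is a minimal difference hash (dHash) in pure Python: frames of the same scene produce identical or nearby hashes, while unrelated frames land far apart.

```python
# Toy difference hash (dHash), a common perceptual-hashing idea behind
# near-duplicate image matching. Illustrative sketch only; the pixel
# grids below are hand-made stand-ins for real video keyframes.

def dhash(pixels):
    """Hash a 9x8 grayscale grid: each bit records whether a pixel is
    brighter than its right-hand neighbour (8 rows x 8 comparisons = 64 bits)."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means near-duplicate images."""
    return bin(a ^ b).count("1")

# An alternating bright/dark row, repeated 8 times, stands in for a frame.
frame = [[10, 200, 30, 180, 50, 160, 70, 140, 90]] * 8
# The same frame, uniformly brightened: every comparison is preserved.
brighter = [[p + 5 for p in row] for row in frame]
# A smoothly fading gradient: a completely different brightness pattern.
unrelated = [[200, 190, 180, 170, 160, 150, 140, 130, 120]] * 8

assert hamming(dhash(frame), dhash(brighter)) == 0   # near-duplicate
assert hamming(dhash(frame), dhash(unrelated)) == 32  # far apart
```

Uniform brightness or compression changes barely move the hash, which is why a keyframe from a re-uploaded copy of a 2013 video can still be traced back to the original upload.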

Further research led us to a report published by the American media website CNET, which featured the same visual. According to the report, the video shows a Boeing Business Jet attempting to land during heavy rainfall. The aircraft was conducting a CAT I Instrument Landing System (ILS) approach when a sudden downpour drastically reduced visibility at decision height. As the runway briefly disappeared from view, the pilots aborted the landing and carried out a go-around. The aircraft later landed safely once weather conditions improved. (The link to the CNET report and its screenshot are provided below.)
- https://www.cnet.com/culture/this-is-what-happens-when-a-plane-is-landing-and-the-runway-disappears/

Conclusion
Our research confirms that the video circulating on social media is unrelated to any recent aircraft accident involving Maharashtra Deputy Chief Minister Ajit Pawar. The clip is an old video from 2013, which is now being shared with a false and misleading claim.
Related Blogs
Introduction
The AI Action Summit is a global forum that brings together world leaders, policymakers, technology experts, and industry representatives to discuss AI governance, ethics, and AI’s role in society. This year’s week-long Paris AI Action Summit officially concluded on 11 February 2025, bringing together industry experts, policymakers, and other dignitaries to discuss Artificial Intelligence and its challenges. The event was co-chaired by Indian Prime Minister Narendra Modi and French President Emmanuel Macron. Alongside the summit, the Indian delegation actively engaged in the 2nd India-France AI Policy Roundtable, an official side event, and the 14th India-France CEOs Forum. These discussions spanned diverse sectors, including defence, aerospace, and technology.
Prime Minister Modi’s Address
During the AI Action Summit in Paris, Prime Minister Narendra Modi drew attention to the revolutionary effect of AI on politics, the economy, security, and society. Stressing the need for international cooperation, he advocated strong governance frameworks to combat AI-related risks and thereby build public confidence in new technologies. He also acknowledged the efforts needed on cybersecurity issues such as deepfakes and disinformation.
Democratising AI and sharing its benefits, particularly with the Global South, not only aligns with the Sustainable Development Goals (SDGs) but also affirms India’s resolve to share expertise and best practices. India’s remarkable feat of creating a Digital Public Infrastructure that caters to a population of 1.4 billion through open and accessible technology was highlighted as well.
Among the key announcements, India revealed its plans to create its own Large Language Model (LLM) that reflects the country’s linguistic diversity, strengthening its AI aspirations. Further, India will host the next AI Action Summit, reaffirming its position in international AI leadership. The Prime Minister also welcomed France’s initiatives, such as the launch of the “AI Foundation” and the “Council for Sustainable AI”, initiated by President Emmanuel Macron. He emphasised the need to extend the Global Partnership on AI and make it more representative and inclusive, so that Global South voices are genuinely incorporated into AI innovation and governance.
Other Perspectives
While 58 countries, including India, France, and China, signed the international agreement on a more open, inclusive, sustainable, and ethical approach to AI development, the UK and the US declined to sign at the summit, citing concerns over global governance and national security. The former pointed to insufficient detail on how global AI governance would be established and on AI’s effect on national security, while the latter voiced reservations about overly broad AI regulations that could hamper a transformative industry. Meanwhile, the US is pressing ahead with ‘Stargate’, its $500 billion AI infrastructure project, in partnership with OpenAI, SoftBank, and Oracle.
CyberPeace Insights
The Summit gained added significance against the backdrop of the release of platforms such as DeepSeek R1, China’s AI assistant comparable to OpenAI’s ChatGPT. On release, it became the top-rated free application on Apple’s App Store and sent technology stocks tumbling. Moreover, investors worldwide took note that the model was built for roughly $5 million while other AI companies spent far more (despite the restrictions imposed by chip export controls on China). This breakthrough challenges the conventional notion that massive funding is a prerequisite for innovation, offering hope for India’s burgeoning AI ecosystem. With the IndiaAI Mission and fewer geopolitical restrictions, India stands at a pivotal moment to drive responsible AI advancements.
References:
- https://www.mea.gov.in/press-releases.htm?dtl/39023/Prime_Minister_cochairs_AI_Action_Summit_in_Paris_February_11_2025
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-stargate-trumps-500-billion-ai-project-9793165/
- https://pib.gov.in/PressReleasePage.aspx?PRID=2102056
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101947
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101896
- https://www.timesnownews.com/technology-science/uk-and-us-decline-to-sign-global-ai-agreement-at-paris-ai-action-summit-here-is-why-article-118164497
- https://www.thehindu.com/sci-tech/technology/india-57-others-sign-paris-joint-statement-on-inclusive-sustainable-ai/article69207937.ece

Introduction
The Telecom Regulatory Authority of India (TRAI) has directed all telcos to set up detection systems based on Artificial Intelligence and Machine Learning (AI/ML) technologies in order to identify and control spam calls and text messages from unregistered telemarketers (UTMs).
TRAI’s Directions to Telcos
The telecom regulator, TRAI, has directed all Access Providers to deploy systems based on Artificial Intelligence and Machine Learning to detect, identify, and act against senders of Unsolicited Commercial Communication (UCC) who are not registered in accordance with the provisions of the Telecom Commercial Communication Customer Preference Regulations, 2018 (TCCCPR-2018). Unregistered Telemarketers (UTMs) are entities that do not register with Access Providers and use 10-digit mobile numbers to send commercial communications via SMS or calls.
TRAI’s Steps to Curb Unsolicited Commercial Communication
TRAI has taken several initiatives to reduce Unsolicited Commercial Communication (UCC), a major source of annoyance for the public. These initiatives have resulted in fewer complaints against Registered Telemarketers (RTMs). Despite the TSPs’ efforts, however, UCC from Unregistered Telemarketers (UTMs) continues. These UTMs sometimes use messages with bogus URLs and phone numbers to trick customers into revealing crucial information, leading to financial loss.
To detect, identify, and act against all Unregistered Telemarketers (UTMs), TRAI has mandated that Access Service Providers implement a UCC Detect System with the necessary functionalities within the framework of the Telecom Commercial Communication Customer Preference Regulations, 2018.
Access Service Providers have implemented such detection systems according to their applicability and practicality. However, because UTMs constantly devise new strategies for sending unwanted communications, the present UCC detection systems deployed by Access Service Providers cannot always detect such UCC.
TRAI has also directed telecom providers to set up a digital platform for obtaining customer consent, in order to curb promotional calls and messages.

TRAI has urged businesses such as banks, insurance companies, and financial institutions to re-verify their SMS content templates with telcos within two weeks. It has also directed telecom companies to stop the misuse of commercial messaging templates within the next 45 days.
The telecom regulator has also instructed operators to limit the number of variables in a content template to three. If a business intends to use more than three variables in a content template for communicating with its users, this should be permitted only after examining a sample message along with adequate justification.
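In the DLT registration system that underpins commercial SMS in India, content templates commonly mark their variable parts with the `{#var#}` placeholder. Assuming that placeholder convention, a minimal, hypothetical sketch of the three-variable rule described above might look like this (real operator-side validation is, of course, far more involved):

```python
import re

# Hypothetical sketch of the "at most three variables" template check.
# Assumes the {#var#} placeholder convention used in DLT SMS templates;
# the sample templates below are invented for illustration.

MAX_VARIABLES = 3

def count_variables(template: str) -> int:
    """Count {#var#} placeholders in an SMS content template."""
    return len(re.findall(r"\{#var#\}", template))

def needs_manual_review(template: str) -> bool:
    """Templates exceeding three variables require extra scrutiny
    (sample message plus justification) before approval."""
    return count_variables(template) > MAX_VARIABLES

ok = "Dear {#var#}, your OTP is {#var#}. Valid for {#var#} minutes."
flagged = "Dear {#var#}, order {#var#} of {#var#} ships on {#var#}."

assert not needs_manual_review(ok)      # 3 variables: allowed outright
assert needs_manual_review(flagged)     # 4 variables: extra review
```

Capping free-form variables limits how much arbitrary text (such as phishing URLs) can be smuggled into an approved template.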
To ensure consistency in UCC Detect System implementations, TRAI has directed all Access Providers to deploy UCC Detect Systems based on Artificial Intelligence and Machine Learning that are capable of constantly evolving to deal with new signatures, patterns, and techniques used by UTMs.
Access Providers have also been directed to share intelligence with one another via the DLT platform, and to ensure that the UCC Detect System catches senders who send unsolicited commercial communications in bulk and do not comply with the requirements. All Access Providers must follow the instructions and report the actions taken within thirty days.
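At its core, the kind of ML-based scoring a UCC detect system builds on can be illustrated with a naive Bayes text classifier trained on labelled messages. This is a heavily simplified, hypothetical sketch; real carrier deployments use far richer signals (sender behaviour, volume patterns, URLs) and continuous retraining, and the training messages below are invented:

```python
from collections import Counter
import math

# Minimal naive Bayes spam scorer: a toy stand-in for the AI/ML-based
# UCC detection TRAI describes. Training data is invented for illustration.

def tokenize(text):
    return text.lower().split()

class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.word_counts[label].update(tokenize(text))
        self.msg_counts[label] += 1

    def score(self, text):
        """Log-odds that the message is spam; positive means spam-leaning."""
        log_odds = math.log((self.msg_counts["spam"] + 1) / (self.msg_counts["ham"] + 1))
        spam_total = sum(self.word_counts["spam"].values())
        ham_total = sum(self.word_counts["ham"].values())
        for word in tokenize(text):
            # Laplace smoothing so unseen words do not zero out the score.
            spam_p = (self.word_counts["spam"][word] + 1) / (spam_total + 1)
            ham_p = (self.word_counts["ham"][word] + 1) / (ham_total + 1)
            log_odds += math.log(spam_p / ham_p)
        return log_odds

f = NaiveBayesSpamFilter()
f.train("win cash prize click this link now", "spam")
f.train("exclusive loan offer click to claim prize", "spam")
f.train("your electricity bill is due tomorrow", "ham")
f.train("meeting moved to 4 pm see you there", "ham")

assert f.score("claim your cash prize now click link") > 0  # spam-leaning
assert f.score("your bill is due see you tomorrow") < 0     # ham-leaning
```

The "constantly evolving" requirement maps onto retraining such a model as UTMs change wording, which is also why static keyword blocklists fall behind.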
The move by TRAI aims to curb such menacing calls, as the number of scam cases is rising and new scam patterns keep emerging. Recently, a Twitter user reported receiving an automated call from +91 96681 9555 with the message “This call is from Delhi Police.” The call asked her to stay on the line because some of her documents needed to be collected. The caller then claimed to be a sub-inspector at the Kirti Nagar police station in New Delhi and asked whether she had recently misplaced her Aadhaar card, PAN card, or ATM card, to which she replied ‘no’. Still posing as a police officer, the scammer then asked her to confirm the last four digits of her card, claiming a card with her name on it had been found. Many other users tweeted about similar calls.

Conclusion
TRAI has directed the telcos to check calls and messages from unregistered numbers. This step will curb pesky calls and messages and help catch fraudsters who are not registered under the regulations. Unregistered senders sometimes send fraudulent links, and through such fraudulent calls and messages try to extract customers’ personal information, resulting in financial losses.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of Artificial Intelligence to create fake explicit images and videos for sextortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Meanwhile, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, along with collaboration with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
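One widely used building block for the detection-and-removal measures listed above is hash matching: platforms compare uploads against a shared database of hashes of previously identified abusive content. The sketch below uses exact (cryptographic) hashes for simplicity; production systems also rely on perceptual hashes so that re-encoded copies still match, and all file contents here are invented placeholders.

```python
import hashlib

# Minimal hash-matching sketch: uploads whose SHA-256 digest appears in a
# database of known abusive content are blocked. Illustrative only; the
# byte strings stand in for real media files.

known_bad_hashes = set()

def register_known_content(data: bytes) -> None:
    """Add previously identified abusive content to the shared database."""
    known_bad_hashes.add(hashlib.sha256(data).hexdigest())

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the known-content database."""
    return hashlib.sha256(upload).hexdigest() in known_bad_hashes

register_known_content(b"previously-identified abusive clip")

assert should_block(b"previously-identified abusive clip")   # exact copy: blocked
assert not should_block(b"harmless holiday photo")           # unknown: allowed
```

Because only hashes are shared, platforms can cooperate on removal without redistributing the abusive material itself, which is what makes cross-platform intelligence sharing practical.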

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.