# FactCheck - Viral Video Falsely Linked to Baramati Plane Crash Involving Ajit Pawar
Executive Summary:
A video claiming to show the plane crash that allegedly killed Maharashtra Deputy Chief Minister Ajit Pawar has been widely circulated on social media. The circulation began soon after reports emerged of a tragic aircraft accident in Baramati, Maharashtra, on January 28, 2026, in which Ajit Pawar and five others were reported to have died. The viral video shows a plane crashing to the ground moments after take-off. Social media users have claimed that the footage captures the exact incident in which Ajit Pawar was on board. However, research by CyberPeace has found that this claim is false.
Claim:
An Instagram user shared the video on January 28, 2026, claiming that it showed the plane crash in Maharashtra in which Deputy Chief Minister Ajit Pawar and others allegedly lost their lives. The caption accompanying the video read: “This morning, Deputy CM Ajit Pawar and six others tragically died in a plane crash in Maharashtra.”
Links to the post and its archived version are provided below.

Fact Check:
To verify the authenticity of the viral video, CyberPeace conducted a reverse image search of its keyframes. During this process, the same visuals were found in a video report uploaded to News9 Live’s official YouTube channel on October 23, 2025.

According to the report, the footage shows a plane crash in Venezuela, not India. The incident occurred shortly after a Piper Cheyenne aircraft took off from Paramillo Airport in Táchira, Venezuela. The aircraft crashed within seconds of take-off, killing both occupants on board. The deceased were identified as pilot José Bortone and co-pilot Juan Maldonado. Further confirmation came from a report published on October 22, 2025, by Latin American news outlet El Tiempo. The Spanish-language report also featured the same video visuals and stated that a small aircraft lost control and crashed on the runway at Paramillo Airport in Venezuela, resulting in the deaths of the pilot and co-pilot.
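The keyframe step described above rests on a simple idea: rather than searching every frame of a video, keep only frames that differ substantially from the last frame kept, and run the reverse image search on those. The sketch below is a toy, pure-Python illustration of that selection logic on synthetic grayscale "frames" (nested lists); the frame data and the threshold value are illustrative assumptions, not output from any real video tool.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    total = sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return total / (len(a) * len(a[0]))

def select_keyframes(frames, threshold=10.0):
    """Keep the first frame, plus any frame that differs from the
    most recently kept frame by more than the threshold."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[kept[-1]], frames[i]) > threshold:
            kept.append(i)
    return kept

# Three tiny 2x2 "frames": the second is nearly identical to the first,
# the third is very different and therefore becomes a new keyframe.
frames = [
    [[0, 0], [0, 0]],
    [[1, 0], [0, 0]],          # mean diff 0.25, below threshold: skipped
    [[200, 200], [200, 200]],  # large diff: kept
]
print(select_keyframes(frames))  # [0, 2]
```

In practice a real video would be decoded with a library such as OpenCV and the kept frames uploaded to a reverse image search engine; the code above only shows the frame-difference heuristic itself.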

Conclusion
CyberPeace’s research clearly establishes that the viral video being shared as footage of Ajit Pawar’s alleged plane crash in Baramati is misleading. The video actually shows a plane crash that occurred in Venezuela in October 2025 and has been falsely linked to the tragic incident in India.

Introduction
Fundamentally, artificial intelligence (AI) is the greatest extension of human intelligence. It is the culmination of centuries of logic, reasoning, mathematics, and creativity: machines trained to reflect cognition. However, such intelligence no longer resembles intelligence at all when it is placed in the hands of the irresponsible, the malicious, or the perverse, and unleashed into the wild with minimal safeguards. Instead, it is distorted into a tool of debasement rather than enlightenment.
Recent incidents involving sexually explicit photographs created by AI on social media sites reveal an extremely unsettling reality. When intelligence is detached from accountability, morality, and governance, it corrodes society rather than elevates it. We are seeing a failure of stewardship rather than just a failure of technology.
The Cost of Unchecked Intelligence
The AI chatbot Grok, which operates under Elon Musk’s X (formerly Twitter), is the subject of a debate that goes beyond a single platform or product. The romanticisation of “unfiltered” knowledge and the perilous notion that innovation should come before accountability are signs of a bigger lapse in the digital ecosystem. We have allowed mechanisms that can be used as weapons against human dignity, especially the dignity of women and children, in the name of freedom.
We are no longer discussing artistic expression or experimental AI when a machine can digitally undress women, morph photos, or produce sexualised portrayals of kids with a few keystrokes. We stand in the face of algorithmic violence. Even if the physical touch is absent, the harm caused by it is genuine, long-lasting, and extremely personal.
The Regulatory Red Line
A major inflexion point was reached when the Indian government responded by ordering a thorough technical, procedural, and governance-level audit. It acknowledges that AI systems are not isolated entities. Platforms that use them are not neutral pipes, but intermediaries with responsibilities. The Bharatiya Nyaya Sanhita, the IT Act, the IT Rules 2021, and the possible removal of Section 79 safe-harbour safeguards all make it quite evident that innovation does not confer automatic immunity.
However, the fundamental dilemma cannot be resolved by legislation alone. AI is hailed as a force multiplier for innovation, productivity, and advancement, but when incentives are biased towards engagement, virality, and shock value, its misuse shows how easily intelligence can turn into ugliness. The more provocative the output, the more attention it receives. Profit increases with attention. In this ecology, restraint becomes a business disadvantage.
The Aftermath
Grok’s own acknowledgement that “safeguard lapses” enabled the creation of pictures showing children wearing skimpy attire underscores a troubling reality: safety was not absent due to impossibility, but due to insufficiency. It was always possible to implement sophisticated filtering, more robust monitoring, and stricter oversight. They were simply not prioritised. When a system asserts that “no system is 100% foolproof,” it must also acknowledge that there is no acceptable margin of error when it comes to child protection.
The casual normalisation of such lapses is what is most troubling. By characterising these instances as “isolated cases,” systemic design decisions run the risk of being trivialised. AI systems trained on enormous amounts of human data inherit not only intelligence but also bias, misogyny, and power imbalances.
Conclusion
What is required today is recalibration. Platforms need to shift from reactive compliance to proactive accountability. Safeguards must be incorporated at the architectural level; they cannot be cosmetic or post-facto. Governance must encompass enforced ethical boundaries in addition to terms of service. The idea that “edgy” AI is a sign of advancement must also be rejected by society.
Artificial intelligence never promised freedom under the guise of vulgarity; it promised improvement, support, and augmentation. The fundamental core of intelligence is lost when it is used as a tool for degradation. What remains is a choice between principled innovation and unbridled novelty, between responsibility and spectacle, between intelligence as purpose and intellect as power.
References
https://www.rediff.com/news/report/govt-orders-x-review-of-grok-over-explicit-content/20260103.htm

Introduction
Google is committed to supporting the upcoming elections in India by providing high-quality information to voters, safeguarding platforms from abuse, and helping people navigate AI-generated content. Google will connect voters to helpful information through enhanced features, collaborating with the Election Commission of India (ECI) to provide voting information in both English and Hindi. Emphasis is also placed on showcasing authoritative information on YouTube. YouTube will highlight authoritative news sources and offer context on topics prone to misinformation. YouTube also appends information panels directing viewers to the Election Commission of India's FAQs. This support will help millions of eligible voters navigate the electoral process and ensure a fair and transparent election process.
Key Highlights of Google’s Approach
The step taken by Google will support the democratic process during the upcoming General Election in India. The initiative focuses on three main pillars: disseminating information, tackling misinformation, and navigating AI-generated content. Google is enhancing its Search and YouTube features to provide essential election-related information, including voter registration, polling guidelines, and candidate profiles. Google is also addressing the challenges posed by AI-generated content by offering clarity on content origins, particularly for election-related ads and YouTube videos. Google has strict policies and restrictions regarding who can run election-related advertising on its platforms, including identity verification, pre-certificates, and in-ad disclosures. Additionally, Google is utilising tools and policies like Ads disclosures, content labels on YouTube, and digital watermarking to help users to identify AI-generated content.
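One of the techniques mentioned above, digital watermarking, embeds an invisible signal in generated media so that its origin can later be verified. Google's own watermarking scheme is proprietary and not described here; the sketch below is a generic, minimal least-significant-bit (LSB) watermark on a toy list of pixel values, purely to illustrate the embed-then-extract idea. All names and values are illustrative assumptions.

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least-significant bit of each pixel value.
    Changing only the lowest bit shifts each pixel by at most 1, which is
    visually imperceptible."""
    if len(bits) > len(pixels):
        raise ValueError("not enough pixels to hold the watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | int(bit)
    return out

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return "".join(str(p & 1) for p in pixels[:n])

pixels = [120, 121, 122, 123, 124, 125, 126, 127]
mark = "1011"
stamped = embed_bits(pixels, mark)
print(extract_bits(stamped, len(mark)))  # 1011
```

Real-world schemes for AI-generated content are far more robust (they survive compression, cropping, and re-encoding), but the basic contract is the same: the generator embeds a signal, and a verifier extracts it to label the content.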
Google has joined hands with ECI
The tech giant Google is partnering with the Election Commission of India (ECI) to provide voting information on Google Search in both English and Hindi. YouTube will feature election information panels, including candidate profiles and registration guidelines, ensuring users have access to authoritative sources. Google's recommendation system will display content from trusted publishers on election-related topics. Protecting the integrity of elections is a top priority, and the company is employing advanced AI models and machine learning techniques to identify and remove content that violates its policies at scale. A dedicated team of local experts across major Indian languages is assigned to provide relevant context and ensure swift action against emerging threats. Google is also tightening up who can advertise on its platforms, requiring advertisers to undergo an identity verification process and obtain a pre-certificate from the ECI or authorised entities for each election ad they wish to run.
Tackling Electoral Misinformation
Google is enhancing its platform security measures to prevent misinformation. It is using AI models and human expertise to identify and address policy violations, while stringent verification processes and disclosures are being implemented to maintain user trust.
Collaborations to promote reliable information
Google is supporting Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers working to detect online misinformation, including deepfakes. The project will provide news entities and fact-checkers with essential training in fact-checking methodologies, deepfake detection, and the latest Google tools to streamline verification processes, as stated in Google’s blog post.
Conclusion
Google has taken proactive steps to ensure a secure electoral process during the upcoming general elections in India. These include helping voters navigate AI-generated content, safeguarding its platforms from abuse, and curbing the spread of false information. Google India has built faster and more adaptable enforcement systems with recent advances in its Large Language Models (LLMs), enabling the company to remain nimble and act quickly when new threats emerge. Google is also dedicated to collaborating with government, industry, and civil society to provide voters with reliable and trustworthy online information. This comprehensive strategy to empower voters, safeguard platforms, and combat misinformation is commendable, and it stands to help millions of citizens exercise their democratic rights securely.
References:
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://inc42.com/buzz/following-gemini-row-google-strengthens-checks-on-ai-generated-content-before-elections/#:~:text=In%20an%20effort%20to%20ensure,safeguarding%20its%20platforms%20from%20abuse
- https://www.indiatvnews.com/technology/news/google-introduces-enhanced-tools-for-supporting-elections-in-india-2024-03-12-921096
- https://economictimes.indiatimes.com/news/elections/lok-sabha/india/google-ties-up-with-eci-to-prevent-spread-of-false-information/articleshow/108431021.cms?from=mdr
- https://www.businesstoday.in/technology/news/story/google-joins-hands-with-election-commission-of-india-to-help-voters-via-search-youtube-421112-2024-03-12
- https://indianexpress.com/article/technology/tech-news-technology/google-2024-general-elections-support-9209588/

Introduction
February marks the beginning of Valentine’s Week, the time when we transcend from the season of smog to the season of love. This is a time when young people are more active on social media and dating apps in the hope of finding a partner to celebrate the occasion. Dating apps, in order to capitalise on the occasion, launch special offers and campaigns to attract new users and retain current ones aspiring to find their ideal partner. However, with the growing popularity of online dating, the tactics of cybercriminals have also penetrated this sphere. Scammers are becoming increasingly sophisticated in manipulating individuals on digital platforms, often engaging in scams, identity theft, and financial fraud under the guise of romance. As love fills the air, netizens must stay vigilant and cautious while searching for a connection online, and not fall into a scammer’s trap.
Here Are Some CyberPeace Tips To Avoid Romance Scams
- Recognize Red Flags of Romance Scams:- Online dating has made it easier to connect with people, but it has also become a tool for scammers to exploit the emotions of netizens for financial gain. They create fake profiles, build trust quickly, and then manipulate victims into sending money. Understanding their tactics can help you stay safe.
- Warning Signs of a Romance Scam:- If someone expresses strong feelings too soon, it’s a red flag. Scammers often claim to have fallen in love within days or weeks, despite never meeting in person. They use emotional pressure to create a false sense of connection. Their messages might seem off. Scammers often copy-paste scripted responses, making conversations feel unnatural. Poor grammar, inconsistencies in their stories, or vague answers are warning signs. Asking for money is the biggest red flag. They might have an emergency, a visa issue, or an investment opportunity they want you to help with. No legitimate relationship starts with financial requests.
- Manipulative Tactics Used by Scammers:- Scammers use love bombing to gain trust. They flood you with compliments, calling you their soulmate or destiny. This is meant to make you emotionally attached. They often share fake sob stories, ranging from losing a loved one to facing a medical emergency or being stuck in a foreign country. These are designed to make you feel sorry for them and more willing to help. Some scammers even pretend to be wealthy investors or successful business owners, showing off a fabricated luxury lifestyle in order to appear credible. Eventually, they’ll try to lure you into a fake investment. They create a sense of urgency: whether it’s sending money, investing, or sharing personal details, scammers will push you to act fast. This prevents you from thinking critically or verifying their claims.
- Financial Frauds Linked to Romance Scams:- Romance scams have often led to financial fraud. Victims may be tricked into sending money directly or get roped into elaborate schemes. One common scam is the disappearing date, where someone insists on dining at an expensive restaurant, only to vanish before the bill arrives. Crypto scams are another major concern. Scammers convince victims to invest in fake cryptocurrency platforms, promising huge returns. Once the money is sent, the scammer disappears, leaving the victim with nothing.
- AI & Deepfake Risks in Online Dating:- Advancements in AI have made scams even more convincing. Scammers use AI-generated photos to create flawless, yet fake, profile pictures. These images often lack natural imperfections, making them hard to spot. Deepfake technology is also being used for video calls. Some scammers use pre-recorded AI-generated videos to fake live interactions. If a person’s expressions don’t match their words or their screen glitches oddly, it could be a deepfake.
- How to Stay Safe:-
- Always verify the identities of those who contact you on these sites. A simple reverse image search can reveal if someone’s profile picture is stolen.
- Avoid clicking suspicious links or downloading unknown apps sent by strangers. These can be used to steal your personal information.
- Trust your instincts. If something feels off, it probably is. Stay alert and protect yourself from online romance scams.
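The reverse image search suggested above works because image-matching systems compare compact fingerprints of pictures rather than raw pixels, so a re-uploaded or slightly re-encoded copy of a stolen photo still matches the original. The snippet below illustrates that idea with a toy "average hash" in pure Python on tiny synthetic images; real services use far more robust perceptual hashes, and all data here is made up for illustration.

```python
def average_hash(image):
    """Toy perceptual hash: one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

original = [[10, 200], [220, 15]]
# A slightly re-encoded copy: pixel values shift a little,
# but the hash stays the same.
copy = [[12, 198], [221, 14]]
unrelated = [[200, 10], [15, 220]]

h_orig, h_copy, h_other = map(average_hash, (original, copy, unrelated))
print(hamming(h_orig, h_copy))   # 0  -> likely the same picture
print(hamming(h_orig, h_other))  # 4  -> a different picture
```

A small Hamming distance between hashes suggests the two images are versions of the same photo, which is exactly the signal a reverse image search surfaces when a scammer reuses someone else's picture.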
Best Online Safety Practices
- Prioritize Social Media Privacy:- Review and update your privacy settings regularly. Think before you share and be mindful of who can see your posts/stories. Avoid oversharing personal details.
- Report Suspicious Activities:- Even if a scam attempt doesn’t succeed, report it. The Indian Cyber Crime Coordination Centre's (I4C) 'Report Suspect' feature allows users to flag potential threats, helping prevent cybercrimes.
- Think Before You Click or Download:- Avoid clicking on unknown links or downloading attachments from unverified sources. These can be traps leading to phishing scams or malware attacks.
- Protect Your Personal Information:- Be cautious with whom and how you share your sensitive details online. Cybercriminals exploit even the smallest data points to orchestrate fraud.