#FactCheck: Viral Video Claims Pakistan Shot Down an Indian Air Force MiG-29 Fighter Jet
Executive Summary
Recent claims circulating on social media allege that an Indian Air Force MiG-29 fighter jet was shot down by Pakistani forces during "Operation Sindoor." These reports suggest the incident involved a jet crash attributed to hostile action. However, these assertions have been officially refuted. No credible evidence supports the existence of such an operation or the downing of an Indian aircraft as described. The Indian Air Force has not confirmed any such event, and the claim appears to be misinformation.

Claim
A social media rumour has been circulating, suggesting that an Indian Air Force MiG-29 fighter jet was shot down by the Pakistan Air Force during "Operation Sindoor." The claim is accompanied by images purported to show the wreckage of the aircraft.

Fact Check
The social media posts have falsely claimed that the Pakistan Air Force shot down an Indian Air Force MiG-29 during "Operation Sindoor." This claim has been confirmed to be untrue. The image being circulated is not related to any recent IAF operations and has previously been used in unrelated contexts. The content being shared is misleading and does not reflect any verified incident involving the Indian Air Force.

After extracting key frames from the video and performing reverse image searches, we traced the footage to its original post, first published in 2024 and covered in news articles from The Hindu and The Times of India.
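Reverse searches of this kind usually start by weeding out near-identical frames so only distinct scenes need to be checked. Below is a minimal, illustrative sketch of that frame-comparison idea using a simple average hash over synthetic grayscale pixel grids; the function names, frame data, and hash scheme are assumptions for illustration, not the actual tools used in this fact-check.

```python
# Illustrative sketch: group near-duplicate video frames with an
# "average hash" before sending distinct frames to a reverse image search.
# All frame data below is synthetic, for demonstration only.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the frame mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two nearly identical 4x4 "frames" and one very different frame.
frame_a = [[10, 12, 200, 210], [11, 13, 199, 205],
           [9, 14, 201, 208], [10, 11, 198, 207]]
frame_b = [[11, 12, 198, 211], [10, 13, 200, 204],
           [9, 15, 202, 207], [10, 12, 199, 206]]
frame_c = [[200, 201, 10, 11], [199, 202, 12, 9],
           [198, 200, 11, 10], [201, 199, 13, 12]]

h_a, h_b, h_c = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming(h_a, h_b))  # near-duplicate frames: 0
print(hamming(h_a, h_c))  # different scenes: 16
```

In practice, fact-checkers use libraries such as OpenCV for frame extraction and perceptual-hash tools for comparison; the toy hash above only conveys the underlying idea.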
That reporting shows a MiG-29 fighter jet of the Indian Air Force (IAF), engaged in a routine training mission, crashed near Barmer, Rajasthan, on the evening of September 2, 2024. The pilot ejected safely and escaped unscathed. The footage therefore predates and is unrelated to "Operation Sindoor", and the claim is false, an act of spreading misinformation.

Conclusion
The claims regarding the downing of an Indian Air Force MiG-29 during "Operation Sindoor" are unfounded and lack any credible verification. The image being circulated is outdated and unrelated to current IAF operations. There has been no official confirmation of such an incident, and the narrative appears to be misleading. People are advised to rely on verified sources for accurate information regarding defence matters.
- Claim: Pakistan Shot Down an Indian Fighter Jet, MiG-29
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Regulatory agencies across Europe have stepped up their monitoring of digital communication platforms as Artificial Intelligence becomes pervasive in the digital domain. Messaging services have evolved into more than just messaging systems; they now serve as gateways for AI services, business tools, and digital marketplaces. In light of this evolution, Italy's competition authority has taken action against Meta Platforms, ordering Meta to cease practices on WhatsApp that are deemed to restrict other companies' ability to offer AI-based chatbots. The action highlights concerns about gatekeeping power, market foreclosure, and innovation suppression. The proceeding also raises questions about how competition law applies to dominant digital platforms that leverage their ecosystems to promote their own AI products to the detriment of competitors.
Background of the Case
In December 2025, Italy’s competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), ordered Meta Platforms to suspend certain contractual terms governing WhatsApp. These terms allegedly prevented or restricted the operation of third-party AI chatbots on WhatsApp’s platform.
The decision was issued as an interim measure during an ongoing antitrust investigation. According to the AGCM, the disputed terms risked excluding competing AI chatbot providers from accessing a critical digital channel, thereby distorting competition and harming consumer choice.
Why WhatsApp Matters as a Digital Gateway
WhatsApp occupies a unique position in the European digital landscape. With hundreds of millions of users across the European Union, it is an integral part of the communication infrastructure linking individual consumers, companies, and their service providers. AI chatbot developers depend heavily on WhatsApp because it lets them connect with consumers directly and in real time, which is critical to the success of their business offerings.
In the Italian regulator's view, a corporation that controls access to such a popular platform wields tremendous influence over innovation in that market, because it effectively operates as a gatekeeper between the company creating an innovative service and the consumer using it. If Meta is permitted to block competing AI chatbot developers while promoting its own offerings, those developers will likely be unable to market and distribute their products at sufficient scale to remain competitive.
Alleged Abuse of Dominant Position
Under EU and national competition law, companies holding a dominant market position bear a special responsibility not to distort competition. The AGCM’s concern is that Meta may have abused WhatsApp’s dominance by:
- Restricting market access for rival AI chatbot providers
- Limiting technical development by preventing interoperability
- Strengthening Meta’s own AI ecosystem at the expense of competitors
Such conduct, if proven, could amount to an abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). Importantly, the authority emphasised that even contractual terms—rather than explicit bans—can have exclusionary effects when imposed by a dominant platform.
Meta’s Response and Infrastructure Argument
Meta has openly condemned the Italian ruling as “fundamentally flawed,” arguing that third-party AI chatbots impose a major economic burden on its infrastructure and risk degrading WhatsApp's performance, safety, and user experience.
Although the protection of infrastructure is a valid issue of concern, competition authorities commonly look at whether the justifications for such restrictions are appropriate and non-discriminatory. One of the principal legal issues is whether the restrictions imposed by Meta were applied in a uniform manner or whether they were selectively imposed in favour of Meta's AI services. If the restrictions are asymmetrical in application, they may be viewed as anti-competitive rather than as legitimate technical safeguards.
Link to the EU’s Digital Markets Framework
The Italian case fits into the EU's wider effort to regulate large technology companies through prior (ex-ante) regulation under the Digital Markets Act (DMA). The DMA imposes obligations on designated gatekeepers to give third parties access to their core platform services on equitable, interoperable, and non-discriminatory terms.
While the Italian case was brought under Italian competition law, its philosophy is consistent with the DMA's: dominant digital platforms should not use their control over core products and services to prevent other companies from innovating. EU national regulators are increasingly willing to act swiftly through interim measures rather than wait years for final decisions.
Implications for AI Developers and Platforms
For developers of AI-based chatbots, the Italian order signals that competitive access to consumers via messaging services is a priority for regulatory bodies. It also serves as a warning to the large incumbents integrating AI into their established messaging platforms that they will not be shielded from competition law.
Additionally, the overall case showcases the growing consensus amongst regulatory agencies regarding the role of competition in the development of AI. If a handful of large companies are allowed to control both the infrastructure and the AI technology being operated on top of that infrastructure, the result will likely be the development of closed ecosystems that eliminate or greatly reduce the potential for technology diversity.
Conclusion
Italy's move against Meta highlights a significant intersection between competition law and artificial intelligence. By targeting WhatsApp's restrictive terms, the Italian antitrust authority has reinforced the principle that digital gatekeepers cannot use contractual methods to block competitors' access. As AI becomes a larger part of everyday digital services, regulatory bodies will likely continue to increase their scrutiny of platform behaviour. The outcome of this investigation will shape not just Meta's AI strategy but also set a baseline for how future European regulators balance innovation, competition, and consumer choice in an increasingly AI-driven digital marketplace.
References
- https://www.reuters.com/sustainability/boards-policy-regulation/italy-watchdog-orders-meta-halt-whatsapp-terms-barring-rival-ai-chatbots-2025-12-24/
- https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/
- https://www.communicationstoday.co.in/italy-watchdog-orders-meta-to-halt-whatsapp-terms-barring-rival-ai-chatbots/
- https://www.techinasia.com/news/italy-watchdog-orders-meta-halt-whatsapp-terms-ai-bot

Introduction
The courts in India have repeatedly emphasised the importance of “enhanced customer protection” and “limited liability” for customers. The rationale behind these imperatives is to guard against exploitation by institutions equipped with every means to manipulate customers. India, with its looming financial literacy gaps, needs to curb any manipulation on the part of banking institutions. Various studies have highlighted this gap in recent times: according to the National Centre for Financial Education, only 27% of Indians are financially literate, well below the global average of 42%. With only 19% of millennials exhibiting sufficient financial awareness yet expressing high trust in their own financial skills, the issue is worrisome, and the rising number of financial frauds only intensifies it.
Zero Liability in Cyber Frauds: Regulatory Safeguards for Digital Banking Customers
In light of the growing emphasis on financial inclusion and consumer protection, and in response to the recent rise in complaints regarding unauthorised debits from customer accounts and cards, the framework for assessing customer liability in such cases has been re-evaluated. The RBI’s circular dated July 6, 2017, titled “Customer Protection - Limited Liability of Customers in Unauthorised Electronic Banking Transactions”, serves as the foundation for regulatory protections for Indian customers of digital banking. The circular, which acknowledges the exponential increase in electronic transactions and related scams, lays out a clear and organised framework for determining customer accountability. It assigns proportional obligations for unauthorised transactions resulting from system-level breaches, customer negligence, and contributory negligence by the bank. Most importantly, it establishes the zero liability principle, which protects customers from monetary losses in cases where the bank or another part of the system is at fault and the customer promptly reports the breach.
This directive’s sophisticated approach to consumer protection is what makes it unique. It requires banks to set up strong fraud prevention systems, proactive alerting, and round-the-clock reporting channels. Furthermore, it significantly alters the power dynamics between financial institutions and customers by placing the onus of demonstrating customer negligence entirely on the bank. The circular emphasises prompt reversal of funds to impacted customers and requires banks to implement Board-approved policies on customer liability and redress. As a result, it is a consumer rights charter rather than just a compliance document, promoting confidence and financial accountability in India’s digital banking sector.
Judicial Endorsement in Reinforcing the Zero Liability Principle
In Suresh Chandra Negi & Anr. v. Bank of Baroda & Ors. (Writ (C) No. 24192 of 2022), the Allahabad High Court reaffirmed that the burden of proving customer accountability rests firmly on the banking institution, thereby reaffirming the zero liability principle in cases of unauthorised electronic banking transactions. The Division Bench emphasised the regulatory requirement that banks furnish adequate proof before assigning blame to customers, citing Clause 12 of the RBI’s circular dated July 6, 2017, “Customer Protection - Limited Liability of Customers in Unauthorised Electronic Banking Transactions”. In a similar scenario, the Bombay High Court held that a customer is entitled to zero liability when an unauthorised transaction occurs due to a third-party breach, where the deficiency lies neither with the bank nor the customer, provided the fraud is promptly reported.
The zero liability principle, as envisaged under Clause 8 of the RBI circular, has emerged as a cornerstone of consumer protection in India’s digital banking ecosystem.
Another landmark judgment that brought this principle to the fore in addressing banking frauds is Hare Ram Singh v. RBI & Ors. (W.P. (C) 13497/2022), delivered by the Delhi High Court, an important legal turning point in the development of the zero liability principle under the RBI’s 2017 framework. The court held the State Bank of India (SBI) liable for a cyber fraud incident even though the transactions were authenticated by OTP, reiterating the need to evaluate customer diligence in light of new fraud tactics such as phishing and vishing. The ruling made it clear that when complex social engineering or technical manipulation is used, banks remain accountable even if they relied solely on OTP validation. The court’s emphasis on the bank bearing the burden of proof, in accordance with RBI standards, strengthens the legal protection available to victims of unauthorised electronic banking transactions.
Importantly, this ruling places the full burden of securing digital banking systems on financial organisations and supports the judiciary’s growing acknowledgement of the digital asymmetry between banks and consumers. It emphasises that prompt reporting by the customer, any disclosure of critical credentials, and the bank’s own operational errors must all be taken into consideration when determining culpability. As a result, the decision establishes a strong precedent that will increase consumer confidence, promote systemic advancements in digital risk management, and better integrate the zero liability standard into Indian digital banking law. In a time when cyber vulnerabilities are growing, it acts as a beacon for financial accountability.
Conclusion
The Zero Liability Principle serves as a vital safety net for customers navigating an increasingly intricate and precarious financial environment in a time when digital transactions are the foundation of contemporary banking. In addition to codifying strong safeguards against unauthorized electronic transactions, the RBI’s 2017 framework rebalanced the fiduciary relationship by putting financial institutions squarely in charge. Through significant rulings, the courts have upheld this protective culture and emphasised that banks, not the victims of cybercrime, bear the burden of proof.
It will be crucial to apply these principles consistently, review them frequently, and raise public awareness as India transitions to a more digital economy. Ensuring that consumers are not only protected but also empowered must become more than just a policy on paper.
References
- https://www.business-standard.com/content/specials/making-money-vs-managing-money-india-s-critical-financial-literacy-gap-125021900786_1.html
- https://www.livelaw.in/high-court/allahabad-high-court/allahabad-high-court-ruling-bank-liability-unauthorized-electronic-transaction-and-customer-fault-297962
- https://www.mondaq.com/india/white-collar-crime-anti-corruption-fraud/1635616/cyber-law-series-2-issue-10-the-zero-liability-principle-in-cyber-fraud-hare-ram-singh-v-reserve-bank-of-india-ors-case

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing several tasks and can assist individuals and business entities alike. Certain regulatory mechanisms have also been established for their ethical and reasonable use. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and various cyber frauds. Such misuse of advanced technologies has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of creating images and videos that look realistic but are in fact generated by machine-learning algorithms. Since the technology is easily accessible, fraudsters misuse it to commit various cyber crimes, deceiving and scamming people with manipulated audio and video content that looks very realistic but is entirely fake.
Voice cloning
Audio can be deepfaked too: a voice clone created with deepfake technology closely resembles a real person's voice but is entirely synthetic. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. The scammer, using AI deepfake technology, impersonated the face of the former colleague and asked for financial help of ₹40,000.
Uttarakhand Police issue warning on the rising trend of AI-based scams
Recently, Uttarakhand Police’s Special Task Force (STF) issued a warning acknowledging the spread of AI-based scams, such as deepfake and voice-cloning scams, targeting innocent people. Police expressed concern that several incidents have been reported in which innocent people were lured by cybercriminals. Exploiting advanced technologies, the criminals manipulate victims into believing they are talking to close friends or relatives, when in reality they are interacting with fake voice clones or deepfake video calls. The cybercriminals then ask for immediate financial help, ultimately causing financial losses to the victims of such scams.
Tamil Nadu Police Issues advisory on deepfake scams
To deceive people and target them for financial gain, cybercriminals misuse deepfake technologies. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams. Fraudsters are creating highly convincing images, videos, and voice clones to defraud innocent people and make them victims of financial fraud. The advisory recommends limiting the personal data you share online and adjusting your privacy settings, and urges people to promptly report any suspicious activity or cyber crime to the 1930 helpline or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to video quality: deepfake videos often have poor quality and unusual blurring, which calls their genuineness into question. Deepfake videos also often loop or freeze unnaturally, indicating that the content may be fabricated.
- Whenever you receive a request for immediate financial help, pause and verify the situation by directly contacting the person on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, posing sudden emergencies and demanding financial support immediately, so that the victim has no time to think and is pressured into a quick decision.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Pay attention to anything suspicious, and avoid sharing voice notes with unknown users, because scammers might use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, one powerful resource is the National Cyber Crime Reporting Portal (www.cybercrime.gov.in), along with the 1930 toll-free helpline, where you can report cyber fraud, including financial crimes.
Conclusion
AI-powered technologies are leveraged by cybercriminals to commit crimes such as deepfake and voice-clone scams, in which innocent people are lured by scammers. There is therefore a need for awareness and caution among the public. We should stay vigilant about the growing incidence of AI-based cyber scams and follow best practices to stay protected.
References
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/