# FactCheck: Viral Deepfake Videos Show Modi, Shah, and Jaishankar Apologising for "Operation Sindoor" Blunder
Executive Summary:
Recently, we came across AI-generated deepfake videos that have gone viral on social media, purporting to show Prime Minister Narendra Modi, Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar publicly apologizing for initiating "Operation Sindoor." Our research concluded that the videos are fake and use artificial intelligence tools to mimic the leaders' voices and appearances. The purpose of this report is to provide a clear understanding of the facts and to reveal the truth behind these viral videos.
Claim:
Multiple videos circulating on social media claim to show Prime Minister Narendra Modi, Union Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar publicly apologising for launching "Operation Sindoor." The videos, which are being circulated to suggest a political and diplomatic failure, show the leaders speaking emotionally and expressing regret over the operation.



Fact Check:
Our research revealed that the widely shared videos are deepfakes made with artificial intelligence tools. The videos emerged after "Operation Sindoor", an operation conducted by the Indian Armed Forces following the 22 April 2025 Pahalgam terror attack, and were intended to spread false propaganda and misinformation.
The first step in the fact-checking process was identifying important frames and visual clues in the videos that seemed suspicious, such as strange lip movements, misaligned audio, and facial distortions. We then submitted audio samples and video frames to Hive AI Content Moderation, a program for detecting AI-generated content. After examining audio, facial, and visual cues, Hive's deepfake detection system verified that all three videos were AI-generated.
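The screening workflow described above, running a detector over individual frames and the audio track, then combining the scores into a verdict, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the scores are hard-coded stand-ins for detector output, and the threshold and aggregation rule are our own assumptions, not Hive's actual decision logic or API.

```python
# Hypothetical sketch of aggregating per-frame synthetic-likelihood scores
# and an audio score into a single verdict. Threshold and rule are
# illustrative assumptions, not Hive's real logic.

def classify_video(frame_scores, audio_score, threshold=0.7):
    """Return 'synthetic' if the average visual score or the audio
    score exceeds the threshold, else 'likely authentic'."""
    if not frame_scores:
        raise ValueError("need at least one frame score")
    avg_visual = sum(frame_scores) / len(frame_scores)
    if avg_visual >= threshold or audio_score >= threshold:
        return "synthetic"
    return "likely authentic"

# Frames with strong deepfake artefacts score high on all channels.
print(classify_video([0.92, 0.88, 0.95], audio_score=0.81))  # synthetic
print(classify_video([0.12, 0.08, 0.10], audio_score=0.15))  # likely authentic
```

In practice a detector reports richer signals (per-region facial scores, lip-sync confidence), but the principle of combining multiple weak cues into one verdict is the same.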
Below are three Hive Moderator result screenshots that clearly flag the videos as synthetic content, confirming that none of them are authentic or released by any official government source.



Conclusion:
The artificial intelligence-generated videos claiming that Prime Minister Narendra Modi, Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar apologized for launching "Operation Sindoor" are completely untrue. These deepfake videos are part of a purposeful disinformation campaign designed to mislead the public and incite political unrest. No such apology has been made by the Indian government, and no such statement exists in any official or verified capacity. The public must exercise caution, avoid disseminating unverified videos, and rely on reliable fact-checking sources. Beyond eroding public trust, such disinformation can seriously affect national discourse and security.
- Claim: India's top executives apologize publicly for Operation Sindoor blunder.
- Claimed On: Social Media
- Fact Check: AI Misleads
Related Blogs
Introduction
Conversations surrounding the scourge of misinformation online typically focus on the risks to social order, political stability, economic safety and personal security. An oft-overlooked aspect of this phenomenon is the fact that it also takes a very real emotional and mental toll on people. Even as we grapple with the big picture questions about financial fraud or political rumors or inaccurate medical information online, we must also appreciate the fact that being exposed to misinformation and becoming aware of one’s own vulnerability are both significant sources of mental stress in today’s digital ecosystem.
Inaccurate information causes confusion and worry, which has negative consequences for mental health. Misinformation may also impair people's sense of well-being by undermining their trust in institutions, authority figures, and their own judgment. The constant bombardment of misinformation can lead to information overload, wherein people are unable to discriminate between legitimate sources and misleading content, resulting in mental exhaustion and a sense of being overwhelmed by the sheer volume of information available. Vulnerable groups such as children, the elderly, and those with pre-existing health conditions are more sensitive or susceptible to the negative effects of misinformation.
How Does Misinformation Endanger Mental Health?
Misinformation on social media platforms is a matter of public health because it has the potential to confuse people, lead to poor decision-making and result in cognitive dissonance, anxiety and unwanted behavioural changes.
Unconstrained misinformation can also lead to social disorder and the prevalence of negative emotions amongst larger numbers, ultimately causing a huge impact on society. Therefore, understanding the spread and diffusion characteristics of misinformation on Internet platforms is crucial.
The spread of misinformation can elicit different emotions of the public, and the emotions also change with the spread of misinformation. Factors such as user engagement, number of comments, and time of discussion all have an impact on the change of emotions in misinformation. Active users tend to make more comments, engage longer in discussions, and display more dominant negative emotions when triggered by misinformation. Understanding the evolution pattern of emotions triggered by misinformation is also important in view of the public’s emotional fluctuations under the influence of misinformation, and social media often magnifies the impact of emotions and makes emotions spread rapidly in social networks. For example, the sentiment of misinformation increases when there are sensitive topics such as political elections, viral trending topics, health-related information, communal and local information, information about natural disasters and more. Active misinformation on the Internet not only affects the public's psychology, mental health and behavior, but also has an impact on the stability of social order and the maintenance of social security.
Prebunking and Debunking To Build Mental Guards Against Misinformation
As the spread of misinformation and disinformation rises, so do the techniques aimed at tackling it. Prebunking, or attitudinal inoculation, is a technique for training individuals to recognise and resist deceptive communications before they can take root. Prebunking is a psychological method for mitigating the effects of misinformation, strengthening resilience and creating cognitive defences against future misinformation. Debunking provides individuals with accurate information to counter false claims and myths, correcting misconceptions and preventing the spread of misinformation. By presenting evidence-based refutations, debunking helps individuals distinguish fact from fiction.
What do health experts say about online misinformation?
“In the 21st century, mental health is crucial due to the overwhelming amount of information available online. COVID-19 pandemic-related misinformation was a prime example of this, with falsehoods spreading online and leading to increased anxiety, panic buying, fear of leaving home, and mistrust in health measures. To protect our mental health, it is essential to cultivate a discerning mindset, question sources, and verify information before consumption. Fostering a supportive community that encourages open dialogue and fact-checking can help navigate the digital information landscape with confidence and emotional support. Prioritising self-care routines, mindfulness practices, and seeking professional guidance are also crucial for safeguarding mental health in the digital information era.”
~ In conversation with CyberPeace, Dubai-based psychologist Aishwarya Menon (BA in Psychology and Criminology, University of Western Ontario, London; MA in Mental Health and Addictions, Humber College, University of Guelph, Toronto).
CyberPeace Policy Recommendations:
1) Countering misinformation is everyone's shared responsibility. To mitigate the negative effects of infodemics online, we must look at developing strong legal policies, creating and promoting awareness campaigns, relying on authenticated content on mass media, and increasing people's digital literacy.
2) Expert organisations actively verifying the information through various strategies including prebunking and debunking efforts are among those best placed to refute misinformation and direct users to evidence-based information sources. It is recommended that countermeasures for users on platforms be increased with evidence-based data or accurate information.
3) The role of social media platforms is crucial in the misinformation crisis, hence it is recommended that social media platforms actively counter the production of misinformation on their platforms. Local, national, and international efforts and additional research are required to implement the robust misinformation counterstrategies.
4) Netizens are advised or encouraged to follow official sources to check the reliability of any news or information. They must recognise the red flags by recognising the signs such as questionable facts, poorly written texts, surprising or upsetting news, fake social media accounts and fake websites designed to look like legitimate ones. Netizens are also encouraged to develop cognitive skills to discern fact and reality. Netizens are advised to approach information with a healthy dose of skepticism and curiosity.
Final Words:
As misinformation incidents on various subjects escalate, protecting mental health becomes crucial. Safeguarding our minds requires cognitive skills, media literacy, and verifying information through trusted sources, alongside prioritising mental health through self-care practices and staying connected with supportive, authenticated networks. Promoting prebunking and debunking initiatives is necessary. With these tools, netizens can protect themselves against the negative effects of misinformation and cultivate a resilient mindset in the digital information age.
References:
- https://www.hindawi.com/journals/scn/2021/7999760/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/
Introduction
Meta Platforms is facing a sustained surge of lawsuits, across the United States and beyond, that question not only particular practices but the very design and governance of its platforms. These cases, ranging from privacy breaches to youth mental health harms to antitrust issues, are indicative of a new era of judicial, regulatory, and civil society scrutiny of the duties of big tech firms. The main question is no longer whether harmful content appears on platforms, but to what extent platforms actively create harm-producing environments.
From Content to Conduct: A Turning Point in Legal Strategy
Over the years, Meta and other platforms have depended on legal safeguards like Section 230 of the US Communications Decency Act, which protects companies from liability for user-created content. That protection is now being tested in new ways.
Recent cases have shifted the emphasis away from particular pieces of content and onto the design of the platform itself. Courts are becoming more willing to consider whether features such as infinite scroll, algorithmic amplification, and engagement-based ranking systems contribute to quantifiable harm.
In March 2026, a California jury found that Meta and Google were negligent in designing platforms that led to youth addiction and mental health problems. The jury ordered Meta and Google to pay a joint sum of $6 million in damages, with 70 percent of the sum charged to Meta. It is a bellwether case, meaning its outcome will inform about 2,000 other pending cases brought by parents and school districts. This shift matters because it sidesteps long-standing legal barriers: when liability is linked to design decisions instead of user-created content, accountability begins to move.
The Youth Harm Cases: A Big Tobacco Moment
Social media platforms are coming under increased scrutiny by courts and regulators as products with quantifiable psychological impacts. The most impactful group of lawsuits against Meta is perhaps the one concerning youth mental health.
A day before the California verdict, a New Mexico jury ordered Meta to pay $375 million in damages for failing to safeguard young users from child predators on Instagram and Facebook, finding that the company had misled consumers about the safety of its products and violated state consumer protection laws.
Similar arguments have been presented in lawsuits filed by attorneys general in over 30 states, and the cases echo regulatory turning points in other industries such as tobacco. Courts are not merely asking whether there is harm; they are asking whether companies knowingly built systems that exploit behavioural weaknesses. Internal documents and accounts from former employees reportedly indicate that Meta profited by deliberately engineering its platforms to be addictive to children, with algorithmic features tailored to pull users into engagement loops, maximising time on platform to the detriment of wellbeing.
Meta has disputed these characterisations, arguing that teen mental health is multifaceted and cannot be attributed to any single app. The companies have indicated that they will appeal the verdicts.
Privacy and Data Misuse: An Ongoing Fault Line
Platform design is not the only legal issue Meta faces. Privacy-centred cases have been a recurrent problem over the last decade; previous suits alleged that Facebook tracked users even after they had logged out, scanned personal messages, and used personal data in ways beyond user expectations. More recently, in April 2026, a class action suit was filed claiming that Meta employees and third-party contractors accessed WhatsApp messages, despite the platform's long-standing end-to-end encryption guarantees.
These cases point to a consistent structural problem: consent mechanisms and privacy policies tend to lag behind the reality of data use, leaving a gap between legal compliance and what users actually know or expect.
Antitrust: A Win, But Not a Clean One
On one legal front, Meta prevailed. In November 2025, US District Court Judge James Boasberg ruled that Meta was not a social networking monopoly, finding that the FTC had not demonstrated that the company's acquisitions of Instagram and WhatsApp violated antitrust law. The FTC has since appealed the decision, continuing to argue that "Meta broke our antitrust laws by acquiring Instagram and WhatsApp, and that American consumers have been harmed by it."
The case also demonstrates a significant drawback of antitrust law as a tool for regulating tech companies. By the time the trial occurred, five years after the lawsuit was initiated, the social media market had evolved such that TikTok was a major competitor, undermining the FTC's market-definition claims. Even though the legal claim failed in this instance, the structural question of whether a few platforms hold too much power over mass communication remains unanswered.
Policy Takeaways: What This Means Going Forward
The accumulating lawsuits against Meta offer several lessons for policymakers.
- Platform design has become a regulatory topic. Laws should go beyond content regulation and deal with the construction of systems. Engagement maximising features can also increase harm, and this trade-off must be governed explicitly.
- Transparency should be mandatory and not discretionary. Privacy policies and disclosures on platforms are usually too complicated or ambiguous. Regulators might be required to make more transparent and standardised disclosures regarding the use of data and the operation of recommendation systems.
- Section 230 safeguards are under reinterpretation. Courts are becoming open to restrict immunity in cases where the harm is associated with the conduct of the platform and not the content of the user. This would redefine the law of all digital platforms, and not only Meta.
- Cross-border coordination is needed. Meta is an international company, yet the regulatory reaction is still divided. This will require more coordination among jurisdictions to guarantee uniform enforcement and to eliminate regulatory arbitrage.
Conclusion
Meta's lawsuits are not isolated cases. They reflect a broader reconsideration of how digital platforms are regulated and who is accountable when design decisions cause harm at scale. For the wider technology ecosystem, the implications are structural. Courts are starting to question not only what platforms host, but how they work and why they are built the way they are.
The age of minimal responsibility is being supplanted by a more demanding requirement: that platforms foresee, quantify, and mitigate the harms they produce. The outcome of these cases will not only decide Meta's legal future; it will shape the regulation of the digital economy for years to come.
References
- https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-social-media-trial-verdict
- https://www.pbs.org/newshour/show/jury-finds-meta-and-youtube-liable-in-landmark-youth-addiction-case
- https://www.cbsnews.com/news/meta-ftc-whatsapp-instagram/
- https://www.cnbc.com/2026/01/20/ftc-appeals-metaruling-antitrust-instagram-whatsapp.html
- https://www.bbc.com/news/articles/czjw0zgz9zyo

Introduction:
The Ministry of Civil Aviation, GOI, established the initiative ‘DigiYatra’ to ensure hassle-free and health-risk-free journeys for travellers/passengers. The initiative uses a single token of face biometrics to digitally validate identity, travel, and health along with any other data needed to enable air travel.
Cybersecurity is a top priority for the DigiYatra platform administrators, with measures implemented to mitigate risks of data loss, theft, or leakage. With over 6.5 million users, DigiYatra is an important step forward for India toward secure digital travel, with seamless integration of proactive cybersecurity protocols. This blog examines the developments, challenges, and implications involved in securing digital travel.
What is DigiYatra? A Quick Overview
DigiYatra is a flagship initiative by the Government of India to enable paperless travel, reducing identity checks for a seamless airport experience. The technology automatically processes passenger entry using a facial recognition system at all airport checkpoints, including the main entry, security check areas, and aircraft boarding.
This technology makes the boarding process quick and seamless as each passenger needs less than three seconds to pass through every touchpoint. Passengers’ faces essentially serve as their documents (ID proof and if required, Vaccine Proof) and their boarding passes.
DigiYatra has also enhanced airport security as passenger data is validated by the Airlines Departure Control System. It allows only the designated passengers to enter the terminal. Additionally, the entire DigiYatra Process is non-intrusive and automatic. In improving long-standing security and operational airport protocols, the platform has also significantly improved efficiency and output for all airport professionals, from CISF personnel to airline staff members.
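The touchpoint flow described above, where a passenger's face serves as their credential at each gate, can be illustrated with a minimal sketch: a live face embedding is compared against the enrolled one, and the e-gate opens only above a similarity threshold. All values, names, and the threshold here are illustrative assumptions, not DigiYatra's actual implementation.

```python
# Hypothetical sketch of a facial-match gate: embeddings are compared by
# cosine similarity, and a below-threshold match is referred to staff.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def gate_decision(live_embedding, enrolled_embedding, threshold=0.9):
    """Open the e-gate only if the live capture closely matches enrolment."""
    if cosine_similarity(live_embedding, enrolled_embedding) >= threshold:
        return "open"
    return "refer to staff"

enrolled = [0.2, 0.7, 0.1, 0.5]          # stored at enrolment
print(gate_decision([0.21, 0.69, 0.11, 0.5], enrolled))  # open
print(gate_decision([0.9, 0.1, 0.4, 0.0], enrolled))     # refer to staff
```

Real systems use high-dimensional embeddings from trained face models and liveness checks, but the gating logic reduces to this threshold comparison.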
Policy Origins and Framework
Rooted in the Government of India's Digital India campaign and enabled by the National Civil Aviation Policy (NCAP) 2016, DigiYatra aims to modernise air travel by integrating Aadhaar-based passenger identification. While Aadhaar is currently the primary ID, efforts are underway to include other identification methods. The platform, supported by stakeholders like the Airports Authority of India (26%) and private airports (14.8% each), must navigate stringent cybersecurity demands. Compliance with the Digital Personal Data Protection Act, 2023, ensures the secure use of sensitive facial recognition data, while the Aircraft (Security) Rules, 2023, mandate robust interoperability and data protection mechanisms across stakeholders. DigiYatra also aspires to democratise digital travel, extending its reach to underserved airports and non-tech-savvy travellers. As India refines its cybersecurity and privacy frameworks, learning from global best practices is essential to safeguarding data and ensuring seamless, secure air travel operations.
International Practices
Global practices offer crucial lessons for strengthening DigiYatra's cybersecurity and streamlining the seamless travel experience. Initiatives such as CLEAR in the USA and the Seamless Traveller initiative in Singapore offer actionable insights into expanding the system to its full potential. CLEAR is operational in 58 airports and has more than 17 million users. Singapore's Seamless Traveller has been active since the beginning of 2024 and aims for a 95% shift to automated lanes by 2026.
Additional measures India can adopt from international initiatives include regular audits and updates to cybersecurity policies. Further, India can aim for a cross-border policy for international travel. By implementing these recommendations, DigiYatra can not only improve data security and operational efficiency but also establish India as a leader in global aviation security standards, ensuring trust and reliability for millions of travellers.
CyberPeace Recommendations
Some recommendations for further improving upon our efforts for seamless and secure digital travel are:
- Strengthen the legislation on biometric data usage and storage.
- Collaborate with global aviation bodies to develop standardised operations.
- Cybersecurity technologies, such as blockchain for immutable data records, should be adopted alongside encryption standards, data minimisation practices, and anonymisation techniques.
- Foster a cybersecurity-first culture across aviation stakeholders.
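The blockchain recommendation above rests on one core idea: each audit record's hash covers the previous record's hash, so altering any earlier entry breaks every subsequent link. The following is a minimal hash-chain sketch; the record fields and helper names are illustrative assumptions, not a specification of DigiYatra's systems.

```python
# Minimal hash-chain sketch: tampering with any record invalidates the chain.
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; return False if any record was altered."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev_hash},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
           hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

log = []
add_record(log, {"event": "enrolment", "passenger_id": "P-001"})
add_record(log, {"event": "boarding-gate check", "passenger_id": "P-001"})
print(verify(log))                          # True
log[0]["payload"]["event"] = "tampered"
print(verify(log))                          # False
```

A production ledger would add distribution and consensus across stakeholders, but even this single-node chain makes silent record edits detectable.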
Conclusion
DigiYatra represents a transformative step in modernising India’s aviation sector by combining seamless travel with robust cybersecurity. Leveraging facial recognition and secure data validation enhances efficiency while complying with the Digital Personal Data Protection Act, 2023, and Aircraft (Security) Rules, 2023.
DigiYatra must address challenges like secure biometric data storage, adopt advanced technologies like blockchain, and foster a cybersecurity-first culture to reach its full potential. Expanding to underserved regions and aligning with global best practices will further solidify its impact. With continuous innovation and vigilance, DigiYatra can position India as a global leader in secure, digital travel.
References
- https://government.economictimes.indiatimes.com/news/governance/digi-yatra-operates-on-principle-of-privacy-by-design-brings-convenience-security-ceo-digi-yatra-foundation/114926799
- https://www.livemint.com/news/india/explained-what-is-digiyatra-how-it-will-work-and-other-questions-answered-11660701094885.html
- https://www.civilaviation.gov.in/sites/default/files/2023-09/ASR%20Notification_published%20in%20Gazette.pdf