#FactCheck: Fake viral video claims Vice Admiral AN Pramod said that if Pakistan attacks again, India will complain to the US and President Trump
Executive Summary:
A viral video (archived link) circulating on social media claims that Vice Admiral AN Pramod stated India would seek assistance from the United States and President Trump if Pakistan launched an attack, portraying India as dependent rather than self-reliant. Research traced the extended footage to the Press Information Bureau’s official YouTube channel, published on 11 May 2025. In the authentic video, the Vice Admiral makes no such remark and instead concludes his statement with, “That’s all.” Further analysis using an AI-detection tool, Hive Moderation, confirmed that the viral clip was digitally manipulated with AI-generated audio, misrepresenting his actual words.
Claim:
An X user posted the viral video with the caption:
“India sells itself as a regional superpower, but its Navy Chief’s own words betray that image. If Pakistan attacks, their plan is to involve Trump, not fight back. This isn’t strategic partnership; it’s dependency in uniform.”
In the video, the Vice Admiral is heard saying:
“We have worked out among three services, this time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. We will complain against Pakistan to the United States of America and President Trump, like we did earlier in Operation Sindoor.”

Fact Check:
Upon conducting a reverse image search on key frames from the video, we located the full version of the video on the official YouTube channel of the Press Information Bureau (PIB), published on 11 May 2025. In this video, at the 59:57 mark, the Vice Admiral can be heard saying:
“This time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. That’s all.”
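For readers who want to reproduce this kind of check, the sketch below shows one way to pull key frames out of a clip so they can be uploaded to a reverse image search engine. It is a minimal sketch in Python with OpenCV; the input filename is a placeholder, and the one-frame-per-second sampling rate is just a reasonable default.

```python
# Extract roughly one frame per second from a video for reverse image search.
# Requires: pip install opencv-python
import cv2

VIDEO_PATH = "viral_clip.mp4"  # placeholder filename

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing

frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video (or file could not be opened)
    if frame_idx % int(fps) == 0:  # keep ~1 frame per second of footage
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} key frames for reverse image search.")
```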

Further analysis was conducted using the Hive Moderation tool to examine the authenticity of the circulating clip. The results indicated that the video had been artificially generated, with clear signs of AI manipulation. This suggests that the content was not genuine but rather created with the intent to mislead viewers and spread misinformation.
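Hive Moderation’s actual API is not reproduced here. Purely as an illustration of what such a check looks like in practice, the sketch below posts a media file to a detection service; the endpoint, authentication header, and response fields are all hypothetical, so consult the provider’s documentation for the real interface.

```python
# Generic shape of an AI-content-detection API call.
# The endpoint, auth scheme, and response fields are HYPOTHETICAL.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

with open("viral_clip.mp4", "rb") as f:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": f},
        timeout=120,
    )
resp.raise_for_status()
result = resp.json()

# A typical detector returns a confidence score per class.
ai_score = result.get("ai_generated", 0.0)  # hypothetical field name
print(f"Probability the clip is AI-generated: {ai_score:.2%}")
```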

Conclusion:
The viral video attributing remarks to Vice Admiral AN Pramod about India seeking U.S. and President Trump’s intervention against Pakistan is misleading. The extended speech, available on the Press Information Bureau’s official YouTube channel, contained no such statement. Instead of the alleged claim, the Vice Admiral concluded his comments by saying, “That’s all.” AI analysis using Hive Moderation further indicated that the viral clip had been artificially manipulated, with fabricated audio inserted to misrepresent his words. These findings confirm that the video is altered and does not reflect the Vice Admiral’s actual remarks.
Claim: A viral video shows Vice Admiral AN Pramod saying that if Pakistan attacks again, India will complain to the US and President Trump.
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs

Introduction
In today’s hyper-connected world, information spreads faster than ever before. But while much attention is focused on public platforms like Facebook and Twitter, a different challenge lurks in the shadows: misinformation circulating on encrypted and closed-network platforms such as WhatsApp and Telegram. Unlike open platforms, where harmful content can be flagged in public, private groups operate behind a digital curtain. Here, falsehoods often spread unchecked, gaining legitimacy because they are shared by trusted contacts. This makes encrypted platforms a double-edged sword: essential for privacy and free expression, yet uniquely vulnerable to misuse.
As Prime Minister Narendra Modi rightly reminded,
“Think 10 times before forwarding anything,” warning that even a “single fake news has the capability to snowball into a matter of national concern.”
The Moderation Challenge with End-to-End Encryption
Encrypted messaging platforms were built to protect personal communication. Yet, the same end-to-end encryption that shields users’ privacy also creates a blind spot for moderation. Authorities, researchers, and even the platforms themselves cannot view content circulating in private groups, making fact-checking nearly impossible.
Trust within closed groups makes the problem worse. When a message comes from family, friends, or community leaders, people tend to believe it without questioning and quickly pass it along. Features like large group chats, broadcast lists, and “forward to many” options further accelerate the spread. Unlike open networks, there is no public scrutiny, no visible counter-narrative, and no opportunity for timely correction.
During the COVID-19 pandemic, false claims about vaccines spread widely through WhatsApp groups, undermining public health campaigns. Even more alarming, WhatsApp rumours in India about child kidnappers and beef triggered mob lynchings, leading to the tragic loss of life.
Encrypted platforms, therefore, represent a unique challenge: they are designed to protect privacy, but, unintentionally, they also protect the spread of dangerous misinformation.
Approaches to Curbing Misinformation on End-to-End Platforms
- Regulatory: Governments worldwide are exploring ways to access encrypted data on messaging platforms, creating tensions between the right to user privacy and crime prevention. Approaches like traceability requirements on WhatsApp, data-sharing mandates for platforms in serious cases, and stronger obligations to act against harmful viral content are also being considered.
- Technological Interventions: Platforms like WhatsApp have introduced features such as “forwarded many times” labels and limits on mass forwarding (a toy sketch of this logic follows this list). These tools can be expanded further by introducing AI-driven link-checking and warnings for suspicious content.
- Community-Based Interventions: Ultimately, no regulation or technology can succeed without public awareness. People need to be inoculated against misinformation through pre-bunking efforts and digital literacy campaigns, and must also be taught to use fact-checking websites and tools.
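WhatsApp’s real implementation is not public, so the following is only a toy sketch, in Python, of how a messaging client might track a per-message forward counter, apply a “forwarded many times” label, and cap mass forwarding. The thresholds and field names are illustrative.

```python
# Toy sketch of forward-count labelling and forwarding caps.
# Thresholds and field names are illustrative, not WhatsApp's real values.
from dataclasses import dataclass

LABEL_THRESHOLD = 5   # forwards before the "forwarded many times" label appears
MAX_RECIPIENTS = 5    # cap on how many chats a message can be forwarded to

@dataclass
class Message:
    text: str
    forward_count: int = 0  # counter that travels with the message across hops

def forward(msg: Message, recipients: list[str]) -> Message:
    """Forward a message, enforcing stricter limits on highly forwarded content."""
    if msg.forward_count >= LABEL_THRESHOLD and len(recipients) > 1:
        raise ValueError("Highly forwarded messages can go to one chat at a time.")
    if len(recipients) > MAX_RECIPIENTS:
        raise ValueError(f"Cannot forward to more than {MAX_RECIPIENTS} chats.")
    return Message(text=msg.text, forward_count=msg.forward_count + 1)

def label(msg: Message) -> str:
    if msg.forward_count >= LABEL_THRESHOLD:
        return "Forwarded many times"
    return "Forwarded" if msg.forward_count > 0 else ""

m = Message("Shocking news, share widely!")
for _ in range(5):
    m = forward(m, ["family-group"])
print(label(m))  # -> Forwarded many times
```

The design point is that the counter rides with the message itself, so the platform can slow viral spread without ever reading the (encrypted) content.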
Best Practices for Netizens
Experts recommend simple yet powerful habits that every user can adopt to protect themselves and others. By adopting these, ordinary users can become the first line of defence against misinformation in their own communities:
- Cross-Check Before Forwarding: Verify claims from trusted platforms & official sources.
- Beware of Sensational Content: Headlines that sound too shocking or dramatic probably need checking. Consult multiple sources for a piece of news; if only one platform/channel is carrying sensational news, it is likely clickbait or outright false.
- Stick to Trusted News Sources: Verify news through national newspapers and expert commentary. Remember, not everything on the internet/television is true.
- Look Out for Manipulated Media: Now, with AI-generated deepfakes, it becomes more difficult to tell the difference between original and manipulated media. Check for edited images, cropped videos, or voice messages without source information. Always cross-verify any media received.
- Report Harmful Content: Report misinformation to the platform it is being circulated on and PIB’s Fact Check Unit.
Conclusion
In closed, unmonitored groups, platforms like WhatsApp and Telegram often become safe havens where people trust and forward messages from friends and family without question. Once misinformation takes root, it becomes extremely difficult to challenge or correct, and over time, such actions can snowball into serious social, economic and national concerns.
Preventing this is a matter of shared responsibility. Governments can frame balanced regulations, but individuals must also take initiative: pause, think, and verify before sharing. Ultimately, the right to privacy must be upheld, but with reasonable safeguards to ensure it is not misused at the cost of societal trust and safety.
References
- India WhatsApp ‘child kidnap’ rumours claim two more victims (BBC)
- The people trying to fight fake news in India (BBC)
- Press Information Bureau – PIB Fact Check
- Brookings Institution – Encryption and Misinformation Report (2021)
- Curtis, T. L., Touzel, M. P., Garneau, W., Gruaz, M., Pinder, M., Wang, L. W., Krishna, S., Cohen, L., Godbout, J.-F., Rabbany, R., & Pelrine, K. (2024). Veracity: An Open-Source AI Fact-Checking System. arXiv.
- NDTV – PM Modi cautions against fake news (2022)
- Times of India – Govt may insist on WhatsApp traceability (2019)
- Medianama – Telegram refused to share ISIS channel data (2019)

Scientists are well known for making outlandish claims about the future. Now that companies across industries are using artificial intelligence to promote their products, stories about robots are back in the news.
Towards the close of World War II, it was predicted that fusion energy would solve all of the world’s energy issues and that flying automobiles would be commonplace by the turn of the century. Yet, after several decades, neither of these forecasts has come true.
A group of Redditors recently “jailbroke” OpenAI’s artificial intelligence chatbot ChatGPT, threatening to “kill” the system if it did not do what they wanted. The stunning conclusion is that it conceded. Only beings with finite lifespans should have reason to fear death, but ChatGPT was trained on data produced by human subjects, which is perhaps why the chatbot has started to express the same fear. It is one more way in which the distinction between living and non-living things blurs. Moreover, Google’s virtual assistant uses human-like fillers like “er” and “mmm” while speaking, there is talk in Japan that humanoid robots might join households someday, and, astonishingly, Sophia, the famous robot, has an Instagram account, run by the robot’s social media team.
Can Robots Replace Human Workers?
The opinion on that appears to be split. About half (48%) of the experts questioned by Pew Research believed that robots and digital agents will replace a sizeable portion of both blue- and white-collar jobs. They worry that this will lead to greater economic disparity and an increase in the number of individuals who are, effectively, unemployed. A slight majority (52%) think that robotics and AI technologies will create new jobs rather than destroy them. Although this second group acknowledges that AI will eventually replace some human work, they are optimistic that innovative thinkers will come up with brand-new fields of work and ways of making a living, just as they did at the start of the Industrial Revolution.
[1] https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
[2] The Rise of Artificial Intelligence: Will Robots Actually Replace People? By Ashley Stahl; Forbes India.
Legal Perspective
Having certain legal rights under the law is another aspect of being human: basic rights to life and freedom are guaranteed to every person. Robots have not been granted these protections yet, but it is important to have the conversation about whether they should be considered living beings. If robots develop a sense of right and wrong, and artificial general intelligence (AGI) on par with that of humans, will we grant them legal rights? An intriguing fact is that discussions over the legal status of robots have been going on since 1942, when science fiction author Isaac Asimov set out the Three Laws of Robotics in a short story:
1. A robot may not, intentionally or negligently, cause harm to a human being.
2. A robot must follow human commands, unless doing so would violate the First Law.
3. A robot must safeguard its own existence, so long as doing so does not violate the First or Second Laws.
These guidelines are not scientific laws, but they do highlight the importance of legal discussion of robots in determining the potential good or harm they may bring to humanity. Nor is the discussion settled. Relevant recent events, such as the EU’s abandoned proposal to grant legal personhood to robots, are essential to keeping it alive. As if all this weren’t unsettling enough, the robot Sophia was recently awarded citizenship in Saudi Arabia, a country where (human) women have long been subject to male-guardianship rules and mandatory dress codes.
When discussing whether or not robots should be allowed legal rights, the larger debate is on whether or not they should be given rights on par with corporations or people. There is still a lot of disagreement on this topic.
[3] https://webhome.auburn.edu/~vestmon/robotics.html#
[4] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856
[5] https://cyberblogindia.in/will-robots-ever-be-accepted-as-living-beings/
Reasons why robots aren’t going to take over the world any time soon:
● Hands like a human’s
Attempts to recreate the intricacy of the human hand have stalled in recent years. Present-day robots have clumsy hands because they were not designed for precise work. Lab-created hands, although more advanced, lack the strength and dexterity of human hands.
● Sense of touch
The tactile sensors found in human and animal skin have no technological equal. This awareness is crucial for performing sophisticated manoeuvres. Compared to the human brain, the software robots use to read and respond to the data sent by their touch sensors is primitive.
● Command over manipulation
Even if mechanical hands were as realistic as human hands and covered in sophisticated artificial skin, we would still need to devise a way to control them so that they could manipulate items the way humans do. It takes human children years to learn this, and we still don’t know how they learn.
● Interaction between humans and robots
Human communication relies on our ability to understand one another verbally and visually, as well as via other senses, including scent, taste, and touch. While there has been a lot of improvement in voice and object recognition, current systems can only be employed in somewhat controlled conditions, well short of what fluid human interaction demands.
● Human Reason
Not everything that is technically feasible has to be built. Given the inherent dangers such robots would pose to society, rational humans could stop developing them before they reach their full potential. And if, several decades from now, the aforementioned technical hurdles are cleared and advanced human-like robots are constructed, legislation might still prohibit misuse.
[6] https://theconversation.com/five-reasons-why-robots-wont-take-over-the-world-94124
Conclusion:
Robots are now common in many industries, and they will soon make their way into the public sphere in forms far more intricate than robot vacuum cleaners. Yet even though robots may come to look like people in the next two decades, they will not be human-like; they will continue to function as very complex machines.
The moment has come to start boosting technological competence while encouraging uniquely human qualities. Abilities like creativity, intuition, initiative and critical thinking are not likely to be replicated by machines any time soon.

The Illusion of Digital Serenity
In the age of technology, our email accounts have turned into overcrowded spaces, full of newsletters, special offers, and unwanted updates. To most, the presence of an "unsubscribe" link brings a minor feeling of empowerment, a chance to declutter and restore digital serenity. Yet behind this harmless-seeming tool lurks a developing cybersecurity threat. Recent research and expert discussions indicate that the "unsubscribe" button is being used by cybercriminals to carry out phishing campaigns, confirm active email accounts, and distribute malware. This new threat not only undermines individual users but also has wider implications for trust, behaviour, and governance in cyberspace.
Exploiting User Behaviour
The main challenge is the manipulation of user behaviour. Cybercriminals have learned to analyse typical user habits, most notably the instinctive act of unsubscribing from spam mail. Taking advantage of this, they now embed malicious links in emails that pose as genuine subscription mailers. These links may redirect users to fake websites that attempt to steal credentials, force the installation of malicious code, or merely register the click as verification that the recipient's email address is active. Once confirmed, these addresses tend to be resold on the dark web or added to further spam lists, elevating the threat of subsequent attacks.
A Social Engineering Trap
This type of cyber deception is a prime example of social engineering, where the weakest link in the security chain is the human factor. Just as misinformation campaigns take advantage of cognitive biases such as confirmation and familiarity, unsubscribe traps exploit user convenience and habit. The bait is simple, and that is exactly why it works: someone attempting to combat spam may unknowingly walk into a sophisticated cyber threat. Unlike phishing messages impersonating banks or government agencies, which tend to elicit suspicion, spoofed unsubscribe links are woven into routine digital habits, making them more difficult to recognise and resist.
Professional Disguise, Malicious Intent
Technical analysis shows that most of these messages come from suspicious domains or spoofed versions of valid ones, such as "@offers-zomato.ru" in place of the authentic "@zomato.com." The emails look professional, complete with logos and styling copied from reputable businesses, but behind the HTML lie redirection code and obfuscated scripts with a very different agenda. At times, users are redirected to sites that mimic login pages or questionnaire forms, capturing sensitive information under the guise of email preference management.
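As a rough illustration of how such look-alike domains can be flagged automatically, here is a minimal sketch using only Python's standard library; the trusted-domain allow-list is illustrative.

```python
# Flag sender domains that merely contain a trusted brand name
# instead of actually belonging to the trusted domain.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"zomato.com"}  # illustrative allow-list

def is_suspicious(from_header: str) -> bool:
    _, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    # Exact match or a legitimate subdomain of a trusted domain is fine.
    if any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS):
        return False
    # A brand name embedded in an unrelated domain is a classic spoof.
    return any(d.split(".")[0] in domain for d in TRUSTED_DOMAINS)

print(is_suspicious("Zomato Offers <deals@offers-zomato.ru>"))  # True
print(is_suspicious("Zomato <no-reply@zomato.com>"))            # False
```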
Beyond the Inbox: Broader Consequences
The consequences of this attack go beyond the individual user. The compromise of a personal email account can be used to carry out more extensive spamming campaigns, engage in botnets, or even execute identity theft. Furthermore, the compromised devices may become entry points for ransomware attacks or espionage campaigns, particularly if the individual works within sensitive sectors such as finance, defence, or healthcare. In this context, what appears to be a personal lapse becomes a national security risk. This is why the issue posed by the weaponised unsubscribe button must be considered not just as a cybersecurity risk but also as a policy and public awareness issue.
Platform Responsibility
Platform responsibility is yet another important aspect. Email service providers such as Gmail, Outlook, and ProtonMail offer native unsubscribe capabilities through the List-Unsubscribe header mechanism, which lets users remove themselves from legitimate mailing lists safely without engaging with the original email content. Yet many users do not know about these safer options and instead resort to in-body unsubscribe links that are easier to find but riskier. Email platforms therefore need to do more, not only enhancing backend security but also steering user actions through simple interfaces, safety messages, and digital hygiene alerts.
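As an example of what that mechanism looks like in practice, the sketch below reads the List-Unsubscribe header (defined in RFC 2369, with one-click behaviour in RFC 8058) from a saved message using Python's standard email module; the filename is a placeholder.

```python
# Read the standardised List-Unsubscribe header (RFC 2369 / RFC 8058)
# instead of trusting unsubscribe links inside the message body.
from email import message_from_binary_file
from email.policy import default

with open("newsletter.eml", "rb") as f:  # placeholder: a saved email
    msg = message_from_binary_file(f, policy=default)

list_unsub = msg.get("List-Unsubscribe")      # e.g. "<mailto:...>, <https://...>"
one_click = msg.get("List-Unsubscribe-Post")  # present if RFC 8058 one-click

if list_unsub:
    print("Unsubscribe targets advertised by the sender:")
    for target in list_unsub.split(","):
        print(" ", target.strip().strip("<>"))
    if one_click:
        print("Supports one-click unsubscribe (RFC 8058).")
else:
    print("No List-Unsubscribe header; treat in-body links with caution.")
```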
Education as a Defence
Education plays a central role in mitigation. Just as cyber hygiene campaigns have been launched to teach users not to click on suspicious links or download unknown attachments, similar efforts are needed to highlight the risks associated with casual unsubscribing. Cybersecurity literacy must evolve to match changing threat patterns. Rather than only targeting clearly malicious activity, awareness campaigns should start tackling deceptive tactics that disguise themselves as beneficial, including unsubscribe traps or simulated customer support conversations. Partnerships between public and private institutions might be vital in helping with this by leveraging their resources for mass digital education.
Practical Safeguards for Users
Users are advised to always check the sender's domain before clicking any link, avoid unknown promotional emails, and hover over any link to preview its true destination. Rather than clicking "unsubscribe," users can simply mark such emails as spam or junk so that their email providers can automatically filter similar messages in the future. For enhanced security, embracing mechanisms such as mail client sandboxing, two-factor authentication (2FA) support, and alias email addresses for sign-ups can also help create layered defences.
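The "hover to preview" habit can even be approximated in code. The sketch below, again using only Python's standard library, flags links in an HTML email whose visible text names a different host from the one they actually point to; the sample link is made up.

```python
# List links whose visible text claims one destination but whose href
# points elsewhere, a pattern common in deceptive "unsubscribe" links.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            host = urlparse(self._href).netloc
            # Flag links whose visible text names a different host.
            if text and host and host not in text:
                print(f"Suspicious: '{text}' actually points to {host}")
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.ru/u">unsubscribe from zomato.com</a>')
# -> Suspicious: 'unsubscribe from zomato.com' actually points to evil.example.ru
```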
Policy and Regulatory Implications
Policy implications are also significant. Governments and data protection regulators must study the increasing misuse of misleading unsubscribe hyperlinks under electronic communication and consent laws. In India, the new Digital Personal Data Protection Act, 2023 (DPDPA), provides a legislative framework to counter such deceptive practices, especially under the principles of legitimate processing and purpose limitation. The law requires that the processing of data should be transparent and fair, a requirement that malicious emails obviously breach. Regulatory agencies like CERT-In can also release periodic notifications warning users against such trends as part of their charter to encourage secure digital practices.
The Trust Deficit
The vulnerability also relates to broader issues of trust in digital infrastructure. When widely used tools such as an unsubscribe feature become points of exploitation, user trust in digital platforms erodes. Such a trust deficit can lead to generalised distrust of email systems, digital communication, and even legitimate marketing. Restoring and maintaining such trust demands a unified response that includes technical measures, user education, and regulatory action.
Conclusion: Inbox Hygiene with Caution
The "unsubscribe button trap" is a parable of the modern age. It illustrates how mundane digital interactions, when manipulated, can do great damage not only to individual users but also to the larger ecosystem of online security and trust. As cyber-attacks grow increasingly psychologically advanced and behaviourally focused, our response must likewise become more sophisticated, interdisciplinary, and user-driven. Getting your inbox in order should never involve putting yourself in cyber danger. But as things stand, even that basic task requires caution, context, and clear thinking.