#FactCheck: Viral image claims to show the Maldives mocking India with a "SURRENDER" sign over a photo of Prime Minister Narendra Modi
Executive Summary:
A manipulated photo of a Maldivian building, purportedly showing an oversized portrait of Indian Prime Minister Narendra Modi alongside the word "SURRENDER", spread widely on social media and drew reactions of fear, indignation, and anxiety. Our research, however, showed that the image was digitally altered and not authentic.

Claim:
A viral image claims that the Maldives displayed a huge portrait of PM Narendra Modi on a building front, along with the phrase “SURRENDER,” implying an act of national humiliation or submission.

Fact Check:
After a thorough examination of the viral post, we found that the image had been altered. The authentic photograph shows the same building displaying Prime Minister Modi’s portrait, but it does not carry the word “SURRENDER” seen in the viral version. We also checked the image with the Hive AI Detector, which marked it as 99.9% fake, further confirming that the viral image had been digitally manipulated.
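For readers who want to replicate this kind of screening in their own workflow, here is a minimal sketch of how an image can be submitted to an AI-image-detection service from a script. The endpoint URL, request fields, and response format below are assumptions for illustration only; they are not Hive’s documented API.

```python
# Minimal sketch: submit an image to an AI-image-detection service.
# NOTE: the endpoint URL, field names, and response shape are hypothetical
# placeholders, not Hive's documented API.
import requests

API_URL = "https://api.example-detector.com/v1/image"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                               # placeholder credential

def ai_image_score(path: str) -> float:
    """Upload an image and return the detector's 'AI-generated' score (0..1)."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["ai_generated_score"]  # assumed response field

if __name__ == "__main__":
    score = ai_image_score("viral_building_photo.jpg")  # hypothetical filename
    print(f"AI-generated likelihood: {score:.1%}")
```

In practice, a high score should be treated as one signal among several and combined with reverse image searches and checks against official sources, as we did here.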

During our research, we also found several images from Prime Minister Modi’s visit, including one of the same building displaying his portrait, shared by the official X handle of the Maldives National Defence Force (MNDF). The post read: “His Excellency Prime Minister Shri @narendramodi was warmly welcomed by His Excellency President Dr.@MMuizzu at Republic Square, where he was honored with a Guard of Honor by #MNDF on his state visit to Maldives.” This image, captured from a different angle, also does not feature the word “SURRENDER.”


Conclusion:
The claim that the Maldives displayed a portrait of PM Modi bearing a “SURRENDER” message is incorrect and misleading. The image has been digitally altered and is being spread to mislead people and stir up controversy. Users should verify the authenticity of photos before sharing them.
- Claim: Viral image shows the Maldives mocking India with a surrender sign
- Claimed On: Social Media
- Fact Check: False and Misleading

Key points: Data collection, Protecting Children, and Awareness
Introduction
Technology has evolved drastically over time, reshaping how people live; for even the smallest tasks, humans now rely on the computers they have built. The growing use of AI has taken a toll as well: many children are less inclined to work and write thoughtfully on their own and are drawn instead to television, video games, and mobile games, while schoolchildren use AI simply to complete their homework. Is that a good sign for the country’s future? Studies suggest that tools like ChatGPT threaten a child’s potential to be creative and produce the kind of original content that requires a human writer’s insight, and that such tools can erase students’ artistic voices rather than nurture their unique writing styles.
Do any of these browsers or search engines use your search history against you? And how do even non-users end up losing private information to such services?
Are governments taking any safety measures to protect their citizens’ rights?
Some of us might wonder how these two worlds merge, and whether the result is a boon or a curse.
Against that backdrop, here is the news that has been flooding the internet worldwide:
“Italian agency imposes strict measures on OpenAI’s ChatGPT”
Italy has become the first Western European country to take serious action over the use of OpenAI’s ChatGPT. The Italian data protection authority, the Garante, has issued mandates concerning the chatbot, raising concerns about privacy violations and the inability to verify users’ ages. The Garante has also claimed that the AI chatbot violates the EU’s General Data Protection Regulation (GDPR), and in a press release it demanded that OpenAI take the necessary corrective action.
To begin with, the Garante has demanded that OpenAI increase ChatGPT’s transparency and publish a comprehensive statement about its data-processing practices. OpenAI must either obtain users’ consent to process their data for training its AI model or establish another legitimate legal basis for doing so, and it must safeguard the privacy of users’ data.
In addition, OpenAI must take measures to keep minors from accessing the technology at such an early stage of life, including an age-verification system that prevents underage users from reaching explicit content. The Garante also wants OpenAI to run an awareness campaign informing users that their data is being processed to train its AI model. It has set a deadline of 30 April for OpenAI to comply; until then, the service is to remain blocked in the country.
Child safety while using ChatGPT
The Italian agency demands an age limit and an age-verification method that excludes users under 13, with parental consent required for users between 13 and 18; a sketch of the rule this implies follows below. This is a matter of safety: children might be exposed to explicit content inappropriate for their age or stumble onto illegitimate material, and the AI chatbot has no sense of which content is suitable for an underage audience. Because of such tools, sensitive information is already readily available to young students, putting them at risk. ChatGPT can also hinder young minds’ potential and ability to create original, creative content, and it poses a threat to the human motivation to write. Moreover, when students should be taking time to think and analyse, tools like ChatGPT make them lethargic, and the practice they need fades away.
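To make that rule concrete, here is a minimal sketch of the kind of age-gate logic it implies, assuming a self-reported birth date and a parental-consent flag. The function and field names are hypothetical and are not drawn from OpenAI’s systems.

```python
# Minimal sketch of the age-gate rule demanded by the Garante:
# block users under 13, require parental consent for 13-17, allow 18+.
# Names and fields are hypothetical, for illustration only.
from datetime import date

def access_decision(birth_date: date, parental_consent: bool, as_of: date) -> str:
    """Return 'blocked', 'needs_parental_consent', or 'allowed'."""
    age = as_of.year - birth_date.year - (
        (as_of.month, as_of.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "blocked"                                   # under 13: no access
    if age < 18:
        return "allowed" if parental_consent else "needs_parental_consent"
    return "allowed"                                       # adults: full access

deadline = date(2023, 4, 30)  # the Garante's compliance deadline
print(access_decision(date(2013, 6, 1), False, as_of=deadline))  # blocked (age 9)
print(access_decision(date(2008, 1, 1), True,  as_of=deadline))  # allowed (age 15, with consent)
```

Self-reported birth dates are of course easy to falsify, which is why the Garante pairs this demand with a call for a genuine age-verification mechanism.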
Collection of Users’ Data
According to the company’s privacy policy, OpenAI’s ChatGPT collects an assortment of data. When a session starts, it asks you to log in or sign up, for example through your Gmail account. It collects your IP address, browser type, and whatever you type as input; in other words, it records data on your interaction with the service. It also collects data such as session time and cookies, some set through third parties, and this information may end up being shared with or sold to unspecified third parties.
This snapshot of the policy shows that a few items were added after the Garante’s order.
Conclusion
ChatGPT is an advanced tool that makes work a little easier, but anyone using such tools must stay aware of the information they are handing over. These AI bots are trained to assist people, yet some users provide sensitive information unknowingly, and young minds can be exposed to explicit content, which is why such bots need age limitations. Innovation will keep coming; it remains each individual’s responsibility to decide what is allowed on their connected devices. The Italian agency, for its part, has taken preventive measures to keep its citizens’ data safe while also weighing the adverse effects of such chatbots on young minds.

Executive Summary:
Recently, we came across AI-generated deepfake videos that have gone viral on social media, purporting to show Indian political figures Prime Minister Narendra Modi, Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar publicly apologizing for initiating "Operation Sindoor." Our research concluded that the videos are fake and use artificial intelligence tools to mimic the leaders' voices and appearances. The purpose of this report is to provide a clear understanding of the facts and to reveal the truth behind these viral videos.
Claim:
Multiple videos circulating on social media claim to show Prime Minister Narendra Modi, Union Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar publicly apologising for launching "Operation Sindoor." The videos, which are being circulated to suggest a political and diplomatic failure, show the leaders speaking passionately and expressing regret over the operation.



Fact Check:
Our research revealed that the widely shared videos are deepfakes made with artificial intelligence tools. They emerged after the 22 April 2025 Pahalgam terror attack and the subsequent "Operation Sindoor" carried out by the Indian Armed Forces, with the intent of spreading false propaganda and misinformation.
The first step in the fact-checking process was identifying key frames and suspicious visual clues in the videos, such as unnatural lip movements, misaligned audio, and facial distortions. We then submitted audio samples and video frames to Hive AI Content Moderation, a tool for detecting AI-generated content. After examining the audio, facial, and visual cues, Hive's deepfake detection system confirmed that all three videos were AI-generated.
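As an illustration of the frame-sampling step described above, here is a minimal sketch that uses OpenCV to pull periodic frames from a suspect clip for manual review. The actual submission to a detector is left as a commented placeholder, since Hive’s request format is not reproduced here, and the filenames are hypothetical.

```python
# Sketch of sampling frames from a suspect video for deepfake review.
# Frame extraction uses OpenCV; the detector call is only a placeholder.
import cv2

def extract_frames(video_path: str, every_n: int = 30) -> list:
    """Return every n-th frame of the video as an image array."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:               # end of video or read error
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

frames = extract_frames("suspect_apology_video.mp4")      # hypothetical filename
for i, frame in enumerate(frames):
    cv2.imwrite(f"frame_{i:03d}.jpg", frame)               # save for manual inspection
    # submit the saved frame to an AI-content detector (e.g. Hive) here
```

Sampling frames this way makes it easier to spot the lip-sync drift and facial distortions mentioned above before running any automated check.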
Below are three Hive Moderator result screenshots that clearly flag the videos as synthetic content, confirming that none of them are authentic or released by any official government source.



Conclusion:
The AI-generated videos claiming that Prime Minister Narendra Modi, Home Minister Amit Shah, and External Affairs Minister Dr. S. Jaishankar apologized for launching "Operation Sindoor" are completely untrue. These deepfake videos are part of a deliberate disinformation campaign to mislead the public and incite political unrest. No such apology has been made by the Indian government, and no such statement exists in any official or verified capacity. The public must exercise caution, avoid sharing unverified videos, and rely on credible fact-checking sources. Such disinformation not only erodes public trust but can also seriously affect national discourse and security.
- Claim: India's top executives apologize publicly for Operation Sindoor blunder.
- Claimed On: Social Media
- Fact Check: AI Misleads

Introduction
In today’s digital age everyone is online, and the healthcare sector worldwide is no exception. The latest victim of a data breach is Hong Kong healthcare provider OT&P Healthcare, which recently suffered a breach affecting about 100,000 patients and exposing their medical histories, causing concern for the patients and their families. The incident highlights the vulnerability of the healthcare sector and the importance of cybersecurity measures to protect sensitive information. This blog explores the data breach, its impact on patients and families, and best practices for safeguarding sensitive data.
Background: On 13 March 2023, cybercriminals deployed a variety of methods, including phishing attacks, malware, and the exploitation of software vulnerabilities, to breach and compromise sensitive patient data held by OT&P Healthcare. According to OT&P Healthcare, it is working with law enforcement and has hired a cybersecurity firm to investigate the incident and tighten its security procedures. As with other data breaches, the inquiry will most likely take some time to uncover the actual source and scope of the intrusion. Regardless of the cause, the event emphasises the significance of frequent cybersecurity assessments, vulnerability testing, and proactive data protection measures. Given these dangers, the healthcare sector must be especially careful in preserving patients’ personal and medical records, which are sensitive by nature.
Is confidentiality at stake due to data breaches?
Medical data breaches represent a huge danger to patients, with serious ramifications for their privacy, financial security, and physical health. Some of the potential hazards and effects of medical data breaches are as follows:
- Compromise of patient data: Medical data breaches can expose patients’ sensitive information, such as their medical history, diagnoses, treatments, and medication regimens. Such information is highly personal, and if it reaches the wrong hands it can harm someone’s reputation.
- Identity theft: Data stolen by cybercriminals may be used to open credit accounts and apply for loans. Identity theft can cause patients severe financial and psychological stress, since they may spend years attempting to rebuild their credit and regain their good name.
- Medical fraud: Medical data breaches can also result in medical fraud, which occurs when hackers use stolen medical information to bill insurance companies for services that were never performed or for bogus treatments or procedures. Medical fraud can cause financial losses for patients and insurance companies and can leave individuals receiving ineffective or risky medical care.
Impact on patients
A data breach causes not only financial loss but can also profoundly affect patients’ mental health and emotional well-being. Let’s look at some of the psychological impacts:
- Anxiety and Stress: Patients whose medical data has been affected may experience stress and anxiety as they worry that the exposed data could be misused.
- Loss of faith: Patients may lose faith in their healthcare providers if they believe their personal and medical information is not being properly protected. As a result, patients may be reluctant to disclose sensitive information to their healthcare professionals, compromising the quality of their medical care.
- Sense of Embarrassment: Patients may feel disgraced or ashamed if their sensitive medical information is revealed, particularly if it relates to a sensitive or stigmatised condition. This might lead to social isolation and a reluctance to seek further medical treatment.
- Post-Traumatic Stress Disorder (PTSD): Patients who have experienced a data breach may have PTSD symptoms such as nightmares, flashbacks, and avoidance behaviour. This can have long-term consequences for their mental health and quality of life.
Legal Implications of Data Breach
Patients have certain legal rights and remedies when a healthcare data breach occurs. Let’s have a look at them:
- Legal Liability: Healthcare providers have a legal obligation to protect data under various privacy and security laws. If they fail to take appropriate measures to protect patient data, they may be held legally liable for the resulting harm.
- Legal recourse: Patients affected by a healthcare data leak have the legal right to seek compensation and hold healthcare providers and organisations accountable. This could involve suing the practitioner or organisation responsible for the breach.
- Right to seek compensation: Patients who have suffered from the data loss are entitled to seek compensation.
- Notifications: As soon as a data breach takes place, it affects both the organisation and its customers. In this case, it is OT&P’s responsibility to notify its patients about the data breach and inform them of the consequences.
Takeaways from the OT&P Healthcare Data Breach
With data breaches growing in the healthcare industry, here are some lessons that can be learned from the Hong Kong incident:
- Cybersecurity: The OT&P Healthcare data breach points to the vital need to prioritise cybersecurity in healthcare. To protect themselves, hospitals and other healthcare organisations must keep their software up to date and use current security tooling to protect their data.
- Regular risk assessments: These assessments help find system vulnerabilities and security gaps, allowing healthcare providers and organisations to take the necessary actions to avoid data breaches and strengthen their cybersecurity defences.
- Staff Training: Healthcare workers should be taught cybersecurity best practices, such as detecting and responding to phishing attempts, handling sensitive data, and reporting suspected security breaches. This training should be ongoing to keep workers up to date on the newest cybersecurity trends and threats.
- Incident Response Strategy: Healthcare providers and organisations should have an incident response plan in place to deal with data breaches and other security incidents. The plan should include protocols for reporting incidents, containing the breach, and alerting patients and the relevant authorities.
Conclusion
The recent data breach at the Hong Kong healthcare provider has affected not only patients’ data but also shaken their trust. As we continue to rely on digital technology for medical records and healthcare delivery, it is essential that healthcare providers and organisations take proactive steps to protect patient data from cyber-attacks and data breaches.