#FactCheck - Out-of-Context Clip of PM Modi Misused to Claim He Insulted India
Executive Summary:
A short video clip of Prime Minister Narendra Modi is going viral on social media. In the clip, he can be heard saying, “What sins did we commit in our previous life that we were born in India?” Users are sharing the video claiming that the Prime Minister insulted India and its people during a foreign visit. However, research by CyberPeace found that the claim is misleading: the viral clip is taken out of context from a longer speech delivered by Modi during his visit to Shanghai, China, in 2015.
Claim:
A Facebook user named “Bittu Yadav” shared the reel, portraying the statement as anti-India. The caption reads: “Look at this, and you supporters—see how your ‘leader’ is praising the country.”
Post link and archive link:

Fact Check:
To verify the claim, we searched relevant keywords on Google and found the full video uploaded on May 16, 2015, on the official YouTube channel of the Bharatiya Janata Party. The video shows Prime Minister Narendra Modi addressing the Indian community in Shanghai, China.

In the 57-minute speech, at around 51 minutes 25 seconds, Modi was referring to the pessimistic atmosphere in India before 2014. He said: “Within a year… people used to say, ‘Leave it, nothing will happen now. Who knows what sins we committed in our previous life that we were born in India’… From that mindset, today the world says that if there is a country growing at the fastest pace, it is India.”
This clearly shows that Modi was citing a past sentiment to highlight how perceptions about India have changed over time, not expressing his personal view. Media reports from his May 2015 China visit also noted that he addressed around 5,000 members of the Indian community in Shanghai, where he spoke about India’s economic growth and initiatives like “Make in India.”

Conclusion:
The viral claim is false. The video has been edited and shared out of context. In reality, Prime Minister Narendra Modi was referring to a past mindset before 2014 while highlighting the change in India’s global perception.

Introduction
As technology develops, scammers evolve with it: their methods and plans for deceiving people have changed with the rise of AI, and voice cloning schemes are one such issue that has recently come to light. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. This blog looks at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film produced with artificial intelligence (AI) that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something that never happened in audio or visual form; deepfake voice impersonations of public figures such as the American President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in deepfake voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities in order to mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake AI voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced with AI voice deepfake technology to support false claims or accusations. This is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims. Organisations and the general public must be informed of this technology’s risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. How can one protect oneself from deepfake voice scams? Here are some guidelines to help you spot and avoid them:
- Steer clear of unsolicited calls: One of the most common tactics used by deepfake voice con artists is making unsolicited phone calls while pretending to be bank personnel or government officials.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? If something doesn’t sound right, it could be a deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, it is crucial to verify the caller’s identity. When in doubt, ask for their name, job title, and employer, and then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities. These may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, the field of AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it may also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. As AI voice deepfake technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research is also needed to develop efficient techniques to identify and control the risks related to this technology. We must deploy AI responsibly and ethically to ensure that AI voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
Misinformation regarding health is sensitive and can have far-reaching consequences. These include its effect on personal medical decisions taken by individuals, lack of trust in conventional medicine, delay in seeking treatments, and even loss of life. The fast-paced nature and influx of information on social media can aggravate the situation further. Recently, a report titled Health Misinformation Vectors in India was presented at the Health of India Summit, 2024. It provided certain key insights into health-related misinformation circulating online.
The Health Misinformation Vectors in India Report
The analysis was conducted by doctors at First Check, a global health fact-checking initiative, alongside DataLEADS, a Delhi-based digital media and technology company. The report covers health-related social media content posted online from October 2023 to November 2024. It finds that, among all health scares, misinformation regarding reproductive health, cancer, vaccines, and lifestyle diseases such as diabetes and obesity is the most prominent type spread through social media. Misinformation regarding reproductive health includes illegal abortion methods that often go unchecked and even tips on conceiving a male child, among other things.
To combat this misinformation, the report encourages stricter regulations for health-related content on digital media, incorporating technology for health literacy and misinformation management into public health curricula, and recommends that tech platforms work on algorithms that prioritise credible information and fact-checks. Doctors state that people affected by life-threatening diseases are particularly vulnerable to such misinformation, as they are desperate to find treatment options that give themselves and their family members a chance at life. In a diverse society, the lack of clear and credible information, limited access to or awareness of tools that cross-check content, and low digital literacy push people towards alternative sources of information, which also fosters a sense of disengagement among the public. The diseases mentioned in the report as prone to misinformation are life-altering and require attention from healthcare professionals.
CyberPeace Outlook
Globally, there are cases of medically unqualified social media influencers who spread false or misleading information regarding various health matters. The topics covered are mostly associated with stigma and are still undergoing research; this gap allows misinformation to flourish. One example is the misinformation regarding PCOS (Polycystic Ovary Syndrome) circulating online.
In the midst of all of this, YouTube has released a new feature aimed at combating health misinformation, trying to bridge the gap between healthcare professionals and Indians who look for trustworthy health-related information online. The initiative allows doctors, nurses, and other healthcare professionals to apply for a health information source licence, which labels all their informative videos as coming from a healthcare professional. Earlier, features such as the health source information panel and health content shelves were available only to health organisations; this step broadens the scope to licence verification for individual healthcare professionals.
As digital literacy continues to grow, methods of seeking credible information, especially regarding sensitive topics such as health, require a combined effort on the part of all the stakeholders involved. We need a robust strategy for battling health-related misinformation online, including more awareness programmes and proactive participation from the consumers as well as medical professionals regarding such content.
References
- https://timesofindia.indiatimes.com/india/misinformation-about-cancer-reproductive-health-is-widespread-in-india-impacting-medical-decisions-says-report/articleshow/115931612.cms
- https://www.ndtv.com/india-news/cancer-misinformation-prevalent-in-india-trust-in-medicine-crucial-report-7165458
- https://www.newindian.in/ai-driven-health-misinformation-poses-threat-to-indias-public-health-report/
- https://www.etvbharat.com/en/!health/youtube-latest-initiative-combat-health-misinformation-india-enn24121002361
- https://blog.google/intl/en-in/products/platforms/new-ways-for-registered-healthcare-professionals-in-india-to-reach-people-on-youtube/
- https://www.bbc.com/news/articles/ckgz2p0999yo

Introduction:
This op-ed sheds light on the perspectives of the US and China regarding cyber espionage and analyses China's response to the US accusations of cyber espionage.
What is Cyber espionage?
Cyber espionage or cyber spying is the act of obtaining personal, sensitive, or proprietary information from individuals without their knowledge or consent. In an increasingly transparent and technological society, the ability to control the private information an individual reveals on the Internet and the ability of others to access that information are a growing concern. This includes storage and retrieval of e-mail by third parties, social media, search engines, data mining, GPS tracking, the explosion of smartphone usage, and many other technology considerations. In the age of big data, there is a growing concern for privacy issues surrounding the storage and misuse of personal data and non-consensual mining of private information by companies, criminals, and governments.
Cyber espionage aims for economic, political, and technological gain. For example, the Stuxnet (2010) cyber-attack by the US and its ally Israel targeted Iran’s nuclear facilities. Three espionage tools connected to Stuxnet were discovered, namely Gauss, Flame and Duqu, used for stealing data such as passwords, screenshots, and Bluetooth and Skype communications.
Cyber espionage is one of the most significant and intriguing international challenges globally. Many nations, including the US and China, have created their own definitions and have long struggled over cyber espionage norms.
The US Perspective
In 2009, US officials (along with other allied countries) stated that cyber espionage was acceptable if it safeguarded national security, although they condemned economically motivated cyber espionage. The Director of National Intelligence likewise said in 2013 that foreign intelligence capabilities are not used to steal foreign companies' trade secrets to benefit domestic firms. This stance is consistent with the Economic Espionage Act (EEA) of 1996, particularly Section 1831, which prohibits economic espionage, including the theft of a trade secret that "will benefit any foreign government, foreign agent or foreign instrumentality."
Second, the US advocates for cybersecurity market standards and strongly opposes transferring personal data extracted from the US Office of Personnel Management (OPM) to cybercrime markets. Furthermore, China has been reported to sell OPM data on illicit markets. It became a grave concern for the US government when the Chinese government managed to acquire sensitive details of 22.1 million US government workers through cyber intrusions in 2014.
Third, cyber espionage is considered acceptable unless it is utilised for doxing, which involves disclosing personal information about someone online without their consent and using it as a tool for political influence operations. Western academics and scholars have, however, endeavoured to distinguish doxing from whistleblowing. They argue that whistleblowing, exemplified by events like the Snowden leaks and the Vault 7 disclosures, serves the interests of US citizens. In the US, regarded as an open society, certain disclosures are not merely encouraged but required by mandate.
Fourth, the US holds that there should be no cyber espionage against critical infrastructure during peacetime. According to the US, there are 16 critical infrastructure sectors, including chemical, nuclear, energy, defence, food, and water. These sectors are considered essential, and any disruption or harm to them would impact national security, public health, and economic security.
The US concern regarding China’s cyber espionage
According to James Lewis, a senior vice president at the Center for Strategic and International Studies (CSIS), the US loses between $20 billion and $30 billion annually to China’s cyber espionage. The 2018 US Trade Representative (USTR) Section 301 report highlighted instances where the Chinese government and executives from Chinese companies engaged in clandestine cyber intrusions to obtain commercially valuable information from US businesses, such as in 2018, when officials from China’s Ministry of State Security stole trade secrets from GE Aviation and other aerospace companies.
China's response to the US accusations of cyber espionage
China's perspective on cyber espionage is outlined in its 2014 anti-espionage law, which was revised in 2023. Article 1 of this legislation is formulated to prevent, halt, and punish espionage in order to maintain national security. Article 4 addresses acts of espionage and does not differentiate between state-sponsored cyber espionage for economic purposes and state-sponsored cyber espionage for national security purposes. Unlike the US, China also does not draw a clear line between government-to-government hacking (spying) and government-to-corporate-sector hacking; the distinction is less apparent in China because of its strong state-owned enterprise (SOE) sector. In the US, by contrast, military spying is considered part of the national interest, while corporate spying is a crime.
China asserts that the US has established cyber norms concerning cyber espionage to normalise public attribution as acceptable conduct, by targeting China for cyber operations, imposing sanctions on accused Chinese individuals, and making political accusations, such as blaming China and Russia for meddling in US elections. Despite this, Washington has never taken responsibility for the infamous Flame and Stuxnet cyber operations, which were widely recognised as part of a broader collaborative initiative between the US and Israel known as Operation Olympic Games. Additionally, the US leads in surveillance activities conducted against China, Russia, German Chancellor Angela Merkel, the United Nations (UN) Secretary-General, and several French presidents. Surveillance programs such as Irritant Horn, Stellar Wind, Bvp47, the Hive, and PRISM are recognised as tools used by the US to monitor both allies and adversaries to maintain global hegemony.
China urges the US to cease what it calls a smear campaign associating the Volt Typhoon cyberattacks with Chinese cyber espionage, citing the report “Volt Typhoon: A Conspiratorial Swindling Campaign Targets with U.S. Congress and Taxpayers Conducted by U.S. Intelligence Community”, published on 15 April by China's National Computer Virus Emergency Response Centre and the 360 Digital Security Group. According to the report, ‘Volt Typhoon’ is a ransomware cybercriminal group that self-identifies as ‘Dark Power’ and is not affiliated with any state or region. The report claims that multiple US cybersecurity authorities collaborated to fabricate the story simply to secure bigger budgets from Congress, while Microsoft and other US cybersecurity firms sought larger contracts from those authorities. In its telling, “Volt Typhoon” is a conspiratorial swindling campaign with two objectives: amplifying the “China threat theory” and extracting money from the US Congress and taxpayers.
Beijing condemned the US claims of cyber espionage as lacking solid evidence. China, in turn, blames the US for economic espionage, citing a European Parliament report that the National Security Agency (NSA) was involved in assisting Boeing in beating Airbus for a multi-billion-dollar contract. Furthermore, Brazilian President Dilma Rousseff also accused the US authorities of spying on the state-owned oil company Petrobras for economic reasons.
Conclusion
In 2015, the US and China marked a milestone when Presidents Xi Jinping and Barack Obama signed an agreement committing that neither country's government would conduct or knowingly support cyber-enabled theft of trade secrets, intellectual property, or other confidential business information to grant competitive advantages to firms or commercial sectors. However, the China Cybersecurity Industry Alliance (CCIA) published a report titled 'US Threats and Sabotage to the Security and Development of Global Cyberspace' in 2024, highlighting the US's escalating cyber-attack and espionage activities against China and other nations. On the other side, there has been a considerable increase in the volume and sophistication of Chinese hacking since 2016; according to a survey by the Center for Strategic and International Studies, of 224 cyber espionage incidents reported since 2000, 69% occurred after Xi assumed office. China and the US must therefore address cybersecurity issues through dialogue and cooperation, utilising bilateral and multilateral agreements.