#FactCheck - AI-Manipulated Clip Misrepresents PM Modi’s Remarks on Iran-Israel Conflict
Executive Summary
Amid the ongoing conflict involving the US, Israel, and Iran, a video of Indian Prime Minister Narendra Modi is being widely circulated on social media. In the clip, he is allegedly heard supporting Israel and calling Iran a “terrorist state.” The video also appears to show him speaking about the idea of “Akhand Bharat.” Many users are sharing this video as genuine. However, detailed research by CyberPeace found that the claim is false: the viral video is a deepfake created using AI technology.
Claim:
A Facebook page named “Pushpendra Kulshreshtha” shared the video on March 23, 2026, with a caption suggesting that PM Modi made strong remarks in support of Israel and against Iran.

Fact Check:
To verify the claim, we first conducted a keyword search to find any credible reports or official statements where PM Modi made such remarks. However, no reliable news reports or authentic videos supporting the claim were found. We then extracted keyframes from the viral video and performed a reverse image search using Google Lens. This led us to the original video posted on the X (formerly Twitter) handle of ANI on March 12, 2026.
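Matching extracted keyframes against candidate source footage can be automated with a perceptual hash. The sketch below is purely illustrative (it is not CyberPeace’s actual tooling): it implements a difference hash (dHash) over plain grayscale pixel grids, where two frames of the same scene produce nearly identical hashes even after light re-encoding. In practice, frames would come from a video decoder such as OpenCV or ffmpeg; here the inputs are synthetic arrays.

```python
def dhash(pixels, hash_size=8):
    """Difference hash of a grayscale image given as a list of rows of
    pixel intensities (0-255). The image is downsampled to
    (hash_size+1) x hash_size with nearest-neighbour sampling, then each
    bit records whether a pixel is brighter than its right neighbour."""
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means visually
    similar frames."""
    return bin(a ^ b).count("1")

# A frame and a slightly brightened copy hash almost identically,
# while an unrelated gradient pattern does not.
frame = [[(r * c) % 256 for c in range(32)] for r in range(32)]
near  = [[min(255, v + 3) for v in row] for row in frame]
other = [[(r + c) * 4 % 256 for c in range(32)] for r in range(32)]

print(hamming(dhash(frame), dhash(near)))   # small distance
print(hamming(dhash(frame), dhash(other)))  # larger distance
```

Because dHash compares each pixel only to its neighbour, uniform brightness or compression changes barely affect it, which is why this family of hashes is commonly used for near-duplicate frame matching.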

The visuals, including PM Modi’s attire and the stage setup, matched the viral clip—indicating that the fake video was created using this original footage. However, in the authentic video, PM Modi did not make any statements about Iran, Israel, or “Akhand Bharat” as seen in the viral version. In the original footage, PM Modi is seen addressing the NXT Summit in Delhi, where he spoke about the global energy crisis arising from ongoing conflicts and highlighted the expansion of LPG and PNG facilities in India. Additionally, a customised keyword search led us to a press release issued by the Prime Minister's Office regarding his address at the summit. The statement heard in the viral clip was not found there either.

Conclusion:
The viral video of PM Modi is a deepfake. He did not make any statement calling Iran a “terrorist state” or expressing support for Israel in the manner shown. The original video is from a summit held in Delhi and has been manipulated using AI to spread misleading claims.

Introduction
The rapid advancement of artificial intelligence (AI) technology has sparked intense debate and concern about its potential impact on humanity. Sam Altman, CEO of the AI research laboratory OpenAI and widely known as the father of the AI chatbot ChatGPT, holds a complex position, recognising both the existential risks AI poses and its potential benefits. On a world tour to raise awareness of AI risks, Altman has advocated for global cooperation to establish responsible guidelines for AI development. As these systems grow more sophisticated, they raise many ethical questions, including whether they will ultimately save or destroy humanity.
Addressing Concerns
Altman engages with various stakeholders, including protesters who voice concerns about the race toward artificial general intelligence (AGI). Critics argue that focusing on safety rather than pushing AGI development would be a more responsible approach. Altman acknowledges the importance of safety progress but believes capability progress is necessary to ensure safety. He advocates for a global regulatory framework similar to the International Atomic Energy Agency, which would coordinate research efforts, establish safety standards, monitor computing power dedicated to AI training, and possibly restrict specific approaches.
Risks of AI Systems
While AI holds tremendous promise, it also presents risks that must be carefully considered. One of the major concerns is the development of artificial general intelligence (AGI) without sufficient safety precautions. AGI systems with unchecked capabilities could potentially pose existential risks to humanity if they surpass human intelligence and become difficult to control. These risks include the concentration of power, misuse of technology, and potential for unintended consequences.
There are also fears surrounding AI systems’ impact on employment. As machines become more intelligent and capable of performing complex tasks, there is a risk that many jobs will become obsolete. This could lead to widespread unemployment and economic instability if steps are not taken to prepare for this shift in the labour market.
While these risks are certainly cause for concern, it is important to remember that AI systems also have tremendous potential to do good in the world. By carefully designing these technologies with ethics and human values in mind, we can mitigate many of the risks while still reaping the benefits of this exciting new frontier in technology.

Open AI Systems and Chatbots
Open AI systems like ChatGPT and chatbots have gained popularity due to their ability to engage in natural language conversations. However, they also come with risks. The reliance on large-scale training data can lead to biases, misinformation, and unethical use of AI. Ensuring the safety and responsible development of open AI systems is essential to mitigate potential harm and maintain public trust.
The Need for Global Cooperation
Sam Altman and other tech leaders emphasise the need for global cooperation to address the risks associated with AI development. They advocate for establishing a global regulatory framework for superintelligence. Superintelligence refers to AGI operating at an exceptionally advanced level, capable of solving complex problems that have eluded human comprehension. Such a framework would coordinate research efforts, enforce safety standards, monitor computing power, and potentially restrict specific approaches. International collaboration is essential to ensure responsible and beneficial AI development while minimising the risks of misuse or unintended consequences.
Can AI Systems Make the World a Better Place? The Benefits of AI Systems
AI systems hold many benefits that can greatly improve human life. One of the most significant advantages of AI is its ability to process large amounts of data at a rapid pace. In industries such as healthcare, this has allowed for faster diagnoses and more effective treatments. Another benefit of AI systems is their capacity to learn and adapt over time. This allows for more personalised experiences in areas such as customer service, where AI-powered chatbots can provide tailored solutions based on an individual’s needs. Additionally, AI can potentially increase efficiency in various industries, from manufacturing to transportation. By automating repetitive tasks, human workers can focus on higher-level tasks that require creativity and problem-solving skills. Overall, the benefits of AI systems are numerous and promising for improving human life in various ways.
We must remember the impact of AI on education. It has already started to show its potential by providing personalised learning experiences for students at all levels. With the help of AI-driven systems like intelligent tutoring systems (ITS), adaptive learning technologies (ALT), and educational chatbots, students can learn at their own pace without feeling overwhelmed or left behind.
While there are certain risks associated with the development of AI systems, there are also numerous opportunities for them to make our world a better place. By harnessing the power of these technologies for good, we can create a brighter future for ourselves and generations to come.

Conclusion
The AI revolution presents both extraordinary opportunities and significant challenges for humanity. The benefits of AI, when developed responsibly, have the potential to uplift societies, improve quality of life, and address long-standing global issues. However, the risks associated with AGI demand careful attention and international cooperation. Governments, researchers, and industry leaders must work together to establish guidelines, safety measures, and ethical standards to navigate the path toward AI systems that serve humanity’s best interests and safeguard against potential risks. By taking a balanced approach, we can strive for a future where AI systems save humanity rather than destroy it.
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public figures for deceitful ends. A deepfake video of cricket star Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sports icon to issue a warning against the widespread misuse of the technology.
Scenario of Deepfake
In the deepfake video, Sachin Tendulkar appears to endorse a gaming app called Skyward Aviator Quest. The video’s startling quality led some viewers to assume that the cricket legend was genuinely backing it. Tendulkar, however, took to social media to emphasise that the video is fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his worry about the exploitation of technology and advised people to report such videos, advertisements, and applications that spread disinformation. The incident underscores the importance of raising awareness and staying vigilant about the authenticity of material circulated on social media platforms.
The Warning Signs
The deepfake video raises questions not just for its lifelike depiction of Tendulkar, but also for the content it promotes. An endorsement of gaming software that purports to help people make money is a significant red flag, especially when it appears to come from a well-known figure. This underscores how deepfakes can be exploited for financial gain, and why information that seems too good to be true deserves extra scrutiny.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to be aware of potential signals of manipulation. Here are some pointers to help you spot deepfake videos:
- Facial Movements and Expressions: Look for unnatural facial movements and expressions, as well as lip-sync difficulties.
- Body Motions and Posture: Take note of any awkward body movements or inconsistencies in the person’s posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and lip movements.
- Background and Content: Consider the video’s context, especially if it shows a well-known figure endorsing something in an unexpected way.
- Source Verification: Verify the legitimacy of the video by checking the official channels or accounts of the person shown.
Conclusion
The spread of deepfake videos threatens the credibility of social media content. Sachin Tendulkar’s response to the deepfake in which he appears is a reminder for users to remain vigilant and report questionable material. As the technology advances, individuals and authorities must collaborate to counter the misuse of AI-generated content and safeguard the integrity of online information.
References
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html

Introduction
When a tragedy strikes, moments are fragile, people are vulnerable, emotions run high, and every second counts. In such critical situations, information becomes as crucial as food, water, shelter, and medication, and any scrap of it, verified or not, can trigger stampedes and chaos. Alongside the tragedy, whether natural or man-made, emerges another threat: misinformation. People, desperate for answers, cling to whatever they can find.
Tragedies can take many forms. These may include natural disasters, mass accidents, terrorist activities, or other emergencies. During the 2023 earthquakes in Turkey, misinformation spread on social media claiming that the Yarseli Dam had cracked and was about to burst. People believed it and began migrating from the area. Panic followed, and search and rescue teams stopped operations in that zone. Precious hours were lost. Later, it was confirmed to be a rumour. By then, the damage was already done.
Similarly, after the recent plane crash in Ahmedabad, India, numerous rumours and WhatsApp messages spread rapidly. One message claimed to contain the investigation report on the crash of Air India flight AI-171. It was later called out by the Press Information Bureau (PIB) and declared fake.
These examples show how misinformation can take control of already painful moments. During emergencies, when emotions are intense and fear is widespread, false information spreads faster and hits harder. Some people share it unknowingly, while others do so to gain attention or push a certain agenda. But for those already in distress, the effect is often the same. It brings more confusion, heightens anxiety, and adds to their suffering.
Understanding Disasters and the Role of Media in Crisis
A disaster can be defined as a natural or human-caused event that transforms the usual life of a society into a crisis far beyond its existing response capacity. Its effects range from minimal to severe, from mere disruption of daily routines to an inability to meet basic needs such as food, water, and shelter. Hence, a disaster is not just a sudden event; it becomes a disaster when it overwhelms a community’s ability to cope.
To cope with such situations, there is an organised approach called Disaster Management. It includes preventive measures, minimising damages and helping communities recover. Earlier, public institutions like governments used to be the main actors in disaster management, but today, with every small entity having a role, academic institutions, media outlets and even ordinary people are involved.
Communication is a vital element of disaster management; done correctly, it saves lives. Vulnerable people need to know what is happening, what they should do, and where to seek help. But in today’s era of instantaneous communication, this also carries risk.
Research shows that the media often fails to focus on disaster preparedness. For example, studies found that during the 2019 Istanbul earthquake, the media focused more on dramatic scenes than on educating people. Similar trends were seen during the 2023 Turkey earthquakes. Rather than helping people prepare or stay calm, much of the media coverage amplified fear and sensationalised suffering. This shows a shift from preventive, helpful reporting to reactive, emotional storytelling. In doing so, the media sometimes fails in its duty to support resilience and, worse, can become a channel for spreading misinformation during already traumatic events. However, countering misinformation is not just a voluntary responsibility; spreading it is penalised under the official disaster management framework. Section 54 of the Disaster Management Act, 2005 states that "Whoever makes or circulates a false alarm or warning as to disaster or its severity or magnitude, leading to panic, shall, on conviction, be punishable with imprisonment which may extend to one year or with a fine."
AI as a Tool in Countering Misinformation
AI has emerged as a powerful mechanism to fight against misinformation. AI technologies like Natural Language Processing (NLP) and Machine Learning (ML) are effective in spotting and classifying misinformation with up to 97% accuracy. AI flags unverified content, leading to a 24% decrease in shares and 7% drop in likes on platforms like TikTok. Up to 95% fewer people view content on Facebook when fact-checking labels are used. Facebook AI also eliminates 86% of graphic violence, 96% of adult nudity, 98.5% of fake accounts and 99.5% of content related to terrorism. These tools help rebuild public trust in addition to limiting the dissemination of harmful content. In 2023, support for tech companies acting to combat misinformation rose to 65%, indicating a positive change in public expectations and awareness.
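The NLP classifiers mentioned above typically learn word-level statistics from labelled examples. As a purely illustrative sketch (the training sentences and labels below are invented, and this is not the model any of the cited platforms actually use), a minimal Naive Bayes classifier over bag-of-words features looks like this:

```python
import math
from collections import Counter

def train(examples):
    """Train a Naive Bayes model from (text, label) pairs.
    Returns per-label word counts, document counts, and the vocabulary."""
    counts, docs = {}, Counter()
    for text, label in examples:
        docs[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, docs, vocab

def classify(text, counts, docs, vocab):
    """Return the most probable label, using Laplace smoothing so
    unseen words do not zero out a class."""
    total_docs = sum(docs.values())
    best, best_score = None, float("-inf")
    for label, wc in counts.items():
        score = math.log(docs[label] / total_docs)       # class prior
        denom = sum(wc.values()) + len(vocab)            # smoothed total
        for word in text.lower().split():
            score += math.log((wc[word] + 1) / denom)    # word likelihood
        if score > best_score:
            best, best_score = label, score
    return best

# Tiny, made-up training set for illustration only.
train_data = [
    ("dam has cracked evacuate now forward this", "misinfo"),
    ("shocking secret report leaked forward to everyone", "misinfo"),
    ("official advisory issued by the disaster authority", "credible"),
    ("rescue teams deployed official helpline numbers released", "credible"),
]
model = train(train_data)
print(classify("leaked report forward this now", *model))           # misinfo
print(classify("official helpline released by authority", *model))  # credible
```

Production systems use far richer features and much larger models, but the underlying idea is the same: words strongly associated with past false alerts push a new message toward the "flag for review" class.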
How to Counter Misinformation
Experts should step up in such situations. Social media has allowed many so-called experts to spread fake information without any real knowledge, research, or qualification. In such conditions, real experts such as authorities, doctors, scientists, public health officials, researchers, etc., need to take charge. They can directly address the myths and false claims and stop misinformation before it spreads further and reduce confusion.
Responsible journalism is crucial during crises. In times of panic, people look at the media for guidance. Hence, it is important to fact-check every detail before publishing. Reporting that is based on unclear tips, social media posts, or rumours can cause major harm by inciting mistrust, fear, or even dangerous behaviour. Cross-checking information, depending on reliable sources and promptly fixing errors are all components of responsible journalism. Protecting the public is more important than merely disseminating the news.
Focus on accuracy rather than speed. News spreads in a blink in today's world. Media outlets and influencers often come under pressure to publish it first. But in tragic situations like natural disasters and disease outbreaks, rushing to come first is not as important as accuracy is, as a single piece of misinformation can spark mass-scale panic and can slow down emergency efforts and lead people to make rash decisions. Taking a little more time to check the facts ensures that the information being shared is helpful, not harmful. Accuracy may save numerous lives during tragedies.
Misinformation spreads quickly, and it can only be contained if people learn to critically evaluate what they hear and see. This entails being able to spot biased or deceptive headlines, cross-check claims, and identify reliable sources. Digital literacy is of utmost importance; it makes people less susceptible to fear-based rumours, conspiracy theories, and hoaxes.
Disaster preparedness programs should include awareness about the risks of spreading unverified information. Communities, schools and media platforms must educate people on how to respond responsibly during emergencies by staying calm, checking facts and sharing only credible updates. Spreading fake alerts or panic-inducing messages during a crisis is not only dangerous, but it can also have legal consequences. Public communication must focus on promoting trust, calm and clarity. When people understand the weight their words can carry during a crisis, they become part of the solution, not the problem.
References:
- https://dergipark.org.tr/en/download/article-file/3556152
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
- https://english.mathrubhumi.com/news/india/fake-whatsapp-message-air-india-crash-pib-fact-check-fcwmvuyc