#FactCheck - Viral Video Falsely Claims RSS Chief Mohan Bhagwat Called for ‘Saffronisation’ of Indian Army
A video purportedly showing Rashtriya Swayamsevak Sangh (RSS) chief Mohan Bhagwat making remarks about the “saffronisation” of the Indian Army has been widely circulated on social media. The clip claims that Bhagwat called for the removal of non-Hindus from the armed forces and linked the issue to future political leadership changes in the country.
Claim
In the viral video, Bhagwat is allegedly heard saying that unless more than 50 percent of non-Hindus are removed from the Indian Army by 2028, Prime Minister Narendra Modi would be replaced by Uttar Pradesh Chief Minister Yogi Adityanath. The clip further attributes another statement to him, suggesting that he would resign if the Prime Minister were to demand Nitish Kumar’s resignation. By the time of publication, the video had been viewed over 7,000 times.
However, a verification by the Cyber Peace Foundation has established that the video is misleading and has been digitally manipulated.

Fact Check:
A reverse image search directed the Desk to a video uploaded on CNN-News18’s official YouTube channel on December 21, 2025. The footage was found to be a longer version of the viral clip, recorded at the RSS centenary event held in Kolkata on the same date. A comparison of the two videos confirmed that the background visuals, stage setup and camera angles were identical.
However, a careful review of the original CNN-News18 video revealed that Mohan Bhagwat did not make any of the statements attributed to him in the viral clip.
In his original address, Bhagwat spoke about unity and referred to concerns over increasing atrocities against Hindus in Bangladesh. He made no reference to the Indian Army, nor did he comment on its composition or alleged saffronisation. Here is the link to the original video: https://www.youtube.com/watch?v=KnsAUGfBQBk&t=1s

In the next phase of the investigation, the audio track from the viral video was extracted and analysed using the AI audio detection tool Aurigin. The tool’s assessment indicated that the voice heard in the clip was artificially generated, confirming that the audio did not originate from the original speech.
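The extraction step described above can be sketched in a few lines. This is a hypothetical illustration only: it assumes the widely used ffmpeg command-line tool is installed, and since the Aurigin detection tool’s interface is not documented here, only the audio-extraction stage is shown; file names are invented.

```python
import subprocess


def build_extract_cmd(video_path: str, audio_path: str) -> list[str]:
    """Build an ffmpeg command that drops the video stream (-vn)
    and saves the audio track as uncompressed 16-bit PCM WAV."""
    return [
        "ffmpeg", "-i", video_path,  # input video file
        "-vn",                       # discard the video stream
        "-acodec", "pcm_s16le",      # 16-bit PCM audio codec
        "-y",                        # overwrite output if it exists
        audio_path,
    ]


if __name__ == "__main__":
    # Hypothetical file names for illustration.
    cmd = build_extract_cmd("viral_clip.mp4", "viral_clip.wav")
    # Actually running the command requires ffmpeg on PATH:
    # subprocess.run(cmd, check=True)
    print(" ".join(cmd))
```

The resulting WAV file would then be submitted to an audio-deepfake detection service for analysis.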

Conclusion
The claim that RSS chief Mohan Bhagwat called for the saffronisation of the Indian Army is false. The verification found that the viral video was digitally manipulated: it uses genuine footage from an RSS centenary event but pairs it with an AI-generated audio track. The altered video was shared online to mislead viewers by falsely attributing to Bhagwat statements he never made.
Introduction
India has long been celebrated as a land of abundance, once known as the ‘golden bird’ that drew the world with its prosperity and wisdom. Today, as every nation redefines its strength through advancements across sectors, including technology, India is preparing for a powerful transformation of its own. “Viksit Bharat 2047” is an initiative aimed at realising India’s aspiration of becoming a developed nation by the centennial year of its independence. The country’s growth story is shifting as it pursues development in every field, both in generating economic growth and in achieving technological breakthroughs across industries.
Today, when technology touches every aspect of our lives, cybersecurity becomes a key area that will significantly drive progress under the Viksit Bharat vision, especially with the rise of emerging technologies such as AI, quantum computing, cryptography, 5G and 6G, robotics and automation, the Internet of Things (IoT), and augmented and virtual reality (AR/VR).
Key Initiatives Taken by the Centre
Indian Cyber Crime Coordination Centre:
The Indian Cybercrime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. I4C is actively working on initiatives to combat emerging threats in cyberspace, and it has become a strong pillar of India’s cybersecurity and cybercrime prevention. The ‘National Cyber Crime Reporting Portal’, equipped with a 24x7 cybercrime helpline number 1930, is one of the key components of the I4C.
Recently under I4C, key initiatives were launched to strengthen cybersecurity. The Cyber Fraud Mitigation Centre (CFMC) has been incorporated to bring together banks, financial institutions, telecom companies, Internet Service Providers, and law enforcement agencies on a single platform to tackle online financial crimes efficiently. The Cyber Commandos Program will establish a specialised wing of trained Cyber Commandos in states, Union Territories, and Central Police Organisations to counter rising cyber threats. The Samanvay platform, a web-based Joint Cybercrime Investigation Facility System, has been introduced as a one-stop data repository for cybercrime to foster data sharing and collaboration. The Suspect Registry Portal, connected to the National Cybercrime Reporting Portal (NCRP), has been designed to track cybercriminals and strengthen fraud risk management.
India’s AI Mission:-
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission. The mission aims to strengthen the Indian AI innovation ecosystem by democratising computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects, and bolstering ethical AI. Through the IndiaAI Mission, the government is facilitating the development of India’s own foundational models, including Large Language Models (LLMs) and problem-specific AI solutions tailored to Indian needs.
The mission is implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components, such as IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI. The main objective is to create and nurture an ecosystem for India’s AI innovation.
Startup India:-
With more than 1.59 lakh startups recognised by the Department for Promotion of Industry and Internal Trade (DPIIT) as of January 15, 2025, India has firmly established itself as the third-largest startup ecosystem in the world. Startup India is a flagship initiative launched by the Government of India on 16th January 2016 to build a strong ecosystem for nurturing innovation and startups in the country, which will drive economic growth and generate large-scale employment opportunities.
Key Regulations:-
The Centre, in order to better regulate the cyber domain, has come up with significant regulations. To protect the personal data of citizens, the Digital Personal Data Protection Act, 2023 has been enacted. The Intermediary Guidelines, 2021 lay down obligations on social media platforms and intermediaries to ensure accountability and user safety. The Telecommunications Act, 2023 modernises the legal framework governing telecom services. Further, the Promotion and Regulation of Online Gaming Bill, 2025, passed by Parliament on 21st August 2025, aims to regulate the online gaming sector. In addition, CERT-In issues guidelines and advisories from time to time to strengthen cybersecurity.
CyberPeace Outlook
CyberPeace has been at the forefront in transforming policy, technology, and ethical growth in the cyber landscape through its key initiatives. In 2023, CyberPeace hosted the Global CyberPeace Summit in collaboration with Civil 20 and G20 India, with knowledge support from the United Service Institution of India and participation from MeitY, NCIIPC, CERT-In, Zoom, Meta, InMobi, ICANN, Internet Society, MANRS, APNIC, and leading universities, which helped shape critical global conversations on trust, safety, and collaboration in cyberspace.
Viksit Bharat 2047 is more than just a vision for economic success; it is a pledge to create a nation that is technologically secure, resilient, and globally competitive. In this journey, cybersecurity will be at the heart of India's digital reboot, securing its innovation, empowering its citizens, and ensuring its future.
References
- https://www.cyberpeace.org/resources/blogs/i4c-foundation-day-celebration-shri-amit-shah-launches-key-initiatives-to-tackle-cybercrime
- https://www.cyberpeace.org/resources/blogs/indiaai-mission
- https://bharatarticles.com/viksit-bharat-2047-vision-challenges-and-roadmap-to-a-developed-india/
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2012355
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2093125

Introduction
Twitter Inc.’s appeal against account-blocking orders issued by the Ministry of Electronics and Information Technology was dismissed by a single-judge bench of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media corporation had approached the court while defying government directives.
The government had called Twitter’s locus standi into question, arguing that as a foreign corporation it could not invoke Articles 19 and 21 of the Constitution. It further claimed that, because Twitter was designed only to serve as an intermediary, there was no “jural relationship” between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Twitter, however, argued in its appeal that the orders “fall foul of Section 69A both substantially and procedurally.” It contended that Section 69A requires account holders to be notified before their tweets and accounts are removed, yet the Ministry had served no such notices on them.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter’s compliance officer requesting that they come before them and provide an explanation for why the Blocking Orders were not followed and why no action should be taken against them.
Twitter replied on June 9 that the content against which it had not followed the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the directions on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew the blocking orders on ten account-level URLs but issued an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders “under protest,” Twitter approached the High Court with a petition challenging them.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that it was asserting these rights on behalf of its users. It also maintained that, even as a foreign company, it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Twitter further argued that the orders did not state the reason for blocking each account, and that Section 69A permits blocking only the offending URL rather than an entire account: blocking a tweet affects only information already created, whereas blocking the whole account prevents the creation of future information.
Conclusion
The evolution of cyberspace has been shaped by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in advancing emerging technologies and creating ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, as formalised under the Intermediary Guidelines, 2021 and 2022, issued by MeitY. Referring to Section 79 of the Information Technology Act, which exempts intermediaries from liability in certain instances, the court observed that an intermediary “is bound to obey the orders which the designate authority/agency which the government fixes from time to time.”

Introduction
When a tragedy strikes, moments are fragile, people are vulnerable, emotions run high, and every second counts. In such critical situations, information becomes as crucial as food, water, shelter, and medication. A single piece of unverified information can trigger stampedes and chaos. Alongside the tragedy, whether natural or man-made, emerges another threat: misinformation. People, desperate for answers, cling to whatever they can find.
Tragedies can take many forms. These may include natural disasters, mass accidents, terrorist activities, or other emergencies. During the 2023 earthquakes in Turkey, misinformation spread on social media claiming that the Yarseli Dam had cracked and was about to burst. People believed it and began migrating from the area. Panic followed, and search and rescue teams stopped operations in that zone. Precious hours were lost. Later, it was confirmed to be a rumour. By then, the damage was already done.
Similarly, after the recent plane crash in Ahmedabad, India, numerous rumours and WhatsApp messages spread rapidly. One message claimed to contain the investigation report on the crash of Air India flight AI-171. It was later called out by PIB and declared fake.
These examples show how misinformation can take control of already painful moments. During emergencies, when emotions are intense and fear is widespread, false information spreads faster and hits harder. Some people share it unknowingly, while others do so to gain attention or push a certain agenda. But for those already in distress, the effect is often the same. It brings more confusion, heightens anxiety, and adds to their suffering.
Understanding Disasters and the Role of Media in Crisis
A disaster can be defined as a natural or human-caused event that transforms the usual life of a society into a crisis far beyond its existing response capacity. Its effects range from mere disruption of daily routines to the inability to meet basic needs such as food, water and shelter. A disaster, then, is not just a sudden event; it becomes a disaster when it overwhelms a community’s ability to cope with it.
To cope with such situations, there is an organised approach called Disaster Management. It includes preventive measures, minimising damages and helping communities recover. Earlier, public institutions like governments used to be the main actors in disaster management, but today, with every small entity having a role, academic institutions, media outlets and even ordinary people are involved.
Communication is a vital element of disaster management; done correctly, it saves lives. Vulnerable people need to know what is happening, what they should do and where to seek help. But in today’s era of instantaneous communication, it also carries risk: a false message travels just as fast as a true one.
Research shows that the media often fails to focus on disaster preparedness. For example, studies found that during the 2019 Istanbul earthquake, the media focused more on dramatic scenes than on educating people. Similar trends were seen during the 2023 Turkey earthquakes. Rather than helping people prepare or stay calm, much of the media coverage amplified fear and sensationalised suffering. This shows a shift from preventive, helpful reporting to reactive, emotional storytelling. In doing so, the media sometimes fails in its duty to support resilience and, worse, can become a channel for spreading misinformation during already traumatic events. Moreover, fighting misinformation is not only a moral responsibility; spreading it is penalised under the official disaster management framework. Section 54 of the Disaster Management Act, 2005 provides that "Whoever makes or circulates a false alarm or warning as to disaster or its severity or magnitude, leading to panic, shall, on conviction, be punishable with imprisonment which may extend to one year or with a fine."
AI as a Tool in Countering Misinformation
AI has emerged as a powerful mechanism to fight against misinformation. AI technologies like Natural Language Processing (NLP) and Machine Learning (ML) are effective in spotting and classifying misinformation with up to 97% accuracy. AI flags unverified content, leading to a 24% decrease in shares and 7% drop in likes on platforms like TikTok. Up to 95% fewer people view content on Facebook when fact-checking labels are used. Facebook AI also eliminates 86% of graphic violence, 96% of adult nudity, 98.5% of fake accounts and 99.5% of content related to terrorism. These tools help rebuild public trust in addition to limiting the dissemination of harmful content. In 2023, support for tech companies acting to combat misinformation rose to 65%, indicating a positive change in public expectations and awareness.
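As a rough illustration of the NLP-based classification mentioned above, the sketch below trains a tiny multinomial Naive Bayes text classifier. It is a minimal toy under stated assumptions, not any platform’s actual system: the training headlines and the "misinfo"/"reliable" labels are invented for demonstration only.

```python
import math
from collections import Counter, defaultdict


def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word
    counts and label counts for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts


def classify(text, word_counts, label_counts):
    """Score each label by log prior + log likelihood, with add-one
    (Laplace) smoothing so unseen words do not zero out a label."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label


# Invented toy training data for illustration only.
samples = [
    ("dam has burst evacuate now forward this", "misinfo"),
    ("secret report leaked forward to everyone", "misinfo"),
    ("official advisory issued by disaster authority", "reliable"),
    ("rescue teams deployed says government statement", "reliable"),
]
word_counts, label_counts = train(samples)
print(classify("forward this secret warning now", word_counts, label_counts))
# → misinfo
```

Production systems use far richer features and large labelled corpora, but the principle is the same: score a message against patterns learned from previously labelled content.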
How to Counter Misinformation
Experts should step up in such situations. Social media has allowed many so-called experts to spread fake information without any real knowledge, research, or qualification. In such conditions, real experts such as authorities, doctors, scientists, public health officials, researchers, etc., need to take charge. They can directly address the myths and false claims and stop misinformation before it spreads further and reduce confusion.
Responsible journalism is crucial during crises. In times of panic, people look at the media for guidance. Hence, it is important to fact-check every detail before publishing. Reporting that is based on unclear tips, social media posts, or rumours can cause major harm by inciting mistrust, fear, or even dangerous behaviour. Cross-checking information, depending on reliable sources and promptly fixing errors are all components of responsible journalism. Protecting the public is more important than merely disseminating the news.
Focus on accuracy rather than speed. News spreads in a blink in today's world. Media outlets and influencers often come under pressure to publish it first. But in tragic situations like natural disasters and disease outbreaks, rushing to come first is not as important as accuracy is, as a single piece of misinformation can spark mass-scale panic and can slow down emergency efforts and lead people to make rash decisions. Taking a little more time to check the facts ensures that the information being shared is helpful, not harmful. Accuracy may save numerous lives during tragedies.
Misinformation spreads quickly; it can only be countered if people learn to critically evaluate what they hear and see. This entails being able to spot biased or deceptive headlines, cross-check claims and identify reliable sources. Digital literacy is of utmost importance; it makes people less susceptible to fear-based rumours, conspiracy theories and hoaxes.
Disaster preparedness programs should include awareness about the risks of spreading unverified information. Communities, schools and media platforms must educate people on how to respond responsibly during emergencies by staying calm, checking facts and sharing only credible updates. Spreading fake alerts or panic-inducing messages during a crisis is not only dangerous, but it can also have legal consequences. Public communication must focus on promoting trust, calm and clarity. When people understand the weight their words can carry during a crisis, they become part of the solution, not the problem.
References:
- https://dergipark.org.tr/en/download/article-file/3556152
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
- https://english.mathrubhumi.com/news/india/fake-whatsapp-message-air-india-crash-pib-fact-check-fcwmvuyc