#FactCheck - A misleading video falsely shows former Prime Minister of India Pandit Jawaharlal Nehru admitting he had no role in India's independence
Executive Summary:
A misleading video has been widely shared online, falsely portraying Pandit Jawaharlal Nehru as stating that he was not involved in the Indian independence struggle and even opposed it. The video is a manipulated excerpt from Pandit Nehru’s final major interview, given in 1964 to American TV host Arnold Michaelis. The original footage, available on the YouTube channel of India’s state broadcaster Prasar Bharati, shows Pandit Nehru discussing Muhammad Ali Jinnah and stating that it was Jinnah who did not participate in the independence movement and opposed it. The viral video edits Pandit Nehru’s comments to create a false narrative, which is debunked on reviewing the full, unedited interview.

Claims:
In the viral video, Pandit Jawaharlal Nehru states that he was not involved in the fight for Indian independence and even opposed it.
Fact check:
Upon receiving the posts, we thoroughly checked the video and divided it into keyframes using the InVID tool. We then reverse-searched one of the frames and found the full video uploaded on the official Prasar Bharati Archives YouTube channel on 14 May 2019.
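InVID is a browser-based verification tool, but the keyframe step it automates can be illustrated with a short Python sketch using OpenCV (a rough, assumed equivalent of the tool's workflow; the file name is hypothetical):

```python
# Minimal keyframe-extraction sketch (illustrative; the InVID tool automates this).
# Requires OpenCV: pip install opencv-python
import cv2

def extract_keyframes(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % step == 0:
            name = f"frame_{index}.jpg"
            cv2.imwrite(name, frame)  # each saved image can be reverse-searched
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Usage (hypothetical file): extract_keyframes("viral_clip.mp4")
```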

The description of the video reads, “Full video recording of what was perhaps Pandit Jawaharlal Nehru's last significant interview to American TV Host Arnold Michaelis, a few days before his death. Another book by Chandrika Prasad provides a date of 18th May 1964 when the interview was aired in New York; this is barely a few days before the death of Pandit Nehru on 27th May 1964.”
On reviewing the full video, we found that the viral clip of Pandit Nehru runs from 14:50 to 15:45. In this portion, Pandit Nehru is speaking about Muhammad Ali Jinnah, a key leader of the Muslim League.
At timestamp 14:34, the American TV interviewer Arnold Michaelis says, “You and Mr. Gandhi and Mr. Jinnah, you were all involved at that point of Independence and then partition in the fight for Independence of India from the British domination.” Pandit Nehru replied, “Mr. Jinnah was not involved in the fight for independence at all. In fact, he opposed it. Muslim League was started in about 1911, I think. It was started really by the British, encouraged by them so as to create factions; they did succeed to some extent. And ultimately there came the partition.”
Upon thorough analysis, we found that the viral video is an edited version of the original, cut to misrepresent its actual context.
We also found the same interview uploaded on a Facebook page named Nehru Centre for Social Research on 1 December 2021.

Hence, the viral video is fake and misleading, and netizens must be careful before believing or sharing such edited videos.
Conclusion:
In conclusion, the viral video claiming that Pandit Jawaharlal Nehru stated he was not involved in the Indian independence struggle has been falsely edited. The original footage reveals that Pandit Nehru was referring to Muhammad Ali Jinnah's participation in the struggle, not his own. Reviewing the full interview debunks the false story conveyed by the manipulated video.
- Claim: Pandit Jawaharlal Nehru stated that he was not involved in the struggle for Indian independence and even opposed it.
- Claimed on: YouTube, LinkedIn, Facebook, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
In an alarming development, one of India’s premier healthcare institutions, AIIMS Delhi, has fallen victim to a malicious cyberattack for the second time this year. The incident is a stark reminder of the escalating threat landscape faced by healthcare organisations in the digital age. The attackers not only exploited vulnerabilities present in the healthcare sector but also raised concerns about the security of patient data and the uninterrupted delivery of critical healthcare services. In this blog post, we will explore the incident, what happened, and what safety measures can be taken.
Backdrop
The cyber-security systems deployed at AIIMS, New Delhi, recently detected a malware attack that was both sophisticated and targeted. This second hack is a wake-up call for healthcare organisations nationwide. As the healthcare sector increasingly depends on digital technology to improve patient care and operational efficiency, cybersecurity must be prioritised to protect sensitive data. To minimise the danger of cyber-attacks, healthcare organisations must invest in robust defences such as multi-factor authentication, network security, frequent system upgrades, and employee training.
The attempt was successfully prevented, and the deployed cyber-security systems neutralised the threat. The e-Hospital services remain fully secure and are functioning normally.
Impact on AIIMS
Healthcare services worldwide have been on hackers’ radar, and the healthcare sector has been hit badly. The effects of the attack on AIIMS Delhi have been both immediate and far-reaching. The organisation, which is recognised for delivering excellent healthcare services and performing breakthrough medical research, faced significant interruptions in its everyday operations. Patient care and treatment processes were considerably impeded, resulting in delays, cancellations, and the inability to access essential medical documents. The compromised data raises serious concerns about patient privacy and confidentiality, casting doubt on the institution’s capacity to protect sensitive information. Furthermore, the financial ramifications of the attack, such as the cost of recovery, of deploying more robust cybersecurity measures, and of potential legal penalties and forensic analyses, add to the scale of the impact. The event has also generated public concern about the institution’s ability to preserve personal information, undermining confidence and damaging AIIMS Delhi’s image.
Impact on Patients: The attack not only affects the institute but also has serious implications for patients. Here are some key highlights:
Healthcare Service Disruption: The hack has affected the seamless delivery of healthcare services at AIIMS Delhi. Appointments, surgeries, and other medical treatments may be delayed, cancelled, or rescheduled. This disturbance can result in longer wait times, longer treatment periods, and potential problems from delayed or interrupted therapy.

Patient Privacy and Confidentiality: Both are jeopardised by the breach of sensitive patient data. Medical records, test findings, and treatment plans may have been compromised. This breach may diminish patients’ faith in the institution’s capacity to safeguard their personal information, discouraging them from seeking care or sharing sensitive information in the future.
Mental Distress: As a result of the cyberattack, patients may endure mental anguish and worry. Fear of possible exploitation of personal health information, confusion about the scope of the breach, and concerns about the security of their healthcare data can all take a toll on mental health. This stress might aggravate pre-existing medical conditions and impede overall recovery.
Trust at stake: A data breach may harm patients’ faith and confidence in AIIMS Delhi and the healthcare system. Patients rely on healthcare facilities to keep their information secure and confidential while providing safe, high-quality care. A hack can cast doubt on the institution’s ability to safeguard patient data, affecting patients’ overall faith in the organisation and potentially leading them to seek care elsewhere.
Cybersecurity Measures
To avoid future attacks and protect patient data, AIIMS Delhi must prioritise enhancing its cybersecurity procedures. The institution can strengthen its resilience to evolving threats by establishing strong security practices. The following steps can be considered.
Using Multi-factor Authentication: By requiring users to submit several forms of identification to access systems and data, multi-factor authentication adds an extra layer of protection. AIIMS Delhi can considerably lower the danger of unauthorised access by applying this precaution, even in the case of leaked passwords or credentials. Biometrics and one-time passwords, for example, should be integrated into the institution’s authentication systems.
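For illustration, one common second factor is a time-based one-time password (TOTP). The sketch below, which uses the open-source pyotp library, shows the basic enrol-and-verify flow; it is a minimal sketch with assumed names, not a depiction of AIIMS's actual systems:

```python
# Minimal TOTP (time-based one-time password) sketch using pyotp.
# Install with: pip install pyotp
import pyotp

# Enrolment: generate a per-user secret once, store it server-side,
# and let the user load it into an authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: after the password check succeeds, ask for the current 6-digit code.
code = input("Enter the code from your authenticator app: ")
if totp.verify(code):
    print("Second factor accepted.")
else:
    print("Invalid code - access denied.")
```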
Improving Network Security and Firewalls: AIIMS Delhi should improve network security by implementing strong firewalls, intrusion detection and prevention systems, and network segmentation. These techniques serve to construct barriers between internal systems and external threats, reducing attackers’ lateral movement within the network. Regular network traffic monitoring and analysis can assist in recognising and mitigating any security breaches.
Risk Assessment: Regular penetration testing and vulnerability assessments are required to uncover possible flaws and vulnerabilities in AIIMS Delhi’s systems and infrastructure. Security professionals can detect vulnerabilities and offer remedial solutions by carrying out controlled simulated assaults. This proactive strategy assists in identifying and addressing any security flaws before attackers exploit them.
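As a flavour of what the reconnaissance stage of such an assessment involves, scanners typically begin by checking which TCP ports on a host accept connections. The toy sketch below does only that; it uses placeholder hosts and ports and should be run only against systems one is explicitly authorised to test:

```python
# Toy TCP port check - the first step of many vulnerability scans.
# Run only against systems you are explicitly authorised to test.
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

# Usage (placeholder host): open_ports("127.0.0.1", [22, 80, 443, 3389])
```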
Educating and training Healthcare Professionals: Education and training have a crucial role in enhancing cybersecurity practices in healthcare facilities. Healthcare workers, including physicians, nurses, administrators, and support staff, must be well-informed about the importance of cybersecurity and trained in risk-mitigation best practices. This will empower healthcare professionals to actively contribute to protecting the patient’s data and maintaining the trust and confidence of patients.
Learnings from Incidents
AIIMS Delhi should embrace cyber-attacks as learning opportunities to strengthen its security posture. Following each event, a detailed post-incident study should be performed to identify areas for improvement, update security policies and procedures, and improve employee training programs. This iterative strategy contributes to the institution’s overall resilience and preparation for future cyber-attacks. AIIMS Delhi can effectively respond to cyber incidents, minimise the impact on operations, and protect patient data by establishing an effective incident response and recovery plan, implementing data backup and recovery mechanisms, conducting forensic analysis, and promoting open communication. Proactive measures, constant review, and regular revisions to incident response plans are critical for staying ahead of developing cyber threats and ensuring the institution’s resilience in the face of potential future assaults.

Conclusion
To summarise, developing robust healthcare systems in the digital era is a key challenge that healthcare organisations must prioritise. Healthcare organisations can secure patient data, assure the continuation of key services, and maintain patients’ trust and confidence by adopting comprehensive cybersecurity measures, building incident response plans, training healthcare personnel, and cultivating a security culture. Adopting a proactive and holistic strategy for cybersecurity is critical to developing a healthcare system capable of withstanding and successfully responding to digital-age problems.

Executive Summary:
A widely circulated social media post claims that the Government of India has opened an account, “Army Welfare Fund Battle Casualty”, at Canara Bank to support the modernisation of the Indian Army and assist injured or martyred soldiers, and that citizens can voluntarily contribute any amount starting from ₹1, with no upper limit. The fund is said to have been launched on a suggestion by actor Akshay Kumar, later acknowledged by the Prime Minister of India through Mann Ki Baat and social media platforms. However, no such decision has been taken by the Cabinet recently, and no such initiative has been officially announced.

Claim:
A viral social media post claims that the Government of India has launched a new initiative aimed at modernizing the Indian Army and supporting battle casualties through public donations. According to the post, a special bank account has been created to enable citizens to contribute directly toward the procurement of arms and equipment for the armed forces.
It further states that this initiative was introduced following a Cabinet decision and was inspired by a suggestion from Bollywood actor Akshay Kumar, which was reportedly acknowledged by the Prime Minister during his Mann Ki Baat address.
The post encourages individuals to donate any amount starting from ₹1, with no upper limit, and estimates that widespread public participation could generate up to ₹36,000 crore annually to support the armed forces. It also lists two bank accounts—one at Canara Bank (Account No: 90552010165915) and another at State Bank of India (Account No: 40650628094)—allegedly designated for the "Armed Forces Battle Casualties Welfare Fund."
The statement said, “The government established a range of welfare schemes for soldiers killed or disabled while undertaking military operations in recent combat. In 2020, the government established the 'Armed Forces Battle Casualty Welfare Fund (AFBCWF)', which is used to provide immediate financial assistance to families of soldiers, sailors and airmen who lose their lives or sustain grievous injury as a result of active military service.”

We also found a similar post from the past, which can be seen here.
Fact Check:
The Press Information Bureau (PIB) has responded to the viral post, stating that it is misleading and that the Government has not launched any campaign inviting public donations towards the modernisation of the Indian Army or the purchase of weapons. The only known official initiative of the Ministry of Defence is the "Armed Forces Battle Casualties Welfare Fund", set up to support the families of soldiers who have been martyred or grievously disabled in the line of duty, not to buy military equipment.

In addition, the bank account details mentioned in the viral post are false, and donations submitted to these accounts have reportedly been dishonoured.
Another false claim is that actor Akshay Kumar is promoting or heading this initiative; there is no official record or announcement of him leading or sponsoring such a project. That said, in 2017 Akshay Kumar did encourage public contributions of one rupee per month to support the armed forces through a web portal called “Bharat Ke Veer”, developed in partnership with the Ministry of Home Affairs.


Citizens should rely only on official government sources and ignore such misleading messages on social media platforms.
Conclusion:
The viral social media post suggesting that the Government of India has initiated a donation drive for the modernisation of the Indian Army and the purchase of weapons is misleading and inaccurate. According to the Press Information Bureau (PIB), no such initiative has been launched by the government, and the bank account details provided in the post are false, with reported cases of dishonoured transactions. The only legitimate initiative is the Armed Forces Battle Casualties Welfare Fund (AFBCWF), which provides financial assistance to the families of soldiers who are martyred or seriously injured in the line of duty. While actor Akshay Kumar played a key role in launching the Bharat Ke Veer portal in 2017 to support paramilitary personnel, he has no official connection to the viral claims.
- Claim: The government has launched a public donation drive to fund Army weapon purchases.
- Claimed On: Social Media
- Fact Check: False and Misleading

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand: a promising technological advancement with the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilised to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. For scale, the market size of AI in India alone is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products, or even to influence significant life decisions such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when the manipulation is not detectable by the user.
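To make the mechanism concrete, here is a deliberately simplified sketch of the interest-profiling loop described above: chat messages are scanned for keywords that quietly feed an advertising profile. Every name and category in it is hypothetical and purely illustrative; it depicts no real vendor's pipeline.

```python
# Illustrative sketch: how chat messages could quietly feed an ad profile.
# All keywords and categories are hypothetical; no real service is depicted.
from collections import Counter

AD_KEYWORDS = {
    "holiday": "travel", "flight": "travel",
    "loan": "finance", "mortgage": "finance",
    "vote": "politics", "election": "politics",
}

def update_ad_profile(profile: Counter, message: str) -> Counter:
    """Tally the ad categories hinted at by a single chat message."""
    for word in message.lower().split():
        category = AD_KEYWORDS.get(word.strip(".,!?"))
        if category:
            profile[category] += 1  # more signal means better-targeted ads
    return profile

profile = Counter()
for msg in ["Any cheap flight deals?", "Who should I vote for?"]:
    update_ad_profile(profile, msg)

print(profile.most_common())  # [('travel', 1), ('politics', 1)]
```

The point of the sketch is how little machinery this takes: a few lines suffice to turn an innocuous conversation into targeting data, which is precisely why transparency and consent rules matter.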
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users that they are automated and what purpose they serve. Transparency guarantees that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
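As a sketch of what consent-gating might look like in code (hypothetical names; a minimal illustration, not a compliance recipe), data collection is simply skipped unless the user has explicitly opted in:

```python
# Minimal consent-gated logging sketch; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    data_consent: bool = False  # opt-in, switched off by default
    history: list = field(default_factory=list)

def handle_message(user: User, message: str) -> None:
    if user.data_consent:
        user.history.append(message)  # stored only after explicit opt-in
    # ...generate and send the chatbot's reply either way...

user = User("alice")
handle_message(user, "hello")         # nothing is stored
user.data_consent = True              # the user explicitly opts in
handle_message(user, "plan my trip")  # now the message is stored
print(user.history)                   # ['plan my trip']
```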
Furthermore, ethical use is essential. It is crucial to create an ethics code for chatbot interactions that forbids manipulation, the dissemination of false information, and attempts to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
To preserve transparency and accountability, independent audits need to be carried out. Users can feel more confident knowing that chatbot behaviour and data-collection procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are equally important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Unlawful data collection is another hazard to watch out for. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, because doing so can result in manipulation and false information.
Impartiality is essential. Bots should not advocate for or take part in political activities that favour one political party over another; impartiality and equity are crucial in every encounter.
Finally, chatbots should refrain from invasive advertising techniques and should not display political advertisements or messaging without explicit user agreement, ensuring that their advertising tactics comply with legal norms.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is constantly working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india