# Fact Check: Viral Video Falsely Claims Israel Launched Nuclear Attack on Iran
Executive Summary:
A viral video circulating on social media falsely suggests that it shows Israel moving nuclear weapons in preparation for an attack on Iran. Detailed research established that the footage actually shows a SpaceX Starship rocket (Starship 36) being towed for a pre-planned test in Texas, USA, and that it provides no evidence to back up the claim of an Israeli action or a nuclear missile.

Claim:
Multiple posts on social media shared a video clip of a large, missile-like object being towed to an unknown location by a very large vehicle, claiming it showed Israel preparing for a nuclear attack on Iran.
The caption of the video read: "Israel is going to launch a nuclear attack on Iran! #Israel". The viral post received significant engagement, helping to spread misinformation and unfounded fear about the escalating conflict in the Middle East.

Fact check:
A reverse image search using key frames from the viral footage led us to a Facebook post dated June 16, 2025.
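As an illustrative sketch only (not CyberPeace's actual tooling), the keyframe-selection step behind a reverse image search can be approximated by picking frames that differ sharply from their predecessors. The frame representation, threshold value, and `select_keyframes` helper below are assumptions for demonstration; real workflows typically decode video with tools such as OpenCV or ffmpeg before feeding frames to a reverse image search engine.

```python
def select_keyframes(frames, threshold=30.0):
    """Pick frames that differ sharply from their predecessor.

    frames: list of equal-length sequences of pixel intensities (0-255).
    Returns the indices of selected keyframes (always includes frame 0).
    """
    keyframes = [0]  # the first frame is always a keyframe
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        # mean absolute pixel difference between consecutive frames
        diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if diff > threshold:  # large change suggests a scene cut
            keyframes.append(i)
    return keyframes

# Synthetic "video": three near-identical frames, then a scene cut.
video = [
    [10] * 100,
    [12] * 100,   # small change: skipped
    [11] * 100,   # small change: skipped
    [200] * 100,  # scene cut: selected as a keyframe
]
print(select_keyframes(video))  # [0, 3]
```

Each selected keyframe would then be uploaded to a reverse image search service (e.g. Google Images or TinEye) to find earlier uploads of the same footage.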

A YouTube livestream from NASASpaceflight, dated June 15, 2025, shows the same event. Both sources clearly identify the object as SpaceX Starship 36. The rocket was being towed at SpaceX's Texas facility ahead of a static fire test, as part of preparations for the 10th test flight. The video shows no military ordnance, no military personnel, and no markings linking it to Israel or Iran.
Our conclusion is further supported by several SPACE.com articles, which reported on the Starship's explosion shortly thereafter during testing.



Furthermore, no reputable media outlet or defence agency reported any Israeli nuclear mobilization. The visual resemblance between a large rocket and a missile likely contributed to the confusion, but the context and upload location of the footage have no connection to the State of Israel or Iran.

Conclusion:
The viral video claiming to show Israel preparing to launch a nuclear attack on Iran is false and misleading. The footage was filmed in Texas and shows the civilian transport of SpaceX's Starship 36. This case highlights how easily unrelated videos can be repurposed to create panic and spread misinformation. Before sharing such claims, verify them using trusted websites and tools.
- Claim: A viral video shows Israel preparing to launch a nuclear attack on Iran
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
India's broadcasting sector has undergone significant changes in recent years with technological advancements such as the introduction of new platforms like Direct-to-Home (DTH), Internet Protocol television (IPTV), Over-The-Top (OTT), and integrated models. Platform changes, emerging technologies and advancements in the advertising space have all necessitated the need for new governing laws that take these developments into account.
The Union Government and the concerned ministry have recognised a pressing need for a robust regulatory framework for the Indian broadcasting sector. Consequently, a draft Broadcasting Services (Regulation) Bill, 2023, was released in November 2023, and the Union Ministry of Information and Broadcasting (MIB) invited feedback and comments from different stakeholders. The draft Bill aims to establish a unified framework for regulating broadcasting services in the country, replacing the Cable Television Networks (Regulation) Act, 1995, and other policy guidelines governing broadcasting.
Recently, a new draft of an updated ‘Broadcasting Services (Regulation) Bill, 2024’ was shared with selected broadcasters, associations, streaming services, and tech firms, with each copy marked with a unique identifier to prevent leaks.
Key Highlights of the Updated Broadcasting Bill
As per the recent draft of the Broadcasting Services (Regulation) Bill, 2024, social media accounts could be identified as ‘Digital News Broadcasters’ and brought within the ambit of the regulation. Some of the major aspects of the new Bill were first reported by Hindustan Times.
The new draft of the Broadcasting Services (Regulation) Bill, 2024, proposes that individuals who regularly upload videos to social media, make podcasts, or write about current affairs online could be classified as Digital News Broadcasters. This means that YouTubers and Instagrammers who receive a share of advertising revenue or monetise their social media presence through affiliate activities would be regulated as Digital News Broadcasters. This includes channels, podcasts, and blogs that cover news and use Google AdSense. Such broadcasters must comply with a Programme Code and an Advertising Code.
Online content creators who do not cover news or current affairs but provide curated programming beyond a certain threshold will be treated as OTT broadcasters if they provide licensed or live content through a website or social media platform.
The new version also introduces new obligations for intermediaries and social media intermediaries related to streaming services and digital news broadcasters, and, in contrast to the version circulated in 2023, the latest draft also carries provisions targeting online advertising. In the context of streaming services, OTT broadcasting services are no longer part of the definition of "internet broadcasting services." The definition of OTT broadcasting service has also been revised, allowing content creators who regularly upload their content to social media to be considered OTT broadcasting services.
The new definition of an 'intermediary' includes social media intermediaries, advertisement intermediaries, internet service providers, online search engines, and online marketplaces.
The new Bill allows the government to prescribe different due diligence guidelines for social media platforms and online advertisement intermediaries. It requires all intermediaries to provide appropriate information, including information pertaining to the OTT Broadcasters and Digital News Broadcasters on their platforms, to the central government to ensure compliance with the Act. It also establishes liability for social media intermediaries that fail to provide such information. In effect, when information is sought about a YouTube, Instagram or X/Twitter user, the platform will need to provide it to the Indian government.
The new draft Bill also contains specific provisions governing ‘Online Advertising’ and, to that end, creates the category of 'advertising intermediaries': entities that enable the buying or selling of advertisement space on the internet, or the placing of advertisements on online platforms, without endorsing the advertisements themselves.
Final Words
The Indian Ministry of Information and Broadcasting (MIB) is proposing robust regulatory changes for the country's new-age broadcast sector, covering specific provisions for Digital News Broadcasters, OTT Broadcasters and Intermediaries, with the proposed Bill defining the scope and obligations of each.
However, these changes will have significant implications for press and creative freedom. The new version of the Bill expands its applicability to a larger set of actors, bringing ‘content creators’ under the definition of OTT or digital news broadcasters. This raises concerns about overly rigid provisions and may draw criticism from media representatives.
According to recent media reports, the I&B Ministry has withdrawn the 2024 version of the Broadcasting Services (Regulation) Bill following criticism from relevant stakeholders.
The ministry must give due consideration to feedback from concerned stakeholders and strike a balance between protecting individual rights and promoting a healthy regulated landscape that meets the needs of the new-age broadcasting sector.
References:
- https://www.medianama.com/2024/07/223-india-broadcast-bill-online-creators/#:~:text=Online%20content%20creators%20that%20do,or%20a%20social%20media%20platform.
- https://www.hindustantimes.com/india-news/new-draft-of-broadcasting-bill-news-influencers-may-be-classified-as-broadcasters-101721961764666.html
- https://www.hindustantimes.com/india-news/broadcasting-bill-still-in-drafting-stage-mib-tells-rs-101722058753083.html
- https://www.newslaundry.com/2024/07/29/indias-new-broadcast-bill-now-has-compliance-requirements-for-youtubers-and-instagrammers
- https://m.thewire.in/article/media/social-media-videos-text-digital-news-broadcasting-bill
- https://mib.gov.in/sites/default/files/Public%20Notice_07.12.2023.pdf
- https://news.abplive.com/news/india/centre-withdraws-draft-of-broadcasting-services-regulation-bill-1709770

Executive Summary:
A video of former Army Chief General Manoj Pande is going viral on social media with the claim that he attacked the Modi government, saying that supporting Israel is causing significant harm to the Indian Army. Research by CyberPeace revealed that the audio in the viral video is AI-generated; no such statement was made in the original video.
Claim:
On social media platform X, while sharing the viral video, users wrote, “Delhi: Former Army Chief General Manoj Pande (Retd.) said, ‘Do you know what the biggest loss of supporting Israel is? Our Indian Army was always trained as a moral force, but the current situation is turning it into an ethnic force. Remember my words, this situation is moving towards a complete rebellion. We have all seen what is happening in Assam.’ ‘The Israeli army stands against humanity, and brutality has become its identity. Our army is becoming like them due to its association. The Modi government and the Sangh Parivar are responsible for this. For both, Israel is an ideal country, and they are running an agenda to turn India into Israel.’”

Fact Check:
To investigate the viral video claiming that former Army Chief General Manoj Pande attacked the Modi government, we conducted a reverse image search using keyframes from the footage. This led us to a video uploaded on March 14 on the X account of the news agency Press Trust of India (PTI).
The visuals present in the video matched those in the viral video.
In this video, former Army Chief General Manoj Pande delivers a speech in Marathi and English. He speaks about building new kinds of capabilities in view of the current situation and does not mention Israel, as claimed in the viral video. In the roughly 1-minute-15-second video, he makes no statement resembling the one in the viral clip.

Continuing the research, we found a report published on March 15, 2026, on the website of ThePrint. This report covered the speech delivered by former Army Chief General Manoj Pande but made no mention of the statement shown in the viral video.

Conclusion:
Our research found that the audio in the viral video is AI-generated. In the original video, General Pande made no such statement.

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information intentionally (disinformation) and unintentionally (misinformation) during the election cycle can not only create grounds for voter confusion with ramifications on election results but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that affects public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or lies. In a recent example during the 2024 US elections, fake videos using the Federal Bureau of Investigation’s (FBI) insignia alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks were circulated. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media’s coverage of the candidates’ campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, affect the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented. In contrast, content moderation and fact-checking are reactive and are more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus, spreads faster (virality).
- Geopolitical influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration and security machinery. In 2018, a federal grand jury indicted 11 Russian military officials for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby and Attorney General Merrick Garland.
- Lack of Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is indirectly addressed through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but the Constitution mandates “breathing space” for protection from false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/