#FactCheck - AI-Generated Video Falsely Shared as Iran’s Attack on Israeli Nuclear Site
Executive Summary:
A video is going viral on social media, linked to the ongoing conflict involving the US, Israel and Iran. The clip shows explosions on buildings and is being shared with the claim that it depicts an attack on Israel, specifically an Iranian strike on a nuclear site located near the sea. However, research by the CyberPeace team found the claim to be false. The video is not from a real incident but has been created using AI.
Claim:
On the social media platform X, a user shared the viral video on March 8, 2026, with the caption: “Iran attacked an Israeli nuclear site located near the sea.”

Fact Check:
To verify the viral claim, we searched relevant keywords on Google but found no credible news reports supporting it. On closely examining the video, we observed several technical inconsistencies. The person seen in the video appears robotic, raising suspicion that the content may be AI-generated. To confirm this, we analysed the video using AI detection tools. Hive Moderation indicated that the video is approximately 97.5 percent likely to have been generated using artificial intelligence.

We also used the AI detection tool Matrix.Tencent. The results suggested that the video is likely AI-generated, with around a 77 percent probability.
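To illustrate how such detector scores might be read together, here is a minimal Python sketch. The idea of averaging the two scores and the 0.75 verdict threshold are our own illustrative assumptions, not the methodology of Hive Moderation, Matrix.Tencent, or any other detection tool:

```python
def combine_detector_scores(scores, threshold=0.75):
    """Combine AI-detection confidence scores (each 0.0-1.0) from
    several tools into a single verdict by simple averaging.

    NOTE: the averaging rule and the 0.75 threshold are illustrative
    assumptions, not any real tool's methodology.
    """
    avg = sum(scores) / len(scores)
    verdict = "likely AI-generated" if avg >= threshold else "inconclusive"
    return avg, verdict

# Scores reported in this fact-check: 97.5% and 77%
avg, verdict = combine_detector_scores([0.975, 0.77])
```

Averaging is the simplest possible aggregation; a real fact-checking workflow would also weigh each tool's known false-positive rate and corroborating evidence rather than rely on scores alone.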

Conclusion:
Our research found that the viral video claiming to show an Iranian attack on Israel is AI-generated and not related to any real incident.

Social media has become far more than a tool of communication, engagement and entertainment. It shapes politics, community identity and even public agendas. When misused, the consequences can be grave: communal disharmony, riots, false rumours, harassment or worse. Emphasising the need for digital Atmanirbharta (self-reliance), Prime Minister Narendra Modi recently urged India’s youth to develop the country’s own social media platforms, comparable to Facebook, Instagram and X, so that the nation’s technological ecosystems remain secure and independent, reinforcing digital autonomy. This growing influence of platforms has sharpened the tussle between government regulation, the independence of social media companies, and the protection of freedom of expression in most countries.
Why Government Regulation Is Especially Needed
While self-regulation has its advantages, ‘real-world harms’ show why state oversight cannot be optional:
- Incitement to violence and communal unrest: Misinformation and hate speech can inflame tensions. In Manipur (May 2023), false posts, including unverified sexual-violence claims, spread online, worsening clashes. Authorities shut down mobile internet on 3 May 2023 to curb “disinformation and false rumours,” showing how quickly harmful content can escalate and why enforceable moderation rules matter.
- Fake news and misinformation: False content about health, elections or individuals spreads far faster than corrections. During COVID-19, an “infodemic” of fake cures, conspiracy theories and religious discrimination went viral on WhatsApp and Facebook, starting with false claims that the virus came from eating bats. The WHO warned of serious knock-on effects, and a Reuters Institute study found that although such claims by public figures were fewer, they gained the highest engagement, showing why self-regulation alone often fails to stop it.
Nepal’s Example:
Nepal provides a clear example of the tension between government regulation and platform self-regulation. In 2023, the government issued rules requiring all social media platforms, whether local or foreign, to register with the Ministry of Communication and Information Technology, appoint a local contact person, and comply with Nepali law. By 2025, major platforms such as Facebook, Instagram, and YouTube had not met the registration deadline. In response, the Nepal Telecommunications Authority began blocking unregistered platforms until they complied. Journalists, civil-rights groups and Gen Z protesters criticised the move as potentially limiting free speech and suppressing the exposure of government corruption, while the government argued it was necessary to stop harmful content and misinformation. The case shows that without enforceable obligations, self-regulation can leave platforms unaccountable, but regulation must also be balanced against protecting free speech.
Self-Regulation: Strengths and Challenges
Most social-media companies prefer to self-regulate. They write community rules and trust & safety guidelines, give users ways to flag harmful posts, and lean on a mix of staff, outside boards and AI filters to handle content that crosses the line. The big advantage here is speed: when something dangerous appears, a platform can react within minutes, far quicker than a court or lawmaker. Because they know their systems inside out, from user habits to algorithmic quirks, they can adapt fast.
But there’s a downside. These platforms thrive on engagement, and sensational or hateful posts often keep people scrolling longer. That means the very content that makes money can also be the content that most needs moderating, a built-in conflict of interest.
Government Regulation: Strengths and Risks
Public rules make platforms answerable. Laws can require illegal content to be removed, force transparency and protect user rights. They can also stop serious harms such as fake news that might spark violence, and they often feel more legitimate when made through open, democratic processes.
Yet regulation can lag behind technology. Vague or heavy-handed rules may be misused to silence critics or curb free speech. Global enforcement is messy, and compliance can be costly for smaller firms.
Practical Implications & Hybrid Governance
For users, regulation brings clearer rights and safer spaces, but it must be carefully drafted to protect legitimate speech. For platforms, self-regulation gives flexibility but less certainty; government rules provide a level playing field but add compliance costs. For governments, regulation helps protect public safety, reduce communal disharmony, and fight misinformation, but it requires transparency and safeguards to avoid misuse.
Hybrid Approach
A combined model of self-regulation plus government regulation is likely to be most effective. Laws should establish baseline obligations: registration, local grievance officers, timely removal of illegal content, and transparency reporting. Platforms should retain flexibility in how they implement these obligations and innovate with tools for user safety. Independent audits, civil society oversight, and simple user appeals can help keep both governments and platforms accountable.
Conclusion
Social media has great power. It can bring people together, but it can also spread false stories, deepen divides and even stir violence. Acting on their own, platforms can move fast and try new ideas, but that alone rarely stops harmful content. Good government rules can fill the gap by holding companies to account and protecting people’s rights.
The best way forward is to mix both approaches: clear laws, outside checks, open reporting, easy complaint systems and support for local platforms, so the digital space stays safer and more trustworthy.
References
- https://timesofindia.indiatimes.com/india/need-desi-social-media-platforms-to-secure-digital-sovereignty-pm/articleshow/123327780.cms#
- https://www.bbc.com/news/world-asia-india-66255989
- https://nepallawsunshine.com/social-media-registration-in-nepal/
- https://www.newsonair.gov.in/nepal-bans-26-unregistered-social-media-sites-including-facebook-whatsapp-instagram/
- https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- https://www.drishtiias.com/daily-updates/daily-news-analysis/social-media-regulation-in-india

Introduction
The first activity one engages in while using social media is scrolling through the feed and liking or reacting to posts. Such activity is passive, involving merely reading and observing; active use occurs when a user consciously decides to share information or comment after analysing it. We often "like" photos, posts, and tweets reflexively, hardly stopping to think about why we do it or what the content actually says. This act of "liking" or "reacting" is a passive activity that can nonetheless spark an active discourse. We frequently encounter misinformation on social media in various forms, much of which could be identified as false at first glance if we exercised caution and avoided validating it with our likes.
Passive engagement, such as liking or reacting to a post, triggers social media algorithms to amplify its reach, exposing it to a broader audience. This amplification increases the likelihood of misinformation spreading quickly as more people interact with it. As the content circulates, it gains credibility through repeated exposure, reinforcing false narratives and expanding its impact.
Social media platforms are designed to facilitate communication and conversations for various purposes. However, this design also enables the sharing, exchange, distribution, and reception of content, including misinformation. This can lead to the widespread dissemination of false information, influencing public opinion and behaviour. Misinformation has been identified as a contributing factor in various contentious events, ranging from elections and referenda to political or religious persecution, as well as the global response to the COVID-19 pandemic.
The Mechanics of Passive Sharing
Sharing a post without checking the facts, or sharing it without any context, can spread misinformation knowingly or unknowingly. The problem with forwarding unverified information on social media is that it usually starts in small, trusted networks before being widely seen across the internet. This web of sharing, once begun, grows without limit, and cutting it off at the root is necessary. The rapid spread of information on social media is driven by algorithms that prioritise engagement; they often amplify misleading or false content and so contribute to the spread of misinformation. The algorithm optimises the feed so that the posts a user is most likely to engage with appear at the top of the timeline, encouraging a cycle of liking and posting that keeps users active and scrolling.
The internet reaches billions of individuals and lets those who craft persuasive messages tailor them to the profiles of individual users. Because of this reach, the internet is an ideal medium for the fast spread of falsehoods at the expense of accurate information.
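The engagement-first ranking dynamic described above can be sketched in a few lines of Python. The field names and weights below are illustrative assumptions, not any platform's actual formula:

```python
def engagement_score(post, w_like=1.0, w_share=3.0, w_comment=2.0):
    """Toy engagement score. Shares are weighted most heavily because
    each share pushes a post into a new trusted network.
    The weights are illustrative, not a real platform's formula."""
    return (w_like * post["likes"]
            + w_share * post["shares"]
            + w_comment * post["comments"])

def rank_feed(posts):
    """Order a feed so the most engaging posts appear first, mirroring
    how engagement-optimised timelines surface viral content."""
    return sorted(posts, key=engagement_score, reverse=True)

# A calm, verified post vs. a heavily reshared sensational rumour
posts = [
    {"id": "verified-report", "likes": 120, "shares": 5, "comments": 10},
    {"id": "sensational-rumour", "likes": 80, "shares": 60, "comments": 40},
]
feed = rank_feed(posts)
```

Even with fewer likes, the heavily reshared rumour outranks the verified report, which is the built-in amplification problem passive sharing feeds into.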
Recommendations for Combating Passive Sharing
Combating the passive sharing we engage in is important, and some ways to do so are as follows:
- We need to critically evaluate the sources before sharing any content. This will ensure that the information source is not corrupted and used as a means to cause disruptions. The medium should not be used to spread misinformation due to the source's ulterior motives. Tools such as crowdsourcing and AI methods have been used in the past to evaluate the sources and have been successful to an extent.
- Engaging with fact-checking tools and verifying the information is also crucial. The information in a post needs to be verified through authenticated sources before it is shared further.
- Being mindful of the potential impact of online activity, including likes and shares, is important. The reach social media users have today stems from several factors, ranging from the content they create to the rate at which they engage with other users. Liking and sharing content might not seem like much for an individual user, but its collective impact is huge.
Conclusion
Passive sharing of misinformation, such as liking or sharing without verification, amplifies false information, erodes trust in legitimate sources, and deepens social and political divides. It can lead to real-world harm and ethical dilemmas. Critical evaluation, fact-checking, and mindful online engagement are therefore essential to mitigating this passive spread of misinformation. The small act of a “like” or a “share” has a far more wide-reaching effect than we anticipate, and we should be mindful of all our activities on digital platforms.
References
- https://www.tandfonline.com/doi/full/10.1080/00049530.2022.2113340#summary-abstract
- https://timesofindia.indiatimes.com/city/thane/badlapur-protest-police-warn-against-spreading-fake-news/articleshow/112750638.cms

Introduction
The mysteries of the universe have been a subject of human curiosity for thousands of years. Astrophysicists work constantly to unravel them, and with today's growing technology this is increasingly achievable. Recently, with the help of Artificial Intelligence (AI), scientists have probed the depths of the cosmos. AI has revealed the secret equation that properly “weighs” galaxy clusters. This groundbreaking discovery not only sheds light on the formation and behavior of these clusters but also marks a turning point in the exploration of the cosmos. Scientists and AI have collaborated to uncover an astounding 430,000 galaxies strewn throughout the cosmos. The large haul includes 30,000 ring galaxies, considered the most unusual of all galaxy forms. The discoveries are the first outcomes of the "GALAXY CRUISE" citizen science initiative, delivered by 10,000 volunteers who sifted through data from the Subaru Telescope. After training the AI on 20,000 human-classified galaxies, scientists set it loose on 700,000 galaxies from the Subaru data.
Brief Analysis
A group of astronomers from the National Astronomical Observatory of Japan (NAOJ) have successfully applied AI to ultra-wide field-of-view images captured by the Subaru Telescope. The researchers achieved a high accuracy rate in finding and classifying spiral galaxies, with the technique being used alongside citizen science for future discoveries.
Astronomers are increasingly using AI to analyse and clean raw astronomical images for scientific research. This involves feeding photos of galaxies into neural network algorithms, which can identify patterns in real data more quickly and with fewer errors than manual classification. These networks consist of numerous interconnected nodes that recognise patterns, and such algorithms are now 98% accurate in categorising galaxies.
Another application of AI is exploring the nature of the universe, particularly dark matter and dark energy, which together make up over 95% of the energy content of the universe. The quantity of, and changes in, these components have significant implications for phenomena such as the arrangement of galaxies.
AI is capable of analysing massive amounts of data, as training data for dark matter and energy comes from complex computer simulations. The neural network is fed these findings to learn about the changing parameters of the universe, allowing cosmologists to target the network towards actual data.
These methods are becoming increasingly important as astronomical observatories generate enormous amounts of data. High-resolution photographs of the sky will be produced from over 60 petabytes of raw data by the Vera C. Rubin Observatory, and AI-assisted computers are being utilized to process it.
Data annotation techniques for training neural networks range from simple tagging to image classification, which labels an image as a whole. More advanced methods, such as semantic segmentation, group the pixels of an image into clusters and give each cluster a label.
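The clustering step behind segmentation can be sketched with a simple two-cluster k-means over pixel intensities. This toy example, including the choice of two clusters and the "sky"/"galaxy" interpretation of the labels, is an illustrative simplification of the techniques described above, not the pipeline any observatory actually uses:

```python
def kmeans_two_cluster(values, iterations=20):
    """Minimal 1-D two-cluster k-means over pixel intensities, a toy
    stand-in for the clustering step of segmentation. Starting the
    centroids at the min and max intensity is an illustrative choice."""
    centroids = [min(values), max(values)]
    for _ in range(iterations):
        clusters = ([], [])
        for v in values:
            # Assign each pixel to its nearest centroid
            nearest = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def label_pixels(pixels, centroids):
    """Give each pixel the index of its nearest cluster centre
    (here, 0 ~ dark sky background, 1 ~ bright galaxy)."""
    return [0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            for p in pixels]

# Toy grayscale scanline: dark sky pixels surrounding a bright galaxy core
pixels = [5, 8, 10, 200, 220, 210, 7, 6]
labels = label_pixels(pixels, kmeans_two_cluster(pixels))
```

Real segmentation models operate on 2-D images with learned features rather than raw intensities, but the underlying idea of grouping pixels and labelling each group is the same.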
This way, AI is being used for space exploration and is becoming a crucial tool. It also enables the processing and analysis of vast amounts of data. This advanced technology is fostering the understanding of the universe. However, clear policy guidelines and ethical use of technology should be prioritized while harnessing the true potential of contemporary technology.
Policy Recommendation
- Real-Time Data Sharing and Collaboration - Effective policies and frameworks should be established to promote real-time data sharing among astronomers, AI developers and research institutes. Open access to astronomical data should be encouraged to facilitate better innovation and bolster the application of AI in space exploration.
- Ethical AI Use - Proper guidelines and a well-structured ethical framework can facilitate judicious AI use in space exploration. The framework can play a critical role in addressing AI issues pertaining to data privacy, AI Algorithm bias and transparent decision-making processes involving AI-based tech.
- Investing in Research and Development (R&D) in the AI sector - Governments and corporate giants should prioritise AI R&D in the field of space tech and exploration, for example by funding initiatives focused on developing AI algorithms for processing astronomical data, optimising telescope operations and detecting celestial bodies.
- Citizen Science and Public Engagement - Promoting citizen science initiatives allows better leverage of AI tools to involve the public in astronomical research. A prominent example is the SETI@home program (Search for Extraterrestrial Intelligence); similar outreach can educate and engage citizens in AI-enabled discovery programs such as identifying exoplanets, classifying galaxies and searching for life beyond Earth by detecting anomalies in radio waves.
- Education and Training - Training programs should be implemented to educate astronomers in AI techniques and the intricacies of data science. There is a need to foster collaboration between AI experts, data scientists and astronomers to harness the full potential of AI in space exploration.
- Bolster Computing Infrastructure - Authorities should ensure that proper computing infrastructure is in place to facilitate the application of AI in astronomy. This calls for greater investment in high-performance computing to process large volumes of data and to run AI models that analyze astronomical data.
Conclusion
AI has seen expansive growth in the field of space exploration. Its multifaceted use cases include discovering new galaxies and classifying celestial objects by analyzing the changing parameters of outer space. Nevertheless, to fully harness its potential, robust policy and regulatory initiatives are required to bolster real-time data sharing, not just within the scientific community but also between nations. Other policy considerations include investment in research, the promotion of citizen science initiatives, and education and funding for astronomers. A critical aspect is improving key computing infrastructure, which is crucial for processing the vast amounts of data generated by astronomical observatories.
References
- https://mindy-support.com/news-post/astronomers-are-using-ai-to-make-discoveries/
- https://www.space.com/citizen-scientists-artificial-intelligence-galaxy-discovery
- https://www.sciencedaily.com/releases/2024/03/240325114118.htm
- https://phys.org/news/2023-03-artificial-intelligence-secret-equation-galaxy.html
- https://www.space.com/astronomy-research-ai-future