#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to feature a poster with the words "I Told Modi" has gone viral, improperly connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is presented as a mockery of Operation Sindoor, the counterterrorism operation India initiated in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this misinformation underscores how crucial it is to verify content before sharing it online.
Claim:
A man can be seen changing a poster that says "Tell Modi" to one that says "I Told Modi" in a widely shared viral video. This video allegedly makes reference to Operation Sindoor in India, which was started in reaction to the Pahalgam terrorist attack on April 22, 2025, in which militants connected to The Resistance Front (TRF) killed 26 civilians.


Fact check:
On further research, we found the original post from Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation in the video, we determined that it has been modified with AI-generated content, presenting false or misleading information that does not reflect real events.

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster is part of a real display or public demonstration is untrue. The text has been digitally altered to deceive viewers, and the video is manipulated footage from a Marvel film. The content should be disregarded, as it has been identified as false information.
- Claim: A viral video shows a "Tell Modi" poster being changed to one reading "I Told Modi", referencing Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals rapidly amplifies the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as the sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation on different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation are sent by a small percentage of a given user base. The researchers collected 10 months of data from Twitter (now X), amounting to 2,397,388 tweets flagged as having low credibility, sent by 448,103 users, along with details on who was sending them. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] It does not take many influencers to sway the beliefs and opinions of large numbers; this is the impact of what the researchers describe as superspreaders.
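The concentration the study describes can be illustrated with a short sketch. The data below is hypothetical, not the study's dataset; the idea is simply to count flagged posts per account and compute the share contributed by the most active accounts.

```python
from collections import Counter

def top_k_share(post_authors, k):
    """Fraction of all posts contributed by the k most active accounts."""
    counts = Counter(post_authors)
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(k))
    return top / total

# Hypothetical flagged-post log: one author id per low-credibility post.
posts = ["a"] * 60 + ["b"] * 25 + ["c"] * 10 + ["d", "e", "f", "g", "h"]
print(top_k_share(posts, 2))  # share held by the two most active accounts -> 0.85
```

Applied to a real flagged-content log, a sharply rising top-k share is exactly the superspreader pattern the Indiana University study reports.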
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "Disinformation Dozen", a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large amount of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that does not originate with them; such content is often propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users are not necessarily the ones creating or deliberately popularising the misinformation, but their broad reach exposes many more people to it. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance can considerably influence stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can act as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions of individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investment advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and thereby shape people's financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to acquire traction and reach a broader audience. They can lead to the construction of echo chambers, in which users are exposed to a narrow variety of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There are reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
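One simple behavioural signal that distinguishes automated accounts is unnaturally regular posting. The sketch below is a minimal heuristic, not a production bot detector: it flags an account whose inter-post intervals are near-uniform. The threshold and timestamps are illustrative assumptions, and legitimate scheduling tools would also trip it.

```python
from statistics import mean, pstdev

def looks_automated(timestamps, cv_threshold=0.1):
    """Flag an account whose posting intervals are near-uniform.

    Human posting gaps vary widely; a coefficient of variation
    (stdev / mean) close to zero suggests scheduled automation.
    Heuristic only -- real detectors combine many such signals.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return False
    cv = pstdev(gaps) / mean(gaps)
    return cv < cv_threshold

bot_like = [0, 60, 120, 180, 240, 300]      # posts exactly every 60 seconds
human_like = [0, 45, 400, 415, 2000, 2600]  # irregular, bursty gaps
print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

In practice platforms combine timing regularity with account age, follower graphs, and content similarity before acting on an account.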
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it might attempt to provoke strong reactions or to mould public opinion. Netizens must question the credibility of information, verify its sources, and develop the cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check and debunk any misinformation. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or otherwise deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, report suspect content, and implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
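One building block for the repetitive-posting detection recommended above is near-duplicate text matching. The sketch below uses word shingles and Jaccard similarity; the example posts and the 0.7 threshold are illustrative assumptions, and production systems use more robust pipelines (e.g. MinHash at scale).

```python
def shingles(text, n=3):
    """Set of n-word shingles after simple lowercasing normalisation."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two posts' shingle sets (0.0 = disjoint, 1.0 = identical)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

post1 = "breaking news the poster was changed to mock the operation"
post2 = "BREAKING news the poster was changed to mock the operation today"
print(jaccard(post1, post2) > 0.7)  # near-duplicate copies score highly -> True
```

Flagging clusters of accounts that repeatedly post text above such a similarity threshold is one way a platform could surface coordinated amplification for human review.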
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html

Executive Summary:
A video circulating on social media claims to show the aftermath of Iran's missile strikes on Israel, depicting destruction, damaged infrastructure, and panic among civilians. After digital verification, visual inspection, and frame-by-frame analysis, we have determined that the video is fake: it is composed of AI-generated clips and is not related to any real incident.

Claim:
The viral video claims to show the destruction of parts of Israel following a missile attack launched by Iran. The footage appears current and depicts significant destruction of buildings and widespread chaos in the streets.

Fact check:
We conducted research on the viral video to determine whether it was AI-generated. We broke the video into individual still frames and, upon closely examining them, found that several frames had oddly shaped visual features, abnormal body proportions, and flickering movements that do not occur in real footage. We then ran several still frames through reverse image search to see if they had appeared before. The results revealed that several clips in the video had appeared previously, in separate and unrelated circumstances, which indicates that they are neither recent nor original.
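Reverse image matching of the kind used here typically relies on perceptual hashing, which scores visually similar frames as near-identical even after re-encoding. Below is a minimal average-hash sketch over toy grayscale pixel grids; it assumes frames have already been decoded to pixel arrays (real pipelines use libraries such as OpenCV, and larger hash grids).

```python
def average_hash(pixels):
    """Perceptual hash of a grayscale frame: one bit per pixel,
    set when the pixel is brighter than the frame's mean brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two toy 2x2 "frames": the second is a slightly re-encoded copy of the first.
frame_a = [[10, 200], [15, 220]]
frame_b = [[12, 198], [14, 225]]
print(hamming(average_hash(frame_a), average_hash(frame_b)))  # 0 -> a match
```

Hashing every extracted frame and comparing against an index of previously seen footage is how a fact-checker can spot recycled clips at scale.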

While examining the Instagram profile, we noticed that the account frequently shares visually dramatic AI content that appears digitally created. Many earlier posts from the same page include scenes that are unrealistic, such as wrecked aircraft in desolate areas or buildings collapsing in unnatural ways. In the current video, for instance, the fighter jets shown have multiple wings, which is not technically or aerodynamically possible in real life. The profile’s bio, which reads "Resistance of Artificial Intelligence," suggests that the page intentionally focuses on sharing AI-generated or fictional content.

We also ran the viral post through Tenorshare.AI for deepfake detection, and the result indicated a 94% likelihood of AI generation. All findings from our research established that the video is synthetic and unrelated to any event occurring in Israel, thereby debunking a false narrative propagated on social media.

Conclusion:
Our research found that the video is fake: it contains AI-generated imagery and is not related to any real missile strike or destruction in Israel. The content appears designed to fuel panic and misinformation amid already heightened geopolitical tension. We urge viewers not to share this unverified material and to rely on trusted sources. During sensitive international developments, the dissemination of fake imagery can promote fear, confusion, and misinformation on a global scale.
- Claim: Real Footage of Iran’s Missile Strikes on Israel
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Data breaches have become one of the rising issues in cyberspace; they result in personal data making its way to cybercriminals who misuse it. As netizens, it is our digital responsibility to be cognizant of our own data and the data of our organisations. The increase in internet and technology penetration has moved people into cyberspace at a rapid pace; however, awareness needs to be inculcated alongside it to maximise the data safety of netizens. The recent AIIMS cyber breach has many organisations worried about their cyber safety and security. According to the HIPAA Journal, 66% of healthcare organisations reported ransomware attacks on them. Data management and security is a prime concern for clients across industries and is now a growing worry for many. Data is primarily classified into three broad categories-
- Personally Identifiable Information (PII) - Any representation of information that permits the identity of the individual to whom the information applies to be reasonably inferred by either direct or indirect means.
- Non-Public Information (NPI) - The personal information of an individual that is not and should not be available to the public. This includes Social Security Numbers, bank information, other personal identifiable financial information, and certain transactions with financial institutions.
- Material Non-Public Information (MNPI) - Data relating to a company that has not been made public but could have an impact on its share price. It is against the law for holders of nonpublic material information to use the information to their advantage in trading stocks.
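To make the three categories concrete, here is a hedged sketch of how an organisation might tag record fields by sensitivity. The field names and the mapping are illustrative assumptions only; a real classification programme is driven by policy review and data discovery tooling, not simple name matching.

```python
# Illustrative field-to-category map (hypothetical field names).
CLASSIFICATION = {
    "name": "PII",
    "email": "PII",
    "ssn": "NPI",
    "bank_account": "NPI",
    "unreleased_earnings": "MNPI",
}

def classify_record(record):
    """Tag each field of a record with its data-sensitivity category,
    defaulting to UNCLASSIFIED so gaps are visible for manual review."""
    return {field: CLASSIFICATION.get(field, "UNCLASSIFIED")
            for field in record}

record = {"name": "A. Kumar", "ssn": "XXX-XX-1234", "notes": "follow up"}
print(classify_record(record))
```

Tagging like this is what lets breach-response teams quickly estimate severity: a leak of NPI or MNPI fields demands a very different response from a leak of already-public data.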
This classification allows the industry to manage and secure data effectively and efficiently, and at the same time allows users to understand the uses of their data and the severity of a potential breach. Organisations process data that combines the above-mentioned classifications, which makes any data breach especially critical. Coming back to the AIIMS data breach: it is a known fact that AIIMS is also an educational and research institution. So one might assume that the reason for an attack on AIIMS could be either to exfiltrate patient data or to get hands on the R&D data, including research-related intellectual property. If we postulate the latter, we could also imagine that other institutes of higher learning such as IITs, IISc, ISI, IISERs, IIITs, NITs, and some of the significant state universities could also be targeted. In 2021, the Ministry of Home Affairs, through the Ministry of Education, sent a directive to IITs and many other institutes to take certain cyber security measures and to create SOPs establishing efficient data management practices. The following sectors are critical in terms of data protection-
- Health sector
- Financial sector
- Education sector
- Automobile sector
These sectors are generally targeted by bad actors, and data breaches from these sectors often result in cyber crimes, as the data is soon made available on the dark web. These institutions need to practise compliance like any corporate house, as the end user here is the netizen, whose data is of utmost importance to protect. Organisations today need to keep pace with advancements in cyberspace to identify the shortcomings and vulnerabilities they may face and subsequently create safeguards for them. The AIIMS breach is an example to learn from so that we can protect other organisations from such cyber attacks. To demonstrate strong and impenetrable cyber security, every organisation should be able to answer these questions-
- Do you have a centralized cyber asset inventory?
- Do you have human resources that are trained to model possible cyber threats and cyber risk assessment?
- Have you ever undertaken a business continuity and resilience study of your institutional digitalized business processes?
- Do you have a formal vulnerability management system that enumerates vulnerabilities in your cyber assets and a patch management system that patches freshly discovered vulnerabilities?
- Do you have a formal configuration assessment and management system that checks the configuration of all your cyber assets and security tools (firewalls, antivirus management, proxy services) regularly to ensure they are most securely configured?
- Do you have a segmented network such that your most critical assets (servers, databases, HPC resources, etc.) are in a separate, access-controlled network that only people with proper permission can access?
- Do you have a cyber security policy that spells out the policies regarding the usage of cyber assets, protection of cyber assets, monitoring of cyber assets, authentication and access control policies, and asset lifecycle management strategies?
- Do you have a business continuity and cyber crisis management plan in place which is regularly exercised like fire drills so that in cases of exigencies such plans can easily be followed, and all stakeholders are properly trained to do their part during such emergencies?
- Do you have multi-factor authentication for all users implemented?
- Do you have a supply chain security policy for applications that are supplied by vendors? Do you have a vendor access policy that disallows providing network access to vendors for configuration, updates, etc?
- Do you have regular penetration testing of the cyberinfrastructure of the organization with proper red-teaming?
- Do you have a bug-bounty program for students who could report vulnerabilities they discover in your cyber infrastructure and get rewarded?
- Do you have an endpoint security monitoring tool mandatory for all critical endpoints such as database servers, application servers, and other important cyber assets?
- Do you have a continuous network monitoring and alert generation tool installed?
- Do you have a comprehensive cyber security strategy that is reflected in your cyber security policy document?
- Do you regularly receive cyber security incident updates (covering small, medium, and high severity incidents, network scanning, etc.) from your cyber security team, so that top management is aware of the situation on the ground?
- Do you have regular cyber security skills training for your cyber security team and your IT/OT engineers and employees?
- Does your top management show adequate support and hold the cyber security team accountable on a regular basis?
- Do you have a proper and vetted backup and restoration policy and practice?
If an organisation has definite answers to these questions, it is safe to say that it has strong cyber security. These questions should be taken not as a comparison but as a checklist for organisations to stay up to date on the technical measures and policies related to cyber security. A strong cyber security posture does not drive cyber security risk to zero, but it reduces the risk and improves the fighting chance. Further, if proper risk assessment is carried out regularly and high-risk cyber assets are properly protected, then the damage resulting from cyber attacks can be contained to a large extent.
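The checklist above lends itself to a simple self-assessment. The sketch below is a hypothetical scoring aid, not an official or weighted maturity model: the abbreviated question keys are assumptions, and real assessments would weight items by risk rather than counting them equally.

```python
def security_posture(answers):
    """Return the fraction of checklist items answered 'yes'."""
    yes = sum(1 for a in answers.values() if a)
    return yes / len(answers)

# Illustrative subset of the checklist, keyed by shorthand labels.
answers = {
    "asset_inventory": True,      # centralised cyber asset inventory
    "threat_modelling": True,     # trained threat-modelling staff
    "mfa_all_users": False,       # multi-factor authentication rollout
    "backup_policy": True,        # vetted backup and restoration practice
}
score = security_posture(answers)
print(f"{score:.0%} of checklist items satisfied")
```

Tracking such a score over successive audits gives management a rough trend line, while the unanswered items form the remediation backlog.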