#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to feature a poster with the words "I Told Modi" has gone viral, falsely connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip allegedly mocks Operation Sindoor, the counterterrorism operation India initiated in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this misinformation underscores how crucial it is to verify information before sharing it online.
Claim:
In a widely shared viral video, a man can be seen changing a poster that says "Tell Modi" to one that says "I Told Modi". The video is claimed to reference Operation Sindoor, the operation India launched in reaction to the Pahalgam terrorist attack of April 22, 2025, in which militants linked to The Resistance Front (TRF) killed 26 civilians.


Fact check:
Upon further research, we found the original post on Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation, we determined that the video has been modified with AI-generated content and presents false or misleading information that does not reflect real events.
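As an illustration of how such a check can be scripted, here is a minimal sketch that submits a video file to an AI-content-detection REST endpoint. The URL, field names, and response schema below are invented for illustration; they are not Hive Moderation's actual API.

```python
# Requires: pip install requests
import requests

# Hypothetical endpoint and response schema, for illustration only;
# this is NOT Hive Moderation's real API.
DETECTION_URL = "https://api.example.com/v1/ai-content-detection"

def check_video(path, api_key):
    """Upload a video and return the service's AI-generated-content score."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},
            timeout=120,
        )
    response.raise_for_status()
    result = response.json()
    # Assume the service returns a score in [0, 1]; higher means the
    # content is more likely AI-generated or AI-modified.
    return result["ai_generated_score"]

# score = check_video("viral_clip.mp4", api_key="...")
# print("Likely AI-manipulated" if score > 0.8 else "No strong signal")
```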

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster is part of a real public display is untrue. The video is manipulated footage from a Marvel film whose on-screen text has been digitally altered to deceive viewers. The content has been identified as false information and should be disregarded.
- Claim: A viral video shows a man changing a “Tell Modi” poster to one reading “I Told Modi”, mocking Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: their probability can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. The trend has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
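To make the "statistical next-word prediction" point concrete, here is a deliberately tiny toy sketch, a bigram model that picks the next word seen most often in its training text. Production LLMs are neural networks over subword tokens, not lookup tables, but the principle is the same: the output is driven by co-occurrence statistics, not by any notion of truth.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "vast internet data".
corpus = (
    "the model predicts the next word "
    "the model has no understanding "
    "the next word is chosen by frequency"
).split()

# Count how often each word follows each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The prediction reflects only frequency in the training text; the model
# has no way to check whether the continuation is factually correct.
print(predict_next("the"))   # e.g. "model"
print(predict_next("word"))  # e.g. "the"
```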
Reports on TechCrunch note that when users asked AI models for short answers, hallucinations increased by up to 30%, and a study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This behaviour is not limited to one Large Language Model; similar models such as DeepSeek show it too. Even more concerning are hallucinations in multimodal models like those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only looks real but can also feed fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other high-stakes moments.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. Developers are, however, aware of the problem and are actively charting ways to reduce its probability. Some of these are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (see the sketch after this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is narrower and their training data is better curated.
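Below is a minimal sketch of the RAG pattern described above, assuming a toy in-memory document store and a simple bag-of-words similarity. A real system would use dense embeddings and a vector database, and would send the grounded prompt to an actual LLM where the placeholder return stands.

```python
from collections import Counter
import math

# Toy "trusted source" store; a real system would use a vector database.
documents = [
    "The viral video was altered with AI-generated content.",
    "The film studio released the original promotional clip.",
    "Fact-checkers traced the clip to an official studio post.",
]

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    overlap = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def retrieve(query, k=2):
    """Rank stored documents by similarity to the query; return the top k."""
    q = bag_of_words(query)
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:k]

def answer(query):
    # Anchor the prompt in retrieved evidence instead of relying on the
    # model's internal (and possibly hallucinated) knowledge.
    context = "\n".join(retrieve(query))
    prompt = (f"Answer using ONLY the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return prompt  # placeholder: a real system would pass this to an LLM

print(answer("Was the viral video altered?"))
```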
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations
Introduction
The ongoing armed conflict between Israel and Hamas is in the news all across the world. The latest escalation was triggered by unprecedented attacks against Israel by Hamas militants on 7 October, which killed thousands of people; Israel has since launched a massive counter-offensive against the Islamic militant group. Amid the war, bad information and propaganda are spreading on various social media platforms: tech researchers have detected a network of 67 accounts that posted false content about the war and received millions of views. The European Commission has sent a letter to Elon Musk directing X to remove illegal content and disinformation, failing which penalties can be imposed, and has formally requested information from several social media giants on their handling of content related to the Israel-Hamas war. This widespread disinformation distorts perceptions of the war, affects people around the world, and erodes public goodwill. Bad actors weaponise information in this way, fuelling online hate, terrorism and extremism and deepening political polarisation with hateful content on social media. Online misinformation about the war incites extremism, violence, hate and propaganda-based ideologies, and the information environment surrounding the conflict is flooded with fake narratives and videos.
Response of social media platforms
As online misinformation and violent content surrounding the war proliferate, social media companies face hard questions about content moderation and other policy shifts. Notably, Instagram, Facebook and X (formerly Twitter) all have features that let users decide what content they want to view and that limit potentially sensitive content from appearing in search results.
Experts say it is of paramount importance to establish control in this regard and to define what is permissible online and what is not. Doing so requires expertise to assess each situation and, most importantly, robust content moderation policies.
During wartime, people who are aggrieved or provoked are often targeted by internet disinformation that blends ideological beliefs with conspiracy theories and hatred. This is not a new phenomenon: disinformation-spreading groups have often emerged and become active during wars and emergencies, spreading propaganda-based ideologies and influencing society at large through misrepresented facts and planted stories. Social media has made it easier than ever to post user-generated content without proper moderation. Fighting disinformation and misinformation is therefore a shared responsibility: tech companies, users, and government guidelines and policies must collectively define and follow the mechanisms to do so.
Digital Services Act (DSA)
The newly enacted EU law, the Digital Services Act (DSA), pushes large online platforms to prevent posts containing illegal content and places limits on targeted advertising. The DSA enables users to challenge illegal online content, requires platforms to counter misinformation and disinformation, and ensures more transparency over what users see on the platforms. Its rules cover everything from content moderation and user privacy to transparency in operations. A landmark piece of EU legislation for moderating online platforms, the DSA subjects large tech platforms to content-related regulation and requires them to ensure a safer online environment overall.
Indian Scenario
The Indian government introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, updated in 2023, which provide for the establishment of a "fact check unit" to identify false or misleading online content. The Digital Personal Data Protection Act, 2023 has also been enacted, which aims to protect personal data. The proposed Digital India Act, expected to be tabled in Parliament, would replace the current Information Technology Act, 2000. The upcoming bill can be seen as future-ready legislation to strengthen India's cybersecurity posture: it is intended to deal comprehensively with privacy, data protection, and the fight against growing cybercrime in the evolving digital landscape, ensuring a safe digital environment. Other entities, including civil society organisations, are also actively engaged in fighting misinformation and spreading awareness about safe and responsible Internet use.
Conclusion:
The widespread disinformation amid the Israel-Hamas war shows how user-generated content on social media can create an illusion of reality. Misleading posts proliferate, and the misuse of advanced AI technologies makes it easier than ever for bad actors to create synthetic media. At the same time, social media has connected us like never before: with billions of active users around the globe, it offers real conveniences and opportunities to individuals and businesses, and it is its misuse that demands attention. Platforms and regulatory authorities need to be vigilant and active in clearly defining and improving policies for content regulation, so as to effectively prevent bad actors from misusing social media for their own ends. Users, for their part, have a responsibility to use these platforms responsibly: flag and report misleading or misinformative content and always verify claims against authentic sources. With the increasing penetration of social media and the internet, misinformation remains a global issue that must be addressed through strict policies and best practices, helping to create a safer Internet environment for everyone.
References:
- https://abcnews.go.com/US/experts-fear-hate-extremism-social-media-israel-hamas-war/story?id=104221215
- https://edition.cnn.com/2023/10/14/tech/social-media-misinformation-israel-hamas/index.html
- https://www.nytimes.com/2023/10/13/business/israel-hamas-misinformation-social-media-x.html
- https://www.africanews.com/2023/10/24/fact-check-misinformation-about-the-israel-hamas-war-is-flooding-social-media-here-are-the//
- https://www.theverge.com/23845672/eu-digital-services-act-explained

Executive Summary:
A viral online image claims to show Arvind Kejriwal, Chief Minister of Delhi, welcoming Elon Musk during his visit to India to discuss Delhi’s administrative policies. However, the CyberPeace Research Team has confirmed that the image is a deepfake created using AI technology. The assertion that Elon Musk visited India to discuss Delhi’s administrative policies is false and misleading.


Claim
A viral image claims that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search using the InVID reverse image search tool. The search traced the image back to several unrelated sources featuring Arvind Kejriwal and Elon Musk separately, but none depicted them together or involved any such event. The viral image displayed visible inconsistencies, such as lighting disparities and unnatural blending, which prompted further investigation.
We analyzed the image using advanced AI detection tools such as TrueMedia.org and Hive's AI detection tool. The analysis confirmed with 97.5% confidence that the image was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the merging of facial features and the alignment of clothing and background, which were artificially generated.
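For readers curious how automated checks can flag this kind of blending inconsistency, below is a minimal sketch of one classical forensic technique, Error Level Analysis (ELA). This is an illustration only, not the method TrueMedia.org or Hive use, and the file path is a placeholder. ELA re-saves a JPEG at a known quality and amplifies the difference with the original; regions edited after the original save are often compressed differently and stand out brighter.

```python
# Requires: pip install pillow
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    """Re-save the image as JPEG and amplify its difference from the original.

    Regions pasted or generated after the original save often carry
    different compression artifacts, so they appear brighter in the result.
    """
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)  # temp re-save
    resaved = Image.open("resaved.jpg")

    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

# "suspect.jpg" is a placeholder path for the image under investigation.
error_level_analysis("suspect.jpg").save("ela_result.png")
```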




Moreover, a review of official statements and credible reports revealed no record of Elon Musk visiting India to discuss Delhi’s administrative policies. Neither Arvind Kejriwal’s office nor Tesla or SpaceX made any announcement regarding such an event, further debunking the viral claim.
Conclusion:
The viral image claiming that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies is a deepfake. Reverse image search and AI detection tools confirm the image was manipulated using AI technology. Additionally, there is no supporting evidence from any credible source. The CyberPeace Research Team confirms the claim is false and misleading.
- Claim: Arvind Kejriwal welcomed Elon Musk to India to discuss Delhi’s administrative policies, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading