# FactCheck: AI-Generated War Video Falsely Linked to Israel-Iran Tensions Goes Viral
Executive Summary
A video is being widely shared on social media, linked to the ongoing tensions between Israel and Iran. The clip shows multiple fighter jets flying across the sky while massive flames appear to rise from tall buildings below. The visuals are dramatic and alarming, creating the impression of a large-scale military strike. Users sharing the video claim that after Israel carried out an attack, Iran launched a retaliatory strike on Israel, and that the viral footage captures the aftermath of this counterattack. However, research conducted by CyberPeace found the claim to be misleading: the viral video is not authentic but AI-generated.
Claim
On the social media platform Facebook, a user shared the viral video with the caption: “Iran has also carried out a retaliatory attack on Israel.”
(Post link and archive link provided above.)

Fact Check
Upon closely examining the video, we noticed several irregularities in the visuals and motion patterns, which raised suspicion that the footage may have been generated using artificial intelligence. To verify this, we analyzed the video using the AI detection tool developed by Hive Moderation. According to the analysis report, there is a 62 percent likelihood that the viral video is AI-generated.

As part of further verification, we also scanned the video using Sightengine. The results indicated an even stronger probability, suggesting that the video is 99 percent AI-generated.
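The two detection scores above can be read as independent probability estimates. As a minimal illustration of how such scores might be combined in a verification workflow (the aggregation rule and function names are our own assumptions, not the methodology of Hive Moderation or Sightengine):

```python
# Illustrative sketch only: the detector scores are taken from this
# fact check (Hive Moderation: 62%, Sightengine: 99%); the aggregation
# rule and threshold are hypothetical examples.

def assess_ai_generated(scores, flag_threshold=0.6):
    """Flag content as likely AI-generated if any detector's
    probability meets the threshold, and report the strongest signal."""
    strongest = max(scores, key=scores.get)
    flagged = any(p >= flag_threshold for p in scores.values())
    return {
        "likely_ai_generated": flagged,
        "strongest_detector": strongest,
        "strongest_score": scores[strongest],
    }

verdict = assess_ai_generated({"hive_moderation": 0.62, "sightengine": 0.99})
print(verdict)
```

In practice, fact-checkers treat such scores as one signal among several, alongside visual inspection and source tracing, rather than as conclusive proof on their own.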

Conclusion
Our research confirms that the viral video does not depict a real military attack. It is AI-generated content being falsely shared in the context of Israel-Iran tensions.
Related Blogs

I. Introduction: Why These Amendments Have Been Suggested
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic information and deepfakes. These reforms stem from the pressing need to govern risks in the digital ecosystem rather than being routine reform.
The Emergence of the Digital Menace
Generative AI tools have made it possible in recent years to produce highly realistic images, videos, audio, and text. Such synthetic media have been abused to depict people in situations they were never in or making statements they never made. The generative AI market is projected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400.00bn by 2031. Tight regulatory controls are therefore necessary to curb the high prevalence of harm in the Indian digital world.
The Gap in Law and Institution
The IT Rules, 2021, did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation, and violation of privacy, intermediaries were under no explicit obligations regarding synthetic media. This left an enforcement loophole, particularly since AI-generated content could circumvent older moderation systems. These amendments bring India closer to international standards, such as the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India's constitutional and digital-ecosystem context.
II. Explanation of the Amendments
The 2025 amendments introduce five distinct changes to the existing IT Rules framework, addressing various areas of synthetic media regulation.
A. Definitional Clarification: Introducing "Synthetically Generated Information"
Rule 2(1)(wa) Amendment:
The amendments provide an inclusive definition of "synthetically generated information": information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner such that it may reasonably be perceived as genuine. This definition is deliberately broad: it covers not only deepfakes in the strict sense but any synthetic media that has undergone algorithmic manipulation so as to appear authentic.
Expansion of Legal Scope:
Rule 2(1A) also makes clear that any reference to "information" in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be read to include synthetically generated information. This is a pivotal interpretative safeguard: it prevents intermediaries from arguing that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of real events.
B. Safe Harbour Protection and Content Removal Requirements
Rule 3(1)(b) Amendment: Safe Harbour Clarification
The amendments add a proviso to Rule 3(1)(b) clarifying that the removal or disabling of access to synthetically generated information (or any information falling within the specified categories), carried out by an intermediary in good faith as part of reasonable efforts or on receipt of a complaint, shall not be treated as a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is particularly significant because it shields intermediaries from liability when they moderate synthetic content before any court ruling or government notice.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable Synthetic Content Creation
The amendments establish a new due diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must:
  - be visibly displayed, or made audible, in a prominent manner on or within the synthetically generated information;
  - cover at least 10% of the surface of the visual display or, in the case of audio content, play during the initial 10% of its duration; and
  - immediately identify the information as synthetically generated content created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary must not enable modification, suppression, or removal of such label, permanent unique metadata, or identifier, by whatever name called.
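The 10% thresholds described above can be expressed as simple calculations. The sketch below is illustrative only; the function names and example inputs are our own assumptions, not an official compliance tool:

```python
# Illustrative calculation of the labelling thresholds described above:
# at least 10% of the visual display area, or the initial 10% of an
# audio clip's duration. Function names are our own assumptions.

def min_label_area_px(width_px, height_px, fraction=0.10):
    """Minimum label area in pixels for a visual display."""
    return width_px * height_px * fraction

def audio_label_window_s(duration_s, fraction=0.10):
    """(start, end) in seconds within which an audio label must play."""
    return (0.0, duration_s * fraction)

# e.g. a 1920x1080 video frame and a 60-second audio clip
print(min_label_area_px(1920, 1080))   # required label coverage in pixels
print(audio_label_window_s(60.0))      # the label must play in this window
```

For a 1920x1080 frame this works out to just over 200,000 pixels of label coverage, and for a 60-second clip the label must be audible within the first 6 seconds.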
D. Significant Social Media Intermediaries: Pre-Publication Verification Responsibilities
The amendments introduce a three-step verification mechanism under Rule 4(1A) for Significant Social Media Intermediaries (SSMIs): before any content is displayed, uploaded, or published on their computer resource, three steps must be followed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically generated. This places the initial burden on users.
Step 2 - Technical Verification: To check that a declaration is truthful, SSMIs must deploy reasonable technical measures, such as automated tools or other mechanisms. This duty is contextual, depending on the nature, format, and source of the content; it does not let intermediaries off the hook, but recognises that not every type of content can be verified to the same standard.
Step 3 - Prominent Labelling: Where synthetic origin is established by user declaration or technical verification, SSMIs must display a prominent notice or label visible to users before publication.
The amendments strengthen accountability by providing that an intermediary will be deemed to have failed its due diligence obligations where it is established that it knowingly permitted, promoted, or failed to act upon synthetically generated information in contravention of these requirements. This introduces a knowledge element, so intermediaries cannot plead ignorance of violations they were aware of.
An explanation clause makes clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations and must not allow synthetic content to be published without an adequate declaration or label. This removes any ambiguity about the intermediary's role in verifying declarations.
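The three-step flow above can be sketched as a simple decision function. This is a hedged illustration of the logic as we read it; the field names, the detector score, and the 0.5 threshold are our own assumptions, not part of the Rules' text:

```python
# Hedged sketch of the three-step Rule 4(1A) flow described above.
# Parameter names and the threshold value are illustrative assumptions.

def pre_publication_check(user_declared_synthetic, detector_score,
                          detector_threshold=0.5):
    """Decide whether an upload must carry a prominent 'synthetically
    generated' label before display or publication."""
    # Step 1: the user declaration places the initial burden on the user.
    if user_declared_synthetic:
        return {"label_required": True, "basis": "user_declaration"}
    # Step 2: reasonable technical verification backs up the declaration,
    # e.g. via an automated detector score.
    if detector_score >= detector_threshold:
        return {"label_required": True, "basis": "technical_verification"}
    # Step 3 (prominent labelling) applies only when synthetic origin
    # is established by either of the steps above.
    return {"label_required": False, "basis": "no_synthetic_signal"}

# An undeclared upload that an automated detector flags strongly
print(pre_publication_check(False, 0.9))
```

The ordering matters: an honest user declaration short-circuits the check, while technical verification catches undeclared synthetic content, which is the scenario the deeming provision is aimed at.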
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability
The amendments commendably balance two extreme regulatory postures, neither prohibiting synthetic media outright nor allowing it to go unchecked. They recognise the legitimate uses of synthetic media in entertainment, education, research, and artistic expression, adopting a transparency-and-traceability mandate that preserves innovation while ensuring accountability.
- Express Recognition of Intermediary Liability and a Knowledge-Based Standard
Rule 4(1A) introduces a highly significant deeming provision: where an intermediary knowingly permits or fails to act upon synthetic content in violation of the rules, it is deemed to have failed the due diligence requirements. This closes the loophole of wilful blindness, under which intermediaries could claim ignorance of violations. The scienter standard also encourages genuine investment in detection tools and moderation mechanisms, since platforms with sound systems remain protected even when those tools occasionally fail to catch violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of "synthetically generated information" and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the confusion of the previous regulatory framework. Rather than leaving platforms to navigate conflicting case law or regulatory direction, the amendments provide specific definitional boundaries. The deliberately broad formulation (artificially or algorithmically created, generated, modified, or altered) ensures the framework cannot be evaded through semantic games over what counts as truly synthetic content versus a minor algorithmic alteration.
- Liability Protection that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment expressly protects intermediaries that voluntarily remove synthetic content without waiting for a court order or government notification. This is an important incentive scheme: without such protection, platforms might rationally adopt a passive compliance posture, removing content only under pressure from an external authority. With it, they are encouraged to implement robust self-regulation, making them more effective at keeping users safe from dangerous synthetic media.
IV. Conclusion
The 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules offer a structured, transparent, and accountable approach to curbing the rising challenges of synthetic media and deepfakes. They address the regulatory and interpretative gaps that have long existed around what counts as synthetically generated information, the scope of intermediary liability, and mandatory labelling and metadata requirements. The safe harbour protection encourages proactive moderation, and the scienter-based liability rule prevents intermediaries from escaping liability when they are aware of non-compliance yet tolerate it. Pre-publication verification by Significant Social Media Intermediaries assigns responsibility to users and due diligence obligations to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, make the framework more transparent through clear definitions, promote responsible platform conduct, and position India among the emerging standard-setters in synthetic media regulation. Together, they strengthen authenticity, user protection, and transparency across India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide

Executive Summary
A video of senior Congress leader Shashi Tharoor is circulating widely on social media, allegedly showing him praising Pakistan's diplomatic stance over the ICC T20 World Cup issue. Many users are sharing the clip believing it to be genuine. However, research by CyberPeace found the claim to be false: the viral video is a deepfake, and Tharoor himself has described it as fabricated and fake.
Claim
A Facebook page named “Vok Sports” shared the video on February 11, 2026, claiming that Tharoor praised Pakistan. In the viral clip, he is purportedly heard saying in English that Pakistan’s diplomatic handling of the matter was “brilliant” and that it had outmanoeuvred the Indian cricket board, adding that good diplomacy could make a weak nation appear powerful.
The video was widely shared by social media users as authentic. (Archive links and post details provided.)
Fact Check
To verify the claim, we first scanned Tharoor’s official X (formerly Twitter) handle. We found a post dated February 12 in which he responded to a Pakistani journalist who had shared the video. Tharoor stated that the clip was AI-generated “fake news,” adding that neither the language nor the voice in the video was his.

A reverse image search using Google Lens led us to a video uploaded on February 10, 2026, by India Today on its official YouTube channel. The visuals in this original video exactly match those in the viral clip showing Tharoor speaking to the media. However, in the original footage Tharoor was speaking in Hindi about the controversy surrounding the T20 World Cup. He said that politicians should conduct politics, diplomats should handle diplomacy, and cricketers should focus on the game, expressing hope that cricket would move forward with the match; at no point did he praise Pakistan or the Pakistan Cricket Board. This indicates that the audio in the viral clip was manipulated and replaced.
- https://www.youtube.com/watch?v=GkA1mLlAT8Q&t=3s

To further verify the authenticity of the video, several AI detection tools were used. Analysis through Aurigin.ai suggested a 78 percent probability that the audio in the viral clip was AI-generated.

Conclusion
CyberPeace's research confirms that the viral video is a deepfake. Tharoor did not praise Pakistan's diplomatic stance during the T20 World Cup controversy, and the circulating clip has been digitally manipulated.

Executive Summary
A misleading advertisement circulating on social media offers attractive prizes such as the iPhone 15, AirPods, and smartwatches, purportedly from the Indian e-commerce platform 'Myntra'. This "Myntra - Festival Gifts" scam lures unsuspecting users through a series of redirects and fake interactions designed to compromise their personal information and devices. It is important to stay vigilant against such enticing offers. In this report, the Research Wing of CyberPeace explains the sequence of events triggered when the link is clicked, aiming to raise awareness and empower users to recognise and avoid deceptive offers designed to scam them.
False Claim
The widely shared WhatsApp message claims that Myntra is offering a range of high-value prizes, including the latest iPhone 15, AirPods, and various smartwatches, as part of a Festival Gift promotion. The campaign invites users to click the link provided and complete a short quiz to become eligible for a prize.

The Deceptive Scheme
- The link in the social media post is tailored to work only on mobile devices; once clicked, users are taken through a chain of redirects.
- On the landing page, users are greeted with Myntra's "Big Fashion Festival" branding and logo, which gives an impression of authenticity.
- Next, a simple quiz asks basic questions about the user's shopping experience with Myntra, their age, and gender.
- At the bottom of the quiz, a comment section displays fabricated comments from users who have supposedly received prizes, making the offer look genuine.
- After completing the quiz, users are presented with a Spin-to-Win wheel for a chance to win a prize.
- After "winning", a congratulatory message announces that the user has won an iPhone 15.
- The final step requires the user to share the campaign over WhatsApp in order to claim the prize.
Analyzing the Fraudulent Campaign
- The use of Myntra's branding and the promise of exclusive, high-value prizes are designed to attract users' interest.
- The fake comments and social proof elements aim to create a false sense of legitimacy and widespread participation, making the offer seem more credible.
- The series of redirects, quizzes, and Spin-to-Win mechanics are tactics to keep users engaged and increase the likelihood of them falling for the scam.
- The final step, sharing the post on WhatsApp, is how the scammers spread the campaign further and compromise more victims. By forwarding the link, users become unwitting accomplices, helping the scammers reach an even larger audience.
- The primary objectives of such scams are to harvest users' personal information and potentially gain access to their devices. By luring users with the promise of exclusive gifts and creating a false sense of legitimacy, the scammers aim to exploit user trust and compromise their data, leading to potential identity theft, financial fraud, or the installation of potentially unwanted software.
- We also cross-checked the offer: as of now, no credible source or official notification confirms any such promotion by Myntra.
- Domain Analysis: A close look at the viral message shows that the scammers display myntra.com in the URL text. However, the actual link takes the user to a different domain: the campaign is hosted on a third-party domain rather than Myntra's official website, which raised suspicion. This is a common tactic used to deceive users into falling for a phishing scam. WHOIS information reveals that the domain was registered only recently, on 8 April 2024, just days before the campaign appeared. The cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website.

- Domain Name: MYTNRA.CYOU
- Registry Domain ID: D445770144-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-04-08T03:27:58.0Z
- Creation Date: 2024-04-08T02:58:14.0Z
- Registry Expiry Date: 2025-04-08T23:59:59.0Z
- Registrar: West263 International Limited
- Registrant State/Province: Delhi
- Registrant Country: IN
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
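The WHOIS record above illustrates two red flags that can be checked programmatically: a very recent registration date, and a domain label that is a near-anagram of the impersonated brand ("mytnra" vs "myntra"). The sketch below is illustrative only; the 30-day freshness window and the anagram heuristic are our own assumptions, not established industry rules:

```python
# Illustrative checks on the WHOIS data shown above. The 30-day window
# and the letter-transposition heuristic are assumptions for the sketch.
from datetime import datetime, timedelta, timezone

def is_newly_registered(creation_date, checked_on, max_age_days=30):
    """True if the domain was created within max_age_days of the check."""
    created = datetime.strptime(creation_date, "%Y-%m-%dT%H:%M:%S.%f%z")
    return (checked_on - created) <= timedelta(days=max_age_days)

def is_brand_transposition(domain, brand):
    """Crude typosquatting heuristic: the domain's first label uses
    exactly the brand's letters, rearranged ('mytnra' vs 'myntra')."""
    label = domain.lower().split(".")[0]
    return label != brand.lower() and sorted(label) == sorted(brand.lower())

# Values taken from the WHOIS record above for MYTNRA.CYOU
creation = "2024-04-08T02:58:14.0Z"
checked = datetime(2024, 4, 15, tzinfo=timezone.utc)
print(is_newly_registered(creation, checked))           # recent registration
print(is_brand_transposition("mytnra.cyou", "myntra"))  # lookalike label
```

Neither signal alone proves fraud, but together with the mismatched display URL and Cloudflare masking noted above, they strongly support the phishing assessment.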
CyberPeace Advisory and Best Practices
- Do not open messages from social platforms that appear suspicious or unsolicited; your own discretion is your first line of defence.
- Falling prey to such scams could compromise your entire system, potentially granting unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more.
- Never reveal sensitive data such as login credentials or banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- To confirm the truth of offers and messages, consult official sources or contact companies directly. Verify the authenticity of alluring offers before taking any action.
Conclusion:
The "Myntra - Festival Gift" scam is a form of manipulation in which fraudsters exploit user trust and piggyback on a popular e-commerce brand. It is equally crucial to equip users with knowledge of common fraud tactics such as brand impersonation, fake social proof, and engagement-driven schemes. We must remain alert and stand firm against cyber attacks. Be careful, verify information, and spread awareness to help create a safe online environment for all users.