#FactCheck - Misleading Claim Uses 2013 Army Coffin Photo to Spread False Ceasefire Narrative
Executive Summary
Social media users, particularly Pakistani propaganda accounts, shared an image showing coffins wrapped in the Indian tricolour and claimed that India violated the ceasefire along the Line of Control (LoC). According to the posts, Pakistan retaliated with heavy firing, captured the Indian Army’s Kumar Top post, and several Indian soldiers were killed in the exchange.
One user wrote, “Breaking News: Indian Army once again violated the ceasefire in the Mandal sector, targeting civilians with mortar shelling. Pakistan responded strongly, captured the Indian Army’s Kumar Top post, and several soldiers were reportedly killed. Calm has now been restored after Pakistan’s response.”

Fact Check
Research by CyberPeace found the viral claim to be false. Using reverse image search, we traced the viral photo to the Shutterstock website. The image description states that it was taken on August 6, 2013, and shows Indian Army personnel standing near the coffins of soldiers who were killed by Pakistani infiltrators at a brigade headquarters in Poonch, located about 240 km from Jammu. This confirms that the image is old and unrelated to recent developments along the Line of Control.
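Reverse image search services work by reducing each image to a compact fingerprint and comparing fingerprints rather than raw pixels, which is why an old photo can be matched even after re-uploads and recompression. As a rough illustration of the underlying idea (not the algorithm any particular search engine actually uses), here is a minimal average-hash sketch in pure Python, assuming an image already downscaled to an 8x8 grayscale grid:

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid (64 values, 0-255):
    each bit records whether a pixel is brighter than the mean."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest near-duplicate images."""
    return bin(h1 ^ h2).count("1")

original = [10] * 32 + [200] * 32       # toy "photo": dark top, bright bottom
recompressed = [12] * 32 + [198] * 32   # same scene after mild re-encoding
unrelated = [10, 200] * 32              # a completely different pattern

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # → 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))     # → 32
```

A real pipeline would first decode and downscale the image (for example with a library such as Pillow) and look the hash up in an index of known photos; the zero distance for the re-encoded copy shows why a 2013 photograph resurfaces intact in searches years later.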

Further verification led us to a report published by NBC News on August 8, 2013, which also featured the same visual in connection with the 2013 cross-border attack.

Additionally, posts from the official X (formerly Twitter) handle of the Indian Army 16 Corps (White Knight Corps) stated that based on intelligence inputs and continuous surveillance, suspicious terrorist activity was detected near Nathua Tibba in the Sunderbani sector close to the LoC in the early hours of February 19, 2026. Alert troops responded promptly and successfully foiled the infiltration attempt. The Army also confirmed that operational vigilance remains high across the sector. However, there were no reports of casualties due to Pakistani firing.

Conclusion:
The viral image showing coffins of Indian soldiers is not recent but dates back to 2013. There are no confirmed reports of casualties from Pakistani firing along the Line of Control in the current context. Therefore, the claim circulating on social media is misleading.
Related Blogs
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation at scale. These technologies can create manipulative audio and video content, spread political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It can affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation involves expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing false information.
AI's Role in Misinformation Creation
AI's capabilities to generate content have grown exponentially in recent years. Legitimate uses of AI often take a backseat, and content that already exists on the internet ends up being exploited. One prominent example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to keep track of what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly damaging consequences. Literacy in the traditional sense of the word does not automatically equip one to parse the nuances of social media content, its authenticity, and its impact. Media and internet literacy are under strain, and the rampant rise of AI-generated misinformation is one of the main contributors. The most common examples of misinformation relate to elections, public health, and communal issues. What connects these subjects is that they evoke strong emotions, so such content can go viral very quickly and influence social behaviour, to the point of triggering social unrest, political instability, and even violence. These developments breed public mistrust in authorities and institutions, which is dangerous in any economy, and more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language generation, which is closely linked to the problem of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and it highlights the need to revisit traditional journalistic principles.
We have seen several examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? The safe and ethical use of this technology therefore needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate content anonymously, and in massive volumes, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies catering to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework specific in its application to AI-generated content. It should include stricter penalties for originators and spreaders of fake content, proportionate to the consequences of its dissemination. The framework should establish clear and concise guidelines for social media platforms, ensuring they take proactive measures to detect and remove AI-generated misinformation.
- Investing in AI-driven tools for customised detection and flagging of misinformation in real time. This can help identify deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media-literacy skills, enabling individuals to better discern real content from fake.
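In practice, real-time detection relies on large trained models and human review, but the score-and-flag pattern the list above describes can be sketched in a few lines. The patterns, weights, and threshold below are hypothetical, chosen purely for illustration:

```python
# Toy misinformation "flagger". Real platforms use trained ML models and
# human fact-checkers; this only illustrates the score-and-flag pattern.
# The patterns and weights are hypothetical examples, not a real model.
SUSPICIOUS_PATTERNS = {
    "breaking": 1.0,       # urgency framing
    "share before": 2.0,   # pressure to spread quickly
    "deleted": 1.5,        # scarcity framing
    "100% proof": 2.0,     # absolute claims
}
THRESHOLD = 2.5  # scores at or above this go to human review

def flag_for_review(text):
    """Return True if the text accumulates enough suspicious signals
    to be queued for human fact-checking."""
    lowered = text.lower()
    score = sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if pat in lowered)
    return score >= THRESHOLD

print(flag_for_review("BREAKING: share before it gets deleted!"))    # → True
print(flag_for_review("The committee published its annual report."))  # → False
```

The design point is that automated scoring only triages; the final judgment stays with human reviewers, which is what keeps over-regulation of legitimate speech in check.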
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks scale with the rapid pace of the nation's technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge with comprehensive strategies that combine regulation and technological innovation with public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, and digital defence frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
The Telecommunications Act, 2023 was passed by Parliament in December 2023, received the President's assent, and was published in the official Gazette on December 24, 2023. The Act is divided into 11 chapters, 62 sections, and 3 schedules. Sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61 and 62 took effect on June 26, 2024.
On July 04, 2024, the Centre issued a Gazette Notification under which sections 6-8, 48 and 59(b) came into effect from July 05, 2024. The Act aims to amend and consolidate the laws relating to telecommunication services, telecommunication networks, and spectrum assignment, and it repeals older colonial-era legislation such as the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The new law was enacted in response to rapid technological advancements in the telecom sector.
On Thursday, 18 July 2024, the telecom minister, while launching the theme of the India Mobile Congress (IMC), announced that all rules and provisions of the new Telecom Act would be notified within the next 180 days, making the Act operational at full capacity.
Important definitions under Telecommunications Act, 2023
- Authorisation: Section 2(d) defines “authorisation” as a permission, by whatever name called, granted under the Act for (i) providing telecommunication services; (ii) establishing, operating, maintaining or expanding telecommunication networks; or (iii) possessing radio equipment.
- Telecommunication: Section 2(p) defines “telecommunication” as the transmission, emission or reception of any messages, by wire, radio, optical or other electro-magnetic systems, whether or not such messages have been subjected to rearrangement, computation or other processes by any means in the course of their transmission, emission or reception.
- Telecommunication Network: Section 2(s) defines “telecommunication network” as a system or series of systems of telecommunication equipment or infrastructure, including terrestrial or satellite networks or submarine networks, or a combination of such networks, used or intended to be used for providing telecommunication services, but excluding such telecommunication equipment as notified by the Central Government.
- Telecommunication Service: Section 2(t) defines “telecommunication service” as any service for telecommunication.
Measures for Cyber Security for the Telecommunication Network/Services
Section 22 of the Telecommunications Act, 2023 deals with the protection of telecommunication networks and telecommunication services. It empowers the Centre to prescribe rules to ensure the cyber security of telecommunication networks and services. Such measures may include the collection, analysis and dissemination of ‘traffic data’, meaning any data generated, transmitted, received or stored in telecommunication networks, such as the type, duration, or time of a telecommunication.
Section 22 further empowers the central government to declare any telecommunication network, or part thereof, as Critical Telecommunication Infrastructure. It may further provide for standards, security practices, upgradation requirements and procedures to be implemented for such Critical Telecommunication Infrastructure.
CyberPeace Policy Wing Outlook:
The Telecommunications Act, 2023 marks a significant change in the telecom sector by providing a robust regulatory framework, encouraging research and development, promoting infrastructure development, and introducing measures for consumer protection. The Central Government is empowered to authorise persons for (a) providing telecommunication services, (b) establishing, operating, maintaining, or expanding telecommunication networks, or (c) possessing radio equipment. Section 48 of the Act provides that no person shall possess or use any equipment that blocks telecommunication unless permitted by the Central Government.
The Central Government will protect users through measures such as requiring users' prior consent for receiving particular messages, preparing and maintaining a ‘Do Not Disturb’ register to ensure users do not receive specified messages or classes of messages without prior consent, and a mechanism enabling users to report any malware or specified messages received. Authorised entities providing telecommunication services will also be required to create an online platform for users to register grievances pertaining to telecommunication services.
In certain limited circumstances, such as national security, disaster management and public safety, the Act empowers the Government to take temporary possession of telecom services or networks from an authorised entity, and to direct the interception or disclosure of messages, with safeguards to be specified through rulemaking. The government thus gains additional controls in emergencies to ensure security and public order; however, this has to be balanced with appropriate measures protecting individual privacy rights and avoiding arbitrary action.
Taking cyber security in the telecommunication sector into account, the Act empowers the government to introduce standards for cyber security for telecommunication services and networks, and for encryption and data processing in telecommunication.
The Act also promotes research and development and pilot projects under the Digital Bharat Nidhi, and adopts a ‘digital by design’ approach by introducing online dispute resolution and related frameworks. Overall, the government's approach is noteworthy: it recognises the need to update colonial-era legislation in light of technological advancements and to keep pace with the digital revolution in the telecommunication sector.
References:
- The Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:88cb04ff-2cce-4663-ad41-88aafc81a416
- https://pib.gov.in/PressReleasePage.aspx?PRID=2031057
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2027941
- https://economictimes.indiatimes.com/industry/telecom/telecom-news/new-telecom-act-will-be-notified-in-180-days-bsnl-4g-rollout-is-monitored-on-a-daily-basis-scindia/articleshow/111851845.cms?from=mdr
- https://www.azbpartners.com/wp-content/uploads/2024/06/Update-Staggered-Enforcement-of-Telecommunications-Act-2023.pdf
- https://telecom.economictimes.indiatimes.com/blog/analysing-the-impact-of-telecommunications-act-2023-on-digital-india-mission/111828226

Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. Our research found the footage to be AI-generated, created using tools like Google's Veo, and not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. The incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts, and highlights the importance of media literacy and fact-checking.
Claim:
A verified X (formerly Twitter) user posted a video, titled "Iran, stop the war, we are sorry", featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, VEO 3. The video exhibits several noticeable inconsistencies such as robotic, unnatural speech, a lack of human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame, an Israeli flag suddenly appears in their hand, indicating possible AI-generated glitches.

We further analyzed the video using the AI detection tool HIVE Moderation, which revealed a 99% probability that the video was generated using artificial intelligence. To validate this finding, we separately examined a keyframe from the video, which likewise showed a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.
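HIVE Moderation's internals are proprietary, but the workflow described above, scoring individual frames and combining them into a verdict for the whole clip, can be illustrated with a simple aggregation. The scores and the top-k averaging rule below are assumptions for illustration only, not HIVE's actual method:

```python
def aggregate_video_score(frame_scores, top_k=5):
    """Combine per-frame AI-likelihood scores (0.0-1.0) into one video-level
    score by averaging the top_k most suspicious frames, so that a few
    strongly synthetic frames are not diluted by many ambiguous ones.
    (Illustrative sketch; real detectors use their own aggregation.)"""
    if not frame_scores:
        raise ValueError("need at least one frame score")
    top = sorted(frame_scores, reverse=True)[:top_k]
    return sum(top) / len(top)

# Hypothetical per-frame detector outputs for a short clip:
scores = [0.40, 0.55, 0.99, 0.98, 0.97, 0.99, 0.50]
print(round(aggregate_video_score(scores), 2))  # → 0.9
```

Averaging only the most suspicious frames reflects why checking a single keyframe, as we did above, can serve as a useful cross-check on the whole-video verdict.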

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: Video shows Israelis saying "Stop the War, Iran, We Are Sorry".
- Claimed On: Social Media
- Fact Check: AI-Generated, Misleading