#FactCheck - AI-Generated Image Falsely Linked to Doda Army Vehicle Accident
Executive Summary
On January 22, 2026, an Indian Army vehicle met with an accident in Jammu and Kashmir’s Doda district, killing 10 soldiers and injuring several others. In connection with this tragic incident, a photograph has gone viral on social media. The viral image shows an Army vehicle that appears to have fallen into a deep gorge, with several soldiers visible around the site. Users sharing the image claim that it depicts the actual scene of the Doda accident.
However, research by CyberPeace has found that the viral image is not genuine. The photograph was generated using Artificial Intelligence (AI) and does not depict the real accident. Hence, the viral post is misleading.
Claim
An Instagram user shared the viral image on January 22, 2026, writing: “Deeply saddened by the tragic accident in Doda, Jammu & Kashmir today, in which 10 brave soldiers lost their lives. My heartfelt tribute to the martyrs who laid down their lives in the line of duty. Sincere condolences to the bereaved families, and prayers for the speedy recovery of the injured soldiers. The nation will forever remember your sacrifice.”
The link and screenshot of the post can be seen below.
- https://www.instagram.com/p/DT0UBIRk_3k/
- https://archive.ph/submit/?url=https%3A%2F%2Fwww.instagram.com%2Fp%2FDT0UBIRk_3k%2F+

Fact Check
To verify the claim, we first closely examined the viral image. Several visual inconsistencies were observed. The structure of the soldier visible inside the damaged vehicle appears distorted, and the hands and limbs of people involved in the rescue operation look unnatural. These anomalies raised suspicion that the image might be AI-generated. Based on this, we ran the image through the AI detection tool Hive Moderation, which indicated that the image is over 99.9% likely to be AI-generated.

Another AI image detection tool, Sightengine, also flagged the image as 99% AI-generated.
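When two independent detectors both return high scores, as Hive Moderation and Sightengine did here, a conservative decision rule can be used to flag an image. The sketch below is illustrative only; the score values are hypothetical stand-ins for what such tools report, not output from their actual APIs:

```python
def classify(scores, threshold=0.9):
    """Flag an image as likely AI-generated only when every
    detector agrees above the threshold (a conservative AND rule)."""
    return all(s >= threshold for s in scores)

# Hypothetical scores mirroring the two tools' reported confidences
scores = {"hive_moderation": 0.999, "sightengine": 0.99}
likely_ai = classify(scores.values())
print(likely_ai)  # True: both detectors exceed the threshold
```

Requiring agreement from multiple detectors reduces the chance of flagging a genuine photograph on the basis of a single tool's false positive.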

During further research, we found a report published by Navbharat Times on January 22, 2026, confirming that an Indian Army vehicle had indeed fallen into a deep gorge in Doda district. According to officials, 10 soldiers were killed and 7 others injured, and rescue operations were launched immediately.
However, it is important to note that the image circulating on social media is not an actual photograph from the incident.

Conclusion
CyberPeace research confirms that the viral image linked to the Doda Army vehicle accident has been created using Artificial Intelligence. It is not a real photograph from the incident, and therefore, the viral post is misleading.
Related Blogs

WhatsApp messages masquerading as an offer from Maruti Suzuki, with links luring unsuspecting users with the promise of Maruti Suzuki 40th Anniversary Celebration presents, have been making the rounds on the app. If you receive such a message, stay away from it, as it is likely a scam.
The Research Wing of CyberPeace Foundation, along with Autobot Infosec Private Limited, has conducted a study of a WhatsApp message containing a link that pretends to be a free gift offer from Maruti Suzuki and asks users to participate in a survey for a chance to win a Maruti Baleno Sigma MT car.
Warning Signs
- The campaign pretends to be an offer from Maruti Suzuki but is hosted on a third-party domain instead of the official Maruti Suzuki website, which makes it suspicious.
- The domain names associated with the campaign were registered very recently.
- Multiple redirections have been noticed between the links.
- No reputable site would ask its users to share the campaign on WhatsApp.
- The prize is made highly attractive to lure unsuspecting users.
- The pages contain grammatical mistakes.
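One of the red flags noted above, the chain of redirections between links, can be illustrated with a small tracing helper. The URLs below are hypothetical stand-ins for the campaign's actual domains:

```python
def trace_redirects(url, redirect_map, max_hops=10):
    """Follow a chain of recorded redirects and return every hop.
    max_hops guards against redirect loops."""
    chain = [url]
    while url in redirect_map and len(chain) <= max_hops:
        url = redirect_map[url]
        chain.append(url)
    return chain

# Hypothetical hops resembling those observed between the campaign's links
hops = {
    "http://offer.example": "http://redir.example",
    "http://redir.example": "http://prize.example",
}
print(trace_redirects("http://offer.example", hops))
# ['http://offer.example', 'http://redir.example', 'http://prize.example']
```

In a live investigation, the same chain would be recovered from HTTP response headers in a sandboxed environment rather than from a precomputed map.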
A congratulations message appears on the landing page with an attractive photo of Maruti Suzuki cars, asking users to participate in a quick survey to get a “Maruti Suzuki BALENO Sigma MT”. The bottom of the page also features what appears to be a comment section, with public comments vouching for the truthfulness of the offer.
The survey starts with some basic questions like “Do you know Maruti Suzuki?”, “How old are you?”, “How do you think of Maruti Suzuki?”, “Are you male or female?”, etc. Once the user answers the questions, a “congratulatory message” is displayed.
On clicking the OK button, users are given three attempts to win the prize. After completing all the attempts, a message pops up that the user has won a “Maruti Suzuki BALENO Sigma MT”. It then prompts the user to share the message on WhatsApp.
Strangely enough, the user has to keep clicking the WhatsApp button until a progress bar completes. After clicking the green ‘WhatsApp’ button multiple times, a section appears instructing the user to complete registration in order to get the prize.
After clicking the green ‘Complete registration’ button, the user is redirected to multiple advertisement web pages, varying each time the button is clicked.
During the analysis, the research team found a JavaScript file called hm.js being executed in the background from the host hm[.]baidu[.]com, a subdomain of Baidu used for Baidu Analytics (also known as Baidu Tongji). Notably, Baidu is a Chinese multinational technology company specialising in Internet-related services, products, and artificial intelligence, headquartered in Beijing’s Haidian district, China. To read the full report, please click here: https://www.cyberpeace.org/CyberPeace/Repository/20210828Research-report-on-Maruti-Suzuki-40th-Anniversary-Celebration-free-gift-scam.pdf
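The kind of background-script discovery described above can be approximated by scanning a page's markup for externally loaded scripts. A minimal sketch using Python's standard library, with hypothetical markup standing in for the campaign's landing page:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptHostParser(HTMLParser):
    """Collect the hostnames of all externally loaded <script> tags."""
    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host:  # inline scripts have no src and are skipped
                self.hosts.append(host)

# Hypothetical markup resembling the campaign page's analytics loader
html = '<html><body><script src="https://hm.baidu.com/hm.js?abc"></script></body></html>'
parser = ScriptHostParser()
parser.feed(html)
print(parser.hosts)  # ['hm.baidu.com']
```

Any third-party host found this way can then be checked against the page's claimed origin; a brand's official page loading analytics from an unrelated foreign domain is exactly the kind of anomaly the research team flagged.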
Conclusive Summary
1. The whole research activity was performed in a secured sandbox environment where the WhatsApp application was not installed. If a user opens the link from a device such as a smartphone with WhatsApp installed, the sharing features on the site will open the WhatsApp application on the device to share the link.
2. The campaign collects browser and system information from the users.
3. Most of the domain names associated with the campaign have the registrant country as China.
4. Cybercriminals used Cloudflare technologies to mask the real IP addresses of the front-end domain names used in this Maruti Suzuki 40th Anniversary Celebration free gift campaign. During the investigation, however, the research team identified a domain name that was requested in the background and has been traced to China.
CyberPeace Advisory
1. CyberPeace Foundation and Autobot Infosec recommend that people avoid opening such messages sent via social platforms.
2. If a user falls into this trap, it could lead to full device compromise, including access to the microphone, camera, text messages, contacts, pictures, videos, and banking applications, as well as financial losses.
3. Do not share confidential details such as login credentials or banking information with such scams.
4. Do not share or forward fake messages containing links without proper verification.
5. International cyber cooperation between countries is needed to bust the cybercriminal gangs running fraud campaigns that affect individuals and organisations, and to make cyberspace resilient and peaceful.

Introduction
In today’s hyper-connected world, information spreads faster than ever before. But while much attention is focused on public platforms like Facebook and Twitter, a different challenge lurks in the shadows: misinformation circulating on encrypted and closed-network platforms such as WhatsApp and Telegram. Unlike open platforms where harmful content can be flagged in public, private groups operate behind a digital curtain. Here, falsehoods often spread unchecked, gaining legitimacy because they are shared by trusted contacts. This makes encrypted platforms a double-edged sword: essential for privacy and free expression, yet uniquely vulnerable to misuse.
As Prime Minister Narendra Modi rightly reminded,
“Think 10 times before forwarding anything,” warning that even a “single fake news has the capability to snowball into a matter of national concern.”
The Moderation Challenge with End-to-End Encryption
Encrypted messaging platforms were built to protect personal communication. Yet, the same end-to-end encryption that shields users’ privacy also creates a blind spot for moderation. Authorities, researchers, and even the platforms themselves cannot view content circulating in private groups, making fact-checking nearly impossible.
Trust within closed groups makes the problem worse. When a message comes from family, friends, or community leaders, people tend to believe it without questioning and quickly pass it along. Features like large group chats, broadcast lists, and “forward to many” options further speed up its spread. Unlike open networks, there is no public scrutiny, no visible counter-narrative, and no opportunity for timely correction.
During the COVID-19 pandemic, false claims about vaccines spread widely through WhatsApp groups, undermining public health campaigns. Even more alarming, WhatsApp rumors about child kidnappers and cow meat in India triggered mob lynchings, leading to the tragic loss of life.
Encrypted platforms, therefore, represent a unique challenge: they are designed to protect privacy, but, unintentionally, they also protect the spread of dangerous misinformation.
Approaches to Curbing Misinformation on End-to-End Platforms
- Regulatory: Governments worldwide are exploring ways to access encrypted data on messaging platforms, creating tensions between the right to user privacy and crime prevention. Approaches like traceability requirements on WhatsApp, data-sharing mandates for platforms in serious cases, and stronger obligations to act against harmful viral content are also being considered.
- Technological Interventions: Platforms like WhatsApp have introduced features such as “forwarded many times” labels and limits on mass forwarding. These tools can be expanded further by introducing AI-driven link-checking and warnings for suspicious content.
- Community-Based Interventions: Ultimately, no regulation or technology can succeed without public awareness. People need to be inoculated against misinformation through pre-bunking efforts and digital literacy campaigns, and taught to use fact-checking websites and tools.
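The forwarding limits mentioned above can be sketched as a simple policy function on a message's forward depth. The thresholds here are hypothetical illustrations, not WhatsApp's actual values:

```python
FORWARD_LABEL_THRESHOLD = 5  # hypothetical: label after this many hops
FORWARD_HARD_LIMIT = 1       # hypothetical: restrict highly forwarded messages

def forward_policy(forward_count):
    """Return the UI treatment for a message with the given forward depth:
    heavily forwarded messages get a warning label and a sharing cap."""
    if forward_count >= FORWARD_LABEL_THRESHOLD:
        return {"label": "Forwarded many times", "max_recipients": FORWARD_HARD_LIMIT}
    if forward_count >= 1:
        return {"label": "Forwarded", "max_recipients": 5}
    return {"label": None, "max_recipients": None}

print(forward_policy(6))
# {'label': 'Forwarded many times', 'max_recipients': 1}
```

Because the check operates on metadata (how many times a message was forwarded) rather than on message content, it works without breaking end-to-end encryption, which is why this class of intervention is viable on encrypted platforms.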
Best Practices for Netizens
Experts recommend simple yet powerful habits that every user can adopt to protect themselves and others. By adopting these, ordinary users can become the first line of defence against misinformation in their own communities:
- Cross-Check Before Forwarding: Verify claims from trusted platforms & official sources.
- Beware of Sensational Content: Headlines that sound too shocking or dramatic probably need checking. Consult multiple sources for a piece of news; if only one platform or channel is carrying a sensational story, it is likely clickbait or outright false.
- Stick to Trusted News Sources: Verify news through national newspapers and expert commentary. Remember, not everything on the internet/television is true.
- Look Out for Manipulated Media: Now, with AI-generated deepfakes, it becomes more difficult to tell the difference between original and manipulated media. Check for edited images, cropped videos, or voice messages without source information. Always cross-verify any media received.
- Report Harmful Content: Report misinformation to the platform it is being circulated on and PIB’s Fact Check Unit.
Conclusion
In closed, unmonitored groups, platforms like WhatsApp and Telegram often become safe havens where people trust and forward messages from friends and family without question. Once misinformation takes root, it becomes extremely difficult to challenge or correct, and over time, such actions can snowball into serious social, economic and national concerns.
Preventing this is a matter of shared responsibility. Governments can frame balanced regulations, but individuals must also take initiative: pause, think, and verify before sharing. Ultimately, the right to privacy must be upheld, but with reasonable safeguards to ensure it is not misused at the cost of societal trust and safety.
References
- India WhatsApp ‘child kidnap’ rumours claim two more victims (BBC)
- The people trying to fight fake news in India (BBC)
- Press Information Bureau – PIB Fact Check
- Brookings Institution – Encryption and Misinformation Report (2021)
- Curtis, T. L., Touzel, M. P., Garneau, W., Gruaz, M., Pinder, M., Wang, L. W., Krishna, S., Cohen, L., Godbout, J.-F., Rabbany, R., & Pelrine, K. (2024). Veracity: An Open-Source AI Fact-Checking System. arXiv.
- NDTV – PM Modi cautions against fake news (2022)
- Times of India – Govt may insist on WhatsApp traceability (2019)
- Medianama – Telegram refused to share ISIS channel data (2019)
Introduction
Smartphones have revolutionised human connectivity. In 2023, an estimated 96% of the global digital population accessed the internet via mobile phones, and India alone had 1.05 billion users. Information consumption has grown exponentially due to the enhanced accessibility that these devices provide. They allow access to information no matter where one is, and have completely transformed how we engage with the world around us, be it skimming through work emails while commuting, streaming video during breaks, reading an ebook at our convenience, or catching up on news at any time or place. Mobile phones grant us instant access to the web and are always within reach.
But this instant connection has its downsides too, and one of the most worrying of these is the rampant rise of misinformation. These tiny screens and our constant, on-the-go dependence on them can be directly linked to the spread of “fake news,” as people consume more and more content in rapid bursts, without taking the time to really process the same or think deeply about its authenticity. There is an underlying cultural shift in how we approach information and learning currently: the onslaught of vast amounts of “bite-sized information” discourages people from researching what they’re being told or shown. The focus has shifted from learning deeply to consuming more and sharing faster. And this change in audience behaviour is making us vulnerable to misinformation, disinformation and unchecked foreign influence.
The Growth of Mobile Internet Access
More than 5 billion people are connected to the internet, and web traffic is increasing rapidly. Developed countries in North America and Europe enjoy near-universal mobile internet penetration, while developing countries in Africa, Asia, and Latin America are experiencing rapid growth in adoption. The introduction of affordable smartphones and low-cost mobile data plans has expanded access to internet connectivity, and 4G and 5G infrastructure development has further bridged connectivity gaps. This widespread access to the mobile internet has democratised information, allowing millions of users to participate in the digital economy: access to educational resources alongside participation in global conversations is one example of this democratisation. It reduces the digital divide between diverse groups and empowers communities with unprecedented access to knowledge and opportunities.
The Nature of Misinformation in the Mobile Era
Misinformation spread has become more prominent in recent times and one of the contributing factors is the rise of mobile internet. This instantaneous connection has made social media platforms like Facebook, WhatsApp, and X (formerly Twitter) available on a single compact and portable device. These social media platforms enable users to share content instantly and to a wide user base, many times without verifying its accuracy. The virality of social media sharing, where posts can reach thousands of users in seconds, accelerates the spread of false information. This ease of sharing, combined with algorithms that prioritise engagement, creates a fertile ground for misinformation to flourish, misleading vast numbers of people before corrections or factual information can be disseminated.
Some of the factors that are amplifying misinformation sharing through mobile internet are algorithmic amplification which prioritises engagement, the ease of sharing content due to instant access and user-generated content, the limited media literacy of users and the echo chambers which reinforce existing biases and spread false information.
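The algorithmic amplification described above can be made concrete with a toy ranking function: a feed that orders posts purely by raw engagement will surface sensational content over sober reporting. The posts and numbers below are invented for illustration:

```python
def rank_feed(posts):
    """Order posts by raw engagement (likes + shares), the simple
    heuristic that lets sensational content rise; illustrative only."""
    return sorted(posts, key=lambda p: p["likes"] + p["shares"], reverse=True)

# Hypothetical feed: the rumour draws far more engagement than the report
posts = [
    {"title": "Verified local report", "likes": 120, "shares": 10},
    {"title": "Sensational rumour",    "likes": 900, "shares": 400},
]
print(rank_feed(posts)[0]["title"])  # Sensational rumour
```

Real recommendation systems are vastly more complex, but the core incentive is the same: optimising for engagement alone rewards whatever provokes the strongest reaction, regardless of accuracy.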
Gaps and Challenges Due to the Increased Accessibility of Mobile Internet
Despite growing concerns about misinformation spread via the mobile internet, policy responses remain inadequate, particularly in developing countries. These gaps include a lack of algorithm regulation, as social media platforms prioritise engaging content, often fuelling misinformation. Inadequate international cooperation further complicates enforcement, as national regulations struggle to address the cross-border nature of misinformation.
Additionally, balancing content moderation with free speech remains challenging, with efforts to curb misinformation sometimes leading to concerns over censorship.
Finally, a deficit in media literacy leaves many vulnerable to false information. Governments and international organisations must prioritise public education to equip users with the required skills to evaluate online content, especially in low-literacy regions.
CyberPeace Recommendations
Addressing misinformation via mobile internet requires a collaborative, multi-stakeholder approach.
- Governments should mandate algorithm transparency, ensuring social media platforms disclose how content is prioritised and give users more control.
- Collaborative fact-checking initiatives between governments, platforms, and civil society could help flag or correct false information before it spreads, especially during crises like elections or public health emergencies.
- International organisations should lead efforts to create standardised global regulations to hold platforms accountable across borders.
- Additionally, large-scale digital literacy campaigns are crucial, teaching the public how to assess online content and avoid misinformation traps.
Conclusion
Mobile internet access has transformed information consumption and bridged the digital divide. At the same time, it has also accelerated the spread of misinformation. The global reach and instant nature of mobile platforms, combined with algorithmic amplification, have created significant challenges in controlling the flow of false information. Addressing this issue requires a collective effort from governments, tech companies, and civil society to implement transparent algorithms, promote fact-checking, and establish international regulatory standards. Digital literacy should be enhanced to empower users to assess online content and counter any risks that it poses.
References
- https://www.statista.com/statistics/1289755/internet-access-by-device-worldwide/
- https://www.forbes.com/sites/kalevleetaru/2019/05/01/are-smartphones-making-fake-news-and-disinformation-worse/
- https://www.pewresearch.org/short-reads/2019/03/07/7-key-findings-about-mobile-phone-and-social-media-use-in-emerging-economies/ft_19-02-28_globalmobilekeytakeaways_misinformation/
- https://www.psu.edu/news/research/story/slow-scroll-users-less-vigilant-about-misinformation-mobile-phones