#FactCheck - Viral Photos Falsely Linked to Iranian President Ebrahim Raisi's Helicopter Crash
Executive Summary:
On 20 May 2024, it was confirmed that Iranian President Ebrahim Raisi and several others had died in a helicopter crash in northwestern Iran. Images circulating on social media that claim to show the crash site are false. The CyberPeace Research Team’s investigation revealed that these images show the wreckage of a training plane crash in Iran's Mazandaran province in 2019 or 2020. Reverse image searches, together with confirmation from the Tehran-based Rokna Press and Ten News, verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we reverse-searched each of the images. Most traced back to a 2020 air crash incident, while the image of the blue plane traced to a separate, earlier report. We found a website that had uploaded the same plane crash images on April 22, 2020.

According to that website, a police training plane crashed near Swan Motel in the forests of Mazandaran. We also found the images on another Iranian news outlet, ‘Ten News’.

The photos uploaded to this second website were posted in May 2019. The news reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the recent viral photos do not show Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
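Reverse image search tools of the kind used above rely on techniques such as perceptual hashing, which match a photo against earlier copies even after re-encoding or resizing. The sketch below illustrates the idea with a minimal average-hash over synthetic 8x8 pixel grids; the grids, names, and threshold are illustrative placeholders, not the actual viral photos or the specific tools used in this investigation.

```python
def average_hash(pixels):
    # pixels: an 8x8 grayscale grid (0-255). Real tools first downscale the
    # photo to this size; here we start directly from tiny synthetic grids.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    # Number of differing hash bits; a small distance means near-duplicate.
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic stand-ins: a "viral" grid, the same grid with slight re-encode
# noise (the "archive" copy), and an unrelated pattern.
viral     = [[40 * ((r + c) % 2) + 10 * r for c in range(8)] for r in range(8)]
archive   = [[40 * ((r + c) % 2) + 10 * r + 3 for c in range(8)] for r in range(8)]
unrelated = [[(r * c) % 256 for c in range(8)] for r in range(8)]

assert hamming(average_hash(viral), average_hash(archive)) < 10      # match
assert hamming(average_hash(viral), average_hash(unrelated)) > \
       hamming(average_hash(viral), average_hash(archive))           # no match
```

Because near-duplicate images hash to within a small Hamming distance, an archived 2019/2020 photo can still be matched against a re-uploaded viral copy.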
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province, dated to 2019 or 2020 in the original reports. This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

About Global Commission on Internet Governance
The Global Commission on Internet Governance was established in January 2014 with the goal of formulating and advancing a strategic vision for the future of Internet governance. Over the two-year initiative, it carries out and supports independent research on Internet-related issues of international public policy. The initiative will culminate in an official commission report containing concrete policy recommendations for the future of Internet governance.
The Global Commission on Internet Governance has two goals. First, it will encourage a broad and inclusive public discussion on how Internet governance should develop globally. Second, through its comprehensive policy-oriented report and the subsequent promotion of that report, it will present its findings to key stakeholders at major Internet governance events.
The Internet: exploring the world wide web and the deep web
The Internet can be thought of as a vast networking infrastructure, or a network of networks. By linking millions of computers worldwide, it creates a network that allows any two computers, provided both are online, to communicate with one another.
The Hypertext Transfer Protocol (HTTP) is the language of the Web, used to transfer its data, but it is only one of many protocols spoken over the Internet. Email, which depends on the Simple Mail Transfer Protocol, file transfers over the File Transfer Protocol, Usenet newsgroups, and instant messaging all run on the Internet but are not part of the Web. Thus, even though it is a sizable chunk, the Web is only one part of the Internet [1]. The deep Web, in turn, is the portion of the Web that is not visible to the naked eye: content from the World Wide Web that is not available on the surface Web and that standard search engines cannot reach. This enormous subset of the Internet is estimated to be more than 500 times larger than the visible Web [1-2].
The Global Commission on Internet Governance will concentrate on four principal themes:
• Improving the legitimacy of governance, including regulatory standards and methods;
• Promoting economic innovation and expansion, including the development of infrastructure, competition laws, and vital Internet resources;
• Safeguarding online human rights, including establishing the idea of technological neutrality for rights to privacy, human rights, and freedom of expression;
• Preventing systemic risk, including setting standards for state behaviour, cooperating with law enforcement to combat cybercrime, preventing its spread, fostering confidence, and addressing disarmament-related issues.
Dark Web
The part of the deep Web that has been purposefully concealed and is unreachable using conventional Web browsers is known as the "dark Web." Dark Web sites are a platform for Internet users who value their anonymity, since they shield users from prying eyes and typically use encryption to thwart monitoring. A well-known source of content on the dark Web is the Tor network. Accessing the anonymous Tor network requires only a special Web browser known as the Tor browser (Tor 2014). The Onion Routing (Tor) project, a technique for anonymous online communication, was first introduced by the US Naval Research Laboratory in 2002. Much of the functionality offered by Tor is also available on another network, I2P. I2P, however, was intended to function as a network within the Internet, with traffic contained inside its boundaries. Tor offers better anonymous access to the open Internet, while I2P provides a more dependable and stable "network within the network" [3].
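The "onion" in onion routing refers to layered encryption: the sender wraps a message in one layer per relay, and each relay can peel only its own layer, so no single hop sees both the sender and the plaintext destination. The toy sketch below shows only the layering idea, using a SHA-256-based XOR keystream as a stand-in cipher; real Tor uses AES with per-hop keys negotiated through asymmetric handshakes, and all key names here are hypothetical.

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode as a keystream (illustration
    # only -- real Tor uses AES with keys agreed via asymmetric handshakes).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap(message: bytes, relay_keys: list) -> bytes:
    # The sender encrypts the innermost layer first, so the entry
    # relay's layer ends up outermost.
    for key in reversed(relay_keys):
        message = xor_stream(key, message)
    return message

def peel(onion: bytes, key: bytes) -> bytes:
    # Each relay removes exactly one layer and forwards what remains;
    # only the last hop recovers the plaintext.
    return xor_stream(key, onion)

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical hop keys
onion = wrap(b"request for hidden service", relay_keys)
for key in relay_keys:        # the packet traverses the relays in order
    onion = peel(onion, key)
assert onion == b"request for hidden service"
```

Each intermediate relay handles only ciphertext, which is what makes traffic on such networks so hard to attribute end to end.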
Cybersecurity in the dark web
Cybercrime is not any different from crime in the real world — it is just executed in a new medium: “‘Virtual criminality’ is basically the same as the terrestrial crime with which we are familiar. To be sure, some of the manifestations are new. But a great deal of crime committed with or against computers differs only in terms of the medium. While the technology of implementation, and particularly its efficiency, may be without precedent, the crime is fundamentally familiar. It is less a question of something completely different than a recognizable crime committed in a completely different way [4].”
Dark web monitoring
The dark Web in general, and the Tor network in particular, offer a secure platform for cybercriminals to support a vast range of illegal activities — from anonymous marketplaces to secure means of communication to an untraceable, difficult-to-shut-down infrastructure for deploying malware and botnets.
As such, it has become increasingly important for security agencies to track and monitor activities on the dark Web, focusing today on Tor networks but possibly extending to other technologies in the near future. Due to its intricate design, monitoring the dark Web will continue to pose significant challenges. Efforts to address it should focus on the areas discussed below [5].
Hidden service directory of dark web
Both Tor and I2P rely on a domain database built on a distributed system called a "distributed hash table," or DHT. In a DHT, nodes cooperate to store and manage a portion of the database, which takes the shape of a key-value store. Owing to the distributed character of the domain-resolution process for hidden services, nodes inside the DHT can be positioned to track requests for a certain domain [6].
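The key-value mechanics of such a DHT can be sketched in a few lines: each key is hashed onto a ring, and the first node whose identifier follows the key's hash stores it (Chord-style placement). Everything below — the node names, the `ToyDHT` class, and the descriptor values — is a hypothetical illustration, not Tor's or I2P's actual implementation.

```python
import hashlib
from bisect import bisect_right

def key_id(name: str) -> int:
    # Map a name (node identifier or lookup key) onto the hash ring.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class ToyDHT:
    """Chord-style placement: each node stores the keys whose hashes fall
    in the arc of the ring ending at the node's own identifier."""

    def __init__(self, node_names):
        self.ring = sorted((key_id(n), n) for n in node_names)
        self.store = {n: {} for n in node_names}

    def responsible_node(self, key: str) -> str:
        # First node clockwise from the key's position (wrapping around).
        ids = [node_id for node_id, _ in self.ring]
        idx = bisect_right(ids, key_id(key)) % len(self.ring)
        return self.ring[idx][1]

    def put(self, key: str, value: str) -> None:
        self.store[self.responsible_node(key)][key] = value

    def get(self, key: str):
        return self.store[self.responsible_node(key)].get(key)

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put("examplehost.onion", "descriptor-blob")  # hypothetical service entry
assert dht.get("examplehost.onion") == "descriptor-blob"
```

Because lookups for a given key always route to the same responsible node, an observer controlling well-placed nodes can log which domains are being requested — the monitoring opportunity described above.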
Conclusion
The deep Web, and especially dark Web networks like Tor (2004), offer bad actors a practical means of transacting in goods anonymously and with relative impunity.
The absence of discernible activity on non-traditional dark web networks is not evidence of their nonexistence; by the very design of the dark web, such activities are simply harder to identify and monitor. Critical mass is one of the market's driving forces: operators on the dark Web are unlikely to invest in a great degree of stealth until the repercussions of being caught become severe enough. Certain sites may go down, trade for only a short window, and then reappear, making them harder to investigate.
References
- Ciancaglini, Vincenzo, Marco Balduzzi, Max Goncharov and Robert McArdle. 2013. “Deepweb and Cybercrime: It’s Not All About TOR.” Trend Micro Research Paper. October.
- Coughlin, Con. 2014. “How Social Media Is Helping Islamic State to Spread Its Poison.” The Telegraph, November 5.
- Dahl, Julia. 2014. “Identity Theft Ensnares Millions while the Law Plays Catch Up.” CBS News, July 14.
- Dean, Matt. 2014. “Digital Currencies Fueling Crime on the Dark Side of the Internet.” Fox Business, December 18.
- Falconer, Joel. 2012. “A Journey into the Dark Corners of the Deep Web.” The Next Web, October 8.
- Gehl, Robert W. 2014. “Power/Freedom on the Dark Web: A Digital Ethnography of the Dark Web Social Network.” New Media & Society, October 15. http://nms.sagepub.com/content/early/2014/10/16/1461444814554900.full#ref-38.

Introduction
Growing online interaction and the popularity of social media platforms have created a breeding ground for the generation and spread of misinformation. Misinformation propagates more easily and faster on online social media platforms than through traditional news media such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse, and indefinitely store massive volumes of data. Constant surveillance of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a great deal of misinformation spread on large platforms such as X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising either.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top two spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms give users instant connectivity, allowing them to share information quickly with other users without the permission of a gatekeeper such as an editor, as in traditional media channels.
Consider the elections held in 2024 in more than 100 countries, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip: the volume of information generated, both true and false, has been immense, and identifying accurate information amid real-time misinformation is challenging. Traditional content moderation techniques alone may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system, backed by AI and combined with human oversight while respecting the privacy of user data, could prove an effective mechanism for countering misinformation on larger platforms. Concerns regarding data privacy must be prioritized before such technologies are deployed on platforms with large user bases.
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only necessary data and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also guards users against the stifling of dissent and against profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities, ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
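The transparency and appeal requirements above can be made concrete even in a toy moderation pipeline: every flagging decision records which rule fired and when, so an oversight body can audit the log and a user can contest a specific decision. The rule names, trigger phrases, and API below are hypothetical illustrations, not any real platform's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical rule set: each rule name maps to its trigger phrases.
RULES = {
    "debunked-claim": ["miracle cure", "guaranteed to cure"],
    "impersonation": ["official statement from"],
}

@dataclass
class Decision:
    post_id: str
    flagged: bool
    reasons: list  # which rules fired -- the human-readable "explanation"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def review(post_id: str, text: str) -> Decision:
    # Record every rule that fired so the decision can be audited and appealed.
    lowered = text.lower()
    reasons = [name for name, phrases in RULES.items()
               if any(p in lowered for p in phrases)]
    return Decision(post_id, flagged=bool(reasons), reasons=reasons)

audit_log: list = []  # an independent oversight body can inspect this log

decision = review("post-1", "This herb is a miracle cure for diabetes!")
audit_log.append(decision)
```

Because each `Decision` names the exact rule that triggered it, a wrongly flagged user can appeal against a specific, reviewable reason rather than an opaque verdict.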
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The examples of the EU’s Digital Services Act and Singapore’s POFMA underscore that while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Policy-driven AI solutions for real-time misinformation monitoring, balanced against ethics and privacy, are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL

Introduction
Misinformation regarding health is sensitive and can have far-reaching consequences. These include its effect on personal medical decisions taken by individuals, lack of trust in conventional medicine, delay in seeking treatments, and even loss of life. The fast-paced nature and influx of information on social media can aggravate the situation further. Recently, a report titled Health Misinformation Vectors in India was presented at the Health of India Summit, 2024. It provided certain key insights into health-related misinformation circulating online.
The Health Misinformation Vectors in India Report
The analysis was conducted by doctors at First Check, a global health fact-checking initiative, alongside DataLEADS, a Delhi-based digital media and technology company. The report covers health-related social media content posted online from October 2023 to November 2024. It notes that among all health scares, misinformation about reproductive health, cancer, vaccines, and lifestyle diseases such as diabetes and obesity is the most prominent type spread through social media. Misinformation regarding reproductive health includes illegal abortion methods that often go unchecked, and even tips on conceiving a male child, among other things.
To combat this misinformation, the report encourages stricter regulation of health-related content on digital media, urges the incorporation of health literacy and misinformation management into public health curricula, and recommends that tech platforms work on algorithms that prioritise credible information and fact-checks. Doctors state that people affected by life-threatening diseases are particularly vulnerable to such misinformation, as they are desperate to find treatment options that give themselves and their family members a chance at life. In a diverse society, a lack of clear and credible information, limited access to or awareness of tools that cross-check content, and low digital literacy push people towards alternative sources of information, which also fosters a sense of disengagement among the public. The diseases mentioned in the report as prone to misinformation are life-altering and require attention from healthcare professionals.
CyberPeace Outlook
Globally, there are cases of medically unqualified social media influencers who spread false or misleading information about various health matters. The topics covered are mostly associated with stigma and are still undergoing research; this gap allows misinformation to take root. One example is the misinformation regarding PCOS (Polycystic Ovary Syndrome) circulating online.
Amid all of this, YouTube has released a new feature aimed at combating health misinformation, trying to bridge the gap between healthcare professionals and Indians who look for trustworthy health-related information online. The initiative allows doctors, nurses, and other healthcare professionals to sign up for a health information source license, which labels their informative videos as coming from a healthcare professional. Earlier, this feature was available only to health organisations, through a health source information panel and health content shelves; this step broadens the scope of verification to individual healthcare professionals.
As digital literacy continues to grow, methods of seeking credible information, especially regarding sensitive topics such as health, require a combined effort on the part of all the stakeholders involved. We need a robust strategy for battling health-related misinformation online, including more awareness programmes and proactive participation from the consumers as well as medical professionals regarding such content.
References
- https://timesofindia.indiatimes.com/india/misinformation-about-cancer-reproductive-health-is-widespread-in-india-impacting-medical-decisions-says-report/articleshow/115931612.cms
- https://www.ndtv.com/india-news/cancer-misinformation-prevalent-in-india-trust-in-medicine-crucial-report-7165458
- https://www.newindian.in/ai-driven-health-misinformation-poses-threat-to-indias-public-health-report/
- https://www.etvbharat.com/en/!health/youtube-latest-initiative-combat-health-misinformation-india-enn24121002361
- https://blog.google/intl/en-in/products/platforms/new-ways-for-registered-healthcare-professionals-in-india-to-reach-people-on-youtube/
- https://www.bbc.com/news/articles/ckgz2p0999yo