#FactCheck - Old Video Misleadingly Claimed as Footage of Iranian President Before Crash
Executive Summary:
A video circulated on social media claiming to show Iranian President Ebrahim Raisi inside a helicopter moments before the tragic crash on May 20, 2024. Verification leaves no doubt that the claim is false: the video was shot in January 2024 and shows Raisi's visit to the Nemroud Reservoir Dam project. To trace the origin of the video, the CyberPeace Research Team conducted a reverse image search and analysed information obtained from the Islamic Republic News Agency, Mehran News, and the Iranian Students' News Agency. Further, the Associated Press pointed out inconsistencies between the viral clip and the segment shown by Iranian state television. The original video is old and unrelated to the tragic crash: the viral clip shows a snowy background, whereas footage of the crash area shows a green landscape with a river.

Claims:
A video circulating on social media claims to show Iranian President Ebrahim Raisi inside a helicopter an hour before his fatal crash.



Fact Check:
On examining the posts, we found watermarks of the IRNA news agency and Nouk-e-Qalam News on some of them.

Taking a cue from this, we performed a keyword search for any credible source of the shared video but found no such video on the IRNA news agency's website; the agency has not recently uploaded any video related to the viral claim.
On closely analysing the video, President Ebrahim Raisi can be seen looking out over snow-covered mountains, whereas the publicly available footage of the accident site shows green forest and no snow-covered mountains.
We then checked for any social media posts uploaded by IRNA News Agency and found that they had uploaded the same video on X on January 18, 2024. The post clearly indicates the President’s aerial visit to Nemroud Dam.
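For readers interested in how such frame comparisons can be made systematically, below is a minimal sketch (not the team's actual tooling) that extracts keyframes from a clip and computes perceptual hashes, which can then be matched against frames from known, dated footage. It assumes the `opencv-python`, `Pillow`, and `imagehash` packages, and the file names are hypothetical.

```python
# Sketch: compare a viral clip against dated reference footage using
# perceptual hashing. Filenames are hypothetical placeholders.
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, every_n_frames=30):
    """Sample one frame every `every_n_frames` and return its perceptual hash."""
    cap = cv2.VideoCapture(path)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

viral = frame_hashes("viral_clip.mp4")          # clip circulating on social media
reference = frame_hashes("reference_clip.mp4")  # e.g. dated agency footage

# Small Hamming distance between hashes indicates near-identical frames.
matches = sum(1 for v in viral for r in reference if v - r <= 8)
print(f"near-duplicate frame pairs: {matches}")
```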

The viral video is old and does not show the moments before the tragic chopper crash involving President Raisi.
Conclusion:
The viral clip is not related to the fatal crash of Iranian President Ebrahim Raisi's helicopter and is actually from a January 2024 visit to the Nemroud Reservoir Dam project. The claim that the video shows visuals before the crash is false and misleading.
- Claim: Viral Video of Iranian President Raisi was shot before fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Introduction
The fast-paced development of technology and the widespread use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content primarily by user-engagement metrics, since engagement is the signal they are built to maximise; hence, algorithms and search engines surface the items you are most likely to enjoy. This process was originally designed to cut through the clutter and provide the best information, but owing to the viral nature of information and user interactions, it can unknowingly spread misinformation widely.
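To illustrate the underlying mechanic, here is a minimal sketch of an engagement-weighted feed ranker. The scoring weights and `Post` fields are invented for the example and are not any platform's actual formula; the point is that a ranker optimising only predicted interaction treats engaging false content exactly like engaging accurate content.

```python
# Sketch of engagement-weighted feed ranking. Weights and fields are
# hypothetical; real platforms use far richer signals.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # invisible to the ranker; shown only to make the point

def engagement_score(p: Post) -> float:
    # Shares tend to drive virality hardest, so they get the largest weight.
    return 1.0 * p.likes + 3.0 * p.shares + 2.0 * p.comments

feed = [
    Post("Calm, accurate report", likes=120, shares=10, comments=15, is_accurate=True),
    Post("Outrage-bait false claim", likes=300, shares=90, comments=140, is_accurate=False),
]

# The ranker never consults is_accurate: the false but engaging post ranks first.
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.1f}  {p.text}")
```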
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximise user engagement, can inadvertently promote misinformation because emotionally charged content tends to trigger the strongest reactions, creating echo chambers and filter bubbles. These algorithms prioritise content based on user behaviour, leading to the promotion of emotionally charged misinformation. They also prioritise content with viral potential, which can spread false or misleading content faster than corrections or factual content.
Additionally, platforms amplify popular content, spreading it faster by presenting it to more users. Fact-checking struggles to keep pace: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish real people from organised networks of troll farms or bots that propagate false information. The result is a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information across networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process leads to "echo chambers", where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
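The feedback loop described above can be shown with a toy simulation. All numbers here are hypothetical; what matters is the direction of drift: a recommender serves the topic a user already favours, engagement reinforces the inferred preference, and exposure to other topics shrinks with each round.

```python
# Toy simulation of a personalisation feedback loop. All numbers are
# hypothetical; the point is the direction of drift, not the magnitudes.
preference = {"politics": 0.4, "sports": 0.3, "science": 0.3}  # inferred profile

for round_no in range(1, 6):
    served = max(preference, key=preference.get)   # serve the dominant topic
    preference[served] += 0.1                      # engagement reinforces it
    total = sum(preference.values())
    preference = {t: v / total for t, v in preference.items()}  # renormalise
    snapshot = {t: round(v, 2) for t, v in preference.items()}
    print(f"round {round_no}: served={served}, profile={snapshot}")
# After a few rounds the profile collapses onto one topic: an echo chamber.
```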
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it (for instance, by inspecting messages or URLs for false information) is computationally challenging and inefficient. The extensive volume of content shared daily means misinformation can propagate far quicker than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create a challenging environment in which misinformation thrives, highlighting the importance of robust countermeasures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps towards regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or categories of prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.

Introduction
Discussions at the conference focused on cybersecurity measures, specifically addressing cybercrime in the context of emerging technologies such as Non-Fungible Tokens (NFTs), Artificial Intelligence (AI), and the Metaverse. Session 5 focused on the interconnectedness of the darknet and cryptocurrency and the challenges it poses for law enforcement agencies and regulators. The panellists noted that understanding AI is necessary for enterprises, that AI models still have shortcomings even as the field works towards trustworthy AI, and that AI technology must be transparent.
Darknet and Cryptocurrency
The darknet refers to the hidden part of the internet where illicit activities have proliferated in recent years. It was initially developed to provide anonymity, privacy, and protection to specific individuals such as journalists, activists, and whistleblowers. However, it has now become a playground for criminal activities. Cryptocurrency, particularly Bitcoin, has been widely adopted on the darknet due to its anonymous nature, enabling money laundering and unlawful transactions.
Three major points emerge from this relationship: the integrated nature of the darknet and cryptocurrency, the need for regulations to prevent darknet-based crimes, and the importance of striking a balance between privacy and security.
Key Challenges:
- Integrated Relations: The darknet and cryptocurrency have evolved independently, with different motives and purposes. It is crucial to understand the integrated relationship between them and how criminals exploit this connection.
- Regulatory Frameworks: There is a need for effective regulations to prevent crimes facilitated through the darknet and cryptocurrency while striking a balance between privacy and security.
- Privacy and Security: Privacy is a fundamental right, and any measures taken to enhance security should not infringe upon individual privacy. A multistakeholder approach involving tech companies and regulators is necessary to find this delicate balance.
Challenges Associated with Cryptocurrency Use:
The use of cryptocurrency on the darknet poses several challenges. The risks associated with darknet-based cryptocurrency crimes are a significant concern. Additionally, regulatory challenges arise due to the decentralised and borderless nature of cryptocurrencies. Mitigating these challenges requires innovative approaches utilising emerging technologies.
Preventing Misuse of Technologies:
The discussion emphasised that we can stay a step ahead of those who seek to misuse these technologies, which were developed for entirely different purposes, and prevent them from being turned to crime.
Monitoring the Darknet:
The darknet, as explained, is an elusive part of the internet that necessitates the use of a special browser for access. Initially designed for secure communication by the US government, its purpose has drastically changed over time. The darknet’s evolution has given rise to significant challenges for law enforcement agencies striving to monitor its activities.
Around 95% of the activities carried out on the darknet are associated with criminal acts, and estimates suggest that over 50% of global cybercrime revenue originates from the darknet, meaning roughly half of the money generated by cybercrime flows through it.
The exploitation of the darknet has raised concerns regarding the need for effective regulation. Monitoring the darknet is crucial for law enforcement, national agencies, and cybersecurity companies. The challenges associated with the darknet’s exploitation and the criminal activities facilitated by cryptocurrency emphasise the pressing need for regulations to ensure a secure digital landscape.
Use of Cryptocurrency on the Darknet
Cryptocurrency plays a central role in the activities taking place on the darknet. The discussion highlighted its involvement in various illicit practices, including ransomware attacks, terrorist financing, extortion, theft, and the operation of darknet marketplaces. These applications leverage cryptocurrency’s anonymous features to enable illegal transactions and maintain anonymity.
AI's Role in De-Anonymizing the Darknet and Monitoring Challenges:
- 1. AI's Potential in De-Anonymizing the Darknet
During the discussion, it was highlighted how AI could be utilised to help in de-anonymizing the darknet. AI’s pattern recognition capabilities can aid in identifying and analysing patterns of behaviour within the darknet, enabling law enforcement agencies and cybersecurity experts to gain insights into its operations. However, there are limitations to what AI can accomplish in this context. AI cannot break encryption or directly associate patterns with specific users, but it can assist in identifying illegal marketplaces and facilitating their takedown. The dynamic nature of the darknet, with new marketplaces quickly emerging, adds further complexity to monitoring efforts.
- 2. Challenges in Darknet Monitoring
Monitoring the darknet poses various challenges due to its vast amount of data, anonymous and encrypted nature, dynamically evolving landscape, and the need for specialised access. These challenges make it difficult for law enforcement agencies and cybersecurity professionals to effectively track and prevent illicit activities.
- 3. Possible Ways Forward
To address the challenges, several potential avenues were discussed. Ethical considerations, striking a balance between privacy and security, must be taken into account. Cross-border collaboration, involving the development of relevant laws and policies, can enhance efforts to combat darknet-related crimes. Additionally, education and awareness initiatives, driven by collaboration among law enforcement, government entities, and academia, can play a crucial role in combating darknet activities.
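As a concrete and deliberately simplified illustration of the pattern-recognition approach described in point 1 above, the sketch below trains a text classifier to flag marketplace-style listings for human review. The tiny training set and labels are invented for the example, and a real system would work from far larger, vetted datasets; it assumes the scikit-learn library is installed.

```python
# Minimal sketch: flagging marketplace-style listings with a text
# classifier. The tiny training set is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "bulk stolen credit card dumps, escrow accepted",
    "fresh fullz with cvv, ships worldwide",
    "community forum rules and moderation policy",
    "privacy guide for journalists and activists",
]
train_labels = [1, 1, 0, 0]  # 1 = marketplace-style listing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score unseen snippets; high probabilities would be queued for human review.
for text in ["cvv dumps for sale, escrow only", "whistleblower safety checklist"]:
    prob = model.predict_proba([text])[0][1]
    print(f"{prob:.2f}  {text}")
```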
The panel also addressed questions from the audience:
- How can law enforcement agencies and regulators use AI to detect and prevent crimes involving the darknet and cryptocurrency? The panel answered that law enforcement officers should be AI- and technology-ready, and that upskilling programmes should be put in place to that end.
- How should lawyers and the judiciary understand the problem and regulate it? The panel answered that AI should be judged by its outcomes, and that the law has to be clear about what is acceptable and what is not.
- Is it possible to align AI with human intention? Can we create ethical AI rather than merely talk about using AI ethically? The panel answered that we must first understand how to behave ethically ourselves, since AI can outperform any human, and we have to learn AI. Step one is to focus on our own ethical behaviour; step two is to bring that ethical aspect into software and technologies. Aligning AI with human intention and creating ethical AI remain challenges, and the focus should be on ethical behaviour both in humans and in the development of AI technologies.
Conclusion
The G20 Conference on Crime and Security shed light on the intertwined relationship between the darknet and cryptocurrency and the challenges it presents to cybersecurity. The discussions emphasised the need for effective regulations, privacy-security balance, AI integration, and cross-border collaboration to tackle the rising cybercrime activities associated with the darknet and cryptocurrency. Addressing these challenges will require the combined efforts of governments, law enforcement agencies, technology companies, and individuals committed to building a safer digital landscape.

Introduction
Google’s search engine is widely known for its ability to tailor its search results based on user activity, enhancing the relevance of search outcomes. Recently, Google introduced the ‘Try Without Personalisation’ feature. This feature allows users to view results independent of their prior activity. This change marks a significant shift in platform experiences, offering users more control over their search experience while addressing privacy concerns.
However, even in this non-personalised mode, certain contextual factors, including location, language, and device type, continue to influence results, giving the search a baseline level of relevance. The feature carries significant policy implications, particularly in the areas of privacy, consumer rights, and market competition.
Understanding the Feature
When users opt for a non-personalised search, Google no longer shows them the individually tailored results that depend on personalisation and instead provides neutral search results. Essentially, this feature bypasses the user's stored data to return non-personalised results.
This feature allows the following changes:
- Disables the user’s ability to find past searches in Autofill/Autocomplete.
- Does not pause or delete stored activity within a user's Google account; users can still pause or delete stored activity through their data and privacy controls.
- Does not delete or disable app/website preferences such as language or search settings; these remain unaffected.
- Does not disable or delete material that users save.
- Allows a signed-in user to turn off personalisation by clicking the search option at the end of the results page.

These changes in functionality have significant implications for privacy, competition, and user trust.
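To make the distinction concrete, here is a toy sketch of a scorer that blends a base relevance signal with a personalisation signal; with the toggle off, only contextual baseline factors such as language or locale apply. The weights and fields are invented for illustration, since Google's actual ranking is proprietary.

```python
# Toy sketch contrasting personalised and non-personalised ranking.
# All weights and fields are invented; real search ranking is proprietary.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    base_relevance: float    # query/document match, always used
    locale_match: float      # contextual baseline (language, location)
    profile_affinity: float  # derived from user history; personalised mode only

def score(r: Result, personalised: bool) -> float:
    s = r.base_relevance + 0.3 * r.locale_match
    if personalised:
        s += 0.5 * r.profile_affinity  # history-based boost
    return s

results = [
    Result("Niche blog the user reads daily", 0.55, 0.8, 0.95),
    Result("Broadly authoritative overview", 0.70, 0.8, 0.10),
]

for mode in (True, False):
    ranked = sorted(results, key=lambda r: score(r, mode), reverse=True)
    label = "personalised" if mode else "without personalisation"
    print(label, "->", [r.title for r in ranked])
```

Run as written, the niche blog outranks the authoritative overview only in personalised mode, mirroring how the toggle trades individual relevance for neutrality.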
Policy Implications: An Analysis
This feature aligns with global privacy frameworks such as the GDPR in the EU and the DPDP Act in India. By adhering to principles like data minimisation and user consent, it offers users control over their data and the choice to enable or disable personalisation, thereby enhancing user autonomy and trust.
However, there is a trade-off between user expectations for relevance and the impartiality of non-personalised results. Additionally, the introduction of such features may align with emerging regulations on data usage, transparency, and consent. Policymakers play a crucial role in encouraging innovations like these while ensuring they safeguard user rights and maintain a competitive market.
Conclusion and Future Outlook
Google's 'Try Without Personalisation' feature represents a pivotal moment for innovation by balancing user privacy with search functionality. By aligning with global privacy frameworks such as the GDPR and the DPDP Act, it empowers users to control their data while navigating the complex interplay between relevance and neutrality. However, its success hinges on overcoming technical hurdles, fostering user understanding, and addressing competitive and regulatory scrutiny. As digital platforms increasingly prioritise transparency, such features could redefine user expectations and regulatory standards in the evolving tech ecosystem.