#FactCheck - Viral image circulating on social media depicts a natural optical illusion from Epirus, Greece.
Executive Summary:
A viral image circulating on social media is claimed to show a natural optical illusion from Epirus, Greece. Upon fact-checking, however, it was found that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team traced it through a reverse image search and analysed it with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon from Epirus, Greece is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media depicts a natural optical illusion from Epirus, Greece. Users are sharing it on X (formerly Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.

Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic media check: the Hive AI Detection tool found the image to be 100% AI-generated. We then performed a reverse image search to trace the source of the image and landed on similar posts linking to an Instagram account under the name hamidreza.edalatnia, which posts visuals of the same style.

We searched the account for the viral image and confirmed that it was created by this person.

The photo was posted on 10th December 2023, and the artist mentioned that the image was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows an optical illusion from Epirus, Greece is misleading.
Conclusion:
The image claiming to show a natural optical illusion in Epirus, Greece is not genuine. It is an AI-generated artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion tool. Hence, the claim is false.

Introduction:
Welcome to the second edition of our Digital Forensics blog series. In our previous blog, we discussed what digital forensics is, the process and tools it relies on, and the challenges faced in the field, and we looked at what the future of digital forensics may hold. Today, we will explore the differences between three similar-sounding terms that vary significantly in function: copying, cloning, and imaging.
In digital forensics, the preservation and analysis of electronic evidence are essential for investigations and legal proceedings. Replicating data and devices without compromising the integrity of the original evidence is one of the fundamental tasks in this domain.
Three primary techniques -- copying, cloning, and imaging -- are used for this purpose. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will talk about the importance of each technique, their applications and why imaging is considered the best for forensic investigations.
Copying
Copying means duplicating data or files from one location to another, typically using standard copy commands. When dealing with evidence, however, a plain copy is rarely sufficient: standard copy operations can alter metadata and will miss hidden or deleted data.
The characteristics of copying include:
- Speed: copying is simpler and faster than cloning or imaging.
- Risk: metadata may be altered, and not all the data may be captured.
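The metadata risk is easy to demonstrate. The sketch below is a minimal Python illustration (not a forensic tool) showing that a plain copy resets a file's modification time, while a metadata-preserving copy keeps it; the file names and the backdating step are hypothetical scaffolding for the demo.

```python
import os
import shutil
import tempfile
import time

def copy_with_metadata(src: str, dst: str) -> None:
    """Copy file contents AND metadata (timestamps, permission bits)."""
    shutil.copy2(src, dst)

# Demo: a plain copy resets the modification time, while copy2
# preserves it -- one reason plain copies are weak as evidence.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "evidence.txt")
    with open(src, "w") as f:
        f.write("original data")
    day_ago = time.time() - 86400
    os.utime(src, (day_ago, day_ago))  # backdate the source file by a day

    plain = os.path.join(tmp, "plain.txt")
    shutil.copy(src, plain)            # content only; mtime becomes "now"
    preserved = os.path.join(tmp, "preserved.txt")
    copy_with_metadata(src, preserved)

    print(os.path.getmtime(plain) - day_ago > 3600)        # True: mtime reset
    print(abs(os.path.getmtime(preserved) - day_ago) < 2)  # True: mtime kept
```

Even `copy2` only preserves what the filesystem exposes; deleted files and slack space are still lost, which is why forensics reaches for cloning or imaging instead.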
Cloning
Cloning is the process of transferring the entire contents of a hard drive or storage device onto another storage device. It captures the active data as well as unallocated space and hidden partitions, preserving the whole structure of the original device. Cloning is generally performed at the sector level, and a clone can be used as a working copy of a device.
Characteristics of cloning:
- Bit-for-bit replication: cloning preserves the exact content and the whole structure of the original device.
- Use cases: cloning is used when the original device must be kept intact for further examination or legal proceedings.
- Time-consuming: cloning usually takes longer than simple copying since it replicates the device in full, though the time depends on factors such as the size of the storage device, the speed of the devices involved, and the cloning method.
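As a rough illustration of bit-for-bit replication, the Python sketch below reads a source in fixed-size blocks, writes each block unchanged to the destination, and returns a SHA-256 digest so the clone can be verified afterwards. This is a simplified sketch, not a forensic cloning tool: real cloning targets raw devices (a device node such as /dev/sdb on Linux, used here only as a hypothetical example) and must also handle read errors and hardware write-blocking.

```python
import hashlib

def clone(src_path: str, dst_path: str, block_size: int = 4096) -> str:
    """Bit-for-bit clone: read the source in fixed-size blocks, write
    each block verbatim to the destination, and return the SHA-256
    digest of everything copied for later verification."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()
```

In practice one would call something like `clone("/dev/sdb", "/mnt/target/clone.raw")` with root privileges; re-hashing the destination and comparing digests confirms the clone matches the source exactly.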
Imaging:
Imaging is the process of creating a forensic image of a storage device. A forensic image is a bit-for-bit replica of every piece of data on the source device, including allocated space, unallocated space, and slack space.
The image is then used for analysis and investigation while the original evidence is left untouched. Unlike cloning, which produces working copies, forensic images are intended for analysis and investigation and are not meant to be used as working copies of a device.
Characteristics of Imaging:
- Integrity: imaging preserves the integrity and authenticity of the evidence.
- Flexibility: a forensic image can be mounted as a virtual drive, allowing analysis of the data without affecting the original evidence.
- Metadata: imaging captures the metadata associated with the data, supporting forensic analysis.
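The integrity point above can be sketched in Python: the functions below write a raw image of a source alongside a small JSON manifest recording its size and SHA-256 hash, then re-hash the image on demand to verify it has not changed. This is a minimal illustration under the assumption that a raw image plus a hash manifest suffices; real imaging tools (producing formats such as E01 or AFF) also handle compression, case metadata, and error recovery.

```python
import hashlib
import json

def create_image(device_path: str, image_path: str,
                 block_size: int = 1 << 20) -> dict:
    """Read the source device/file block by block, write a raw image,
    and store a JSON manifest with the size and SHA-256 hash."""
    digest = hashlib.sha256()
    total = 0
    with open(device_path, "rb") as src, open(image_path, "wb") as img:
        while chunk := src.read(block_size):
            img.write(chunk)
            digest.update(chunk)
            total += len(chunk)
    manifest = {"source": device_path, "bytes": total,
                "sha256": digest.hexdigest()}
    with open(image_path + ".json", "w") as f:
        json.dump(manifest, f)
    return manifest

def verify_image(image_path: str) -> bool:
    """Re-hash the image and compare against the stored manifest."""
    with open(image_path + ".json") as f:
        manifest = json.load(f)
    digest = hashlib.sha256()
    with open(image_path, "rb") as img:
        while chunk := img.read(1 << 20):
            digest.update(chunk)
    return digest.hexdigest() == manifest["sha256"]
```

Verification before and after every analysis session is what lets an examiner assert in court that the evidence was never altered.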
Key Differences
- Purpose: copying suits everyday use but is inadequate for forensic investigations requiring data integrity; cloning and imaging are designed for forensic preservation.
- Depth of replication: cloning and imaging capture the entire storage device, including hidden, unallocated, and deleted data, whereas copying may miss crucial forensic data.
- Data integrity: imaging and cloning preserve the integrity of the original evidence, a critical aspect of forensic investigations, making them suitable for legal and forensic use.
- Forensic soundness: imaging is considered the gold standard in digital forensics due to its comprehensive and non-invasive nature.
- Output: cloning is generally done from one hard disk to another, whereas imaging creates a file (often compressed) containing a snapshot of the entire hard drive or specific partitions.
Conclusion
Copying, cloning, and imaging all deal with the duplication of data or storage devices, but with variations that matter greatly in digital forensics. For forensic investigations, imaging is the preferred approach because it preserves the exact state of the evidence for analysis and legal use. It is therefore essential for forensic investigators to understand these differences in order to obtain genuine, uncontaminated digital evidence for their investigations and legal arguments.

Introduction
Twitter Inc.’s appeal against blocking orders for specific accounts issued by the Ministry of Electronics and Information Technology was dismissed by a single-judge bench of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media corporation had approached the court in defiance of government directives.
Twitter’s locus standi had been called into doubt by the government, which argued that, as a foreign corporation, Twitter could not invoke Articles 19 and 21. Additionally, the government claimed that because Twitter was only designed to serve as an intermediary, there was no “jural relationship” between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Twitter had argued in its appeal that the orders “fall foul of Section 69A both substantially and procedurally.” It contended that, under Section 69A, account holders were to be notified before their tweets and accounts were blocked; the Ministry, however, did not issue any notice to these account holders.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter’s compliance officer requesting that they come before them and provide an explanation for why the Blocking Orders were not followed and why no action should be taken against them.
Twitter replied on June 9 that the content against which it had not followed the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the direction on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but gave an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders “under protest,” Twitter approached the High Court with a petition challenging them.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that its users had pushed for these rights. It also maintained that even as a foreign company it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Further, it argued that the reason for the blocking was not stated in each case, and that Section 69A’s provision for blocking should apply only to the offending URL rather than the entire account: blocking an entire account prevents the creation of future information, whereas blocking an offending tweet applies only to information already created.
Conclusion
The evolution of cyberspace has been substantiated by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating a blanket of ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, as legalised under the Intermediary Guidelines, 2021 and 2022, issued by MeitY. Referring to Section 79 of the Information Technology Act, which provides an exemption from liability for intermediaries in certain instances, the court observed: “Intermediary is bound to obey the orders which the designate authority/agency which the government fixes from time to time.”

Introduction
The spread of information in the quickly changing digital age presents both advantages and difficulties. The terms "misinformation" and "disinformation" are commonly used in conversations concerning information inaccuracy. It is important to counter such prevalent threats, especially in light of how they affect countries like India, and to examine the practical ramifications of misinformation, disinformation, and other common digital threats. Like many other nations, India had to deal with the fallout from fraudulent online activities in 2023, which highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, prompting the launch of a rival chatbot, Google's Bard, in 2023. These chatbots represent a significant breakthrough in artificial intelligence (AI): driven by Large Language Models (LLMs), they produce replies by drawing on information from huge datasets. Similarly, AI image generators that use diffusion models trained on existing datasets attracted a great deal of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were created for pornographic or entertainment content. Social turmoil, political instability, and financial harm were among the outcomes. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems with synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical concerns, security risks, detection difficulties, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to the swaying of elections and the intensification of intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. With the AePS's weaknesses exploited by cybercriminals, many depositors had their hard-earned assets stolen through fraudulent activity. This demonstrates the real effects of biometric fraud on those whose Aadhaar-linked data was manipulated and accessed without authorization. The use of biometric data in financial systems not only endangers individual financial stability but also raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has sent a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advice directs intermediaries to delete reported information within 36 hours, disable access in compliance with IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has recently come up with an advisory to social media intermediaries to identify misinformation and deepfakes and to make sure of the compliance of Information Technology (IT) Rules 2021. It is the legal obligation of online platforms to prevent the spread of misinformation and exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 were amended in 2023, and the online gaming industry is now required to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from the self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and open behaviour throughout the online gaming industry.
- With an emphasis on Personal Data Protection, the government enacted the Digital Personal Data Protection Act, 2023. It is a brand-new framework for digital personal data protection which aims to protect the individual's digital personal data.
- The " Cyber Swachhta Kendra " (Botnet Cleaning and Malware Analysis Centre) is a part of the Government of India's Digital India initiative under the (MeitY) to create a secure cyberspace. It uses malware research and botnet identification to tackle cybersecurity. It works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and false content on their networks. YouTube prioritizes eliminating content that violates its regulations, reducing recommendations of questionable information, endorsing reliable news sources, and assisting reputable creators. It uses unambiguous facts and expert consensus to thwart misrepresentation, and its enforcement process combines content reviewers and machine learning to quickly delete policy-violating material. Policies are designed in partnership with external experts and creators. To improve the overall quality of information users can access, the platform also lets users flag material, places a strong emphasis on media literacy, and prioritizes providing context.
Meta’s policies address different misinformation categories, aiming for a balance between expression, safety, and authenticity. Content directly contributing to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation, the efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
By 2024, the vision for "Tech for Good" will have expanded to include programs that enable people to understand the ever-complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defenses and combat dishonest practices. This entails encouraging digital literacy and providing users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrimes. Furthermore, the focus is on promoting and exposing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment with the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445