#FactCheck - "Deepfake video falsely circulated as that of a Syrian prisoner who saw sunlight for the first time in 13 years"
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate the prisoner’s facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner had been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claim: A viral video shows a Syrian prisoner seeing sunlight for the first time in 13 years.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.
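For readers who wish to replicate this first step, the sketch below shows one common way to sample keyframes from a video with OpenCV so they can be submitted to a reverse-image search such as Google Lens. The file paths and the one-frame-per-second sampling rate are illustrative assumptions, not the exact workflow used in this investigation.

```python
# Minimal keyframe-extraction sketch (assumes OpenCV: pip install opencv-python).
# Paths and the one-frame-per-second sampling rate are illustrative choices.
import cv2

def extract_keyframes(video_path: str, out_prefix: str, every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0       # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))     # frames to skip between samples
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:                     # keep one frame per sampling interval
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    n = extract_keyframes("viral_video.mp4", "keyframe")  # placeholder file name
    print(f"Saved {n} keyframes for reverse-image search")
```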

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools such as TrueMedia confirm that the video was digitally manipulated using AI technology. Furthermore, there is no supporting information in any reliable sources. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

Introduction
The Central Board of Secondary Education (CBSE) has issued a warning to students about fake social media accounts that spread false information in its name. The board has released a list of 30 such fake accounts and cautioned students not to trust the information coming from them, expressing concern that these handles mislead students and parents by spreading fake information under the name and logo of the CBSE. The board has also clarified that it is not responsible for the information being spread by these fake accounts.
The Central Board of Secondary Education (CBSE), a venerable institution in the realm of Indian education, has found itself ensnared in the web of cyber duplicity. Impersonation attacks, a sinister facet of cybercrime, have burgeoned, prompting the Board to adopt a vigilant stance against the proliferation of counterfeit social media handles that masquerade under its esteemed name and emblem.
The CBSE has revealed a list of approximately 30 spurious handles that have been sowing seeds of disinformation across the social media landscape. These digital doppelgängers, cloaked in the Board's identity, have been identified and exposed. The Board's official beacon in this murky sea of falsehoods is the verified handle '@cbseindia29', a lighthouse guiding the public to the shores of authentic information.
This unfolding narrative signifies the Board's unwavering commitment to tackle the scourge of misinformation and to fortify the bulwarks safeguarding the sanctity of its official communications. By spotlighting the rampant growth of fake social media personas, the CBSE endeavors to shield the public from the detrimental effects of misleading information and to preserve the trust vested in its official channels.
CBSE Impersonator Accounts
The list of identified malefactors, parading under the CBSE banner, serves as a stark admonition to the public to exercise discernment while navigating the treacherous waters of social media platforms. The CBSE has initiated appropriate legal manoeuvres against these unauthorised entities to stymie their dissemination of fallacious narratives.
The Board has previously unfurled comprehensive details concerning the impending board examinations for both Class 10 and Class 12 in the year 2024. These academic assessments are slated to commence from February 15 to April 2, 2024, with a uniform start time of 10:30 AM (IST) across all designated dates.
The CBSE has made it unequivocally clear that there are nefarious entities lurking in the shadows of social media, masquerading in the guise of the CBSE. It has implored students and the general public not to be ensnared by the siren songs emanating from these fraudulent accounts and has also unfurled a list of these imposters. The Board's warning is a beacon of caution, illuminating the path for students as they navigate the digital expanse with the impending commencement of the CBSE Class X and XII exams.
Sounding The Alarm
The Central Board of Secondary Education (CBSE) has sounded the alarm, issuing an advisory to schools, students, and their guardians about the existence of fake social media platform handles that brandish the board’s logo and mislead the academic community. The board has identified about 30 such accounts on the microblogging site 'X' (formerly known as Twitter) that misuse the CBSE logo and acronym, sowing confusion and disarray.
The board is in the process of taking appropriate action against these deceptive entities. CBSE has also stated that it bears no responsibility for any information disseminated by any other source that unlawfully appropriates its name and logo on social media platforms.
Sources reveal that these impostors post false information about various updates, including admissions and exam schedules. After receiving complaints about such accounts on 'X', the CBSE issued the advisory and initiated action against those operating them.
The Brute Nature of Impersonation
In the contemporary digital epoch, cybersecurity has ascended to a position of critical importance. It is the bulwark that ensures the sanctity of computer networks is maintained and that computer systems are not marked as prey by cyber predators. Cyberattacks are insidious stratagems executed with the intent of expropriating, manipulating, or annihilating authenticated user or organizational data. It is imperative that cyberattacks be mitigated at their roots so that users and organizations utilizing internet services can navigate the digital domain with a sense of safety and security. Knowledge about cyberattacks thus plays a pivotal role in educating cyber users about the diverse types of cyber threats and the preventive measures to counteract them.
Impersonation Attacks are a vicious form of cyberattack, characterised by the malicious intent to extract confidential information. These attacks revolve around a process where cyber attackers eschew the use of malware or bots to perpetrate their crimes, instead wielding the potent tactic of social engineering. The attacker meticulously researches and harvests information about the legitimate user through platforms such as social media and then exploits this information to impersonate or masquerade as the original, legitimate user.
The threats posed by Impersonation Attacks are particularly insidious because they demand immediate action, pressuring the victim to act before discerning between the authenticated user and the impersonator. Such attacks are perilous because the impersonated user holds rights to private information. They are typically executed by exploiting a resemblance to the original user's identity, such as an email ID: an address with minute differences from the legitimate one is employed, which sets this technique apart from the ordinary phishing mechanism. The addresses are so similar that, without close attention, the differences are easily overlooked; moreover, they appear correct because they generally contain no obvious spelling errors.
Strategies to Prevent
To prevent Impersonation Attacks, the following strategies can be employed:
- Proper security mechanisms help identify malicious emails and thereby filter spamming email addresses on a regular basis.
- Double-checking sensitive information is crucial, especially when important data or funds need to be transferred. It is vital to ensure that the data reaches a legitimate user by cross-verifying the email address (a minimal sketch of such a lookalike check follows this list).
- Ensuring organizational-level security is paramount. Organizations should have specific domain names assigned to them, which can help employees and users distinguish their identity from that of cyber attackers.
- Protection of User Identity is essential. Employees must not publicly share their private identities, which can be exploited by attackers to impersonate their presence within the organization.
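As a concrete illustration of the cross-verification advice above, the following sketch flags sender addresses that closely resemble, but do not exactly match, a trusted address. It is a minimal sketch using only the Python standard library; the similarity threshold and the sample addresses are illustrative assumptions, not a production rule.

```python
# Lookalike-address check: a minimal sketch using only the Python standard library.
# The 0.85 similarity threshold and the sample addresses are illustrative assumptions.
from difflib import SequenceMatcher

def is_suspicious_lookalike(sender: str, trusted: str, threshold: float = 0.85) -> bool:
    """Flag addresses that are very similar to a trusted address but not identical."""
    sender, trusted = sender.strip().lower(), trusted.strip().lower()
    if sender == trusted:
        return False                       # exact match: the legitimate sender
    similarity = SequenceMatcher(None, sender, trusted).ratio()
    return similarity >= threshold         # near-match: likely impersonation attempt

if __name__ == "__main__":
    trusted = "accounts@example-corp.com"  # hypothetical legitimate address
    for candidate in ["accounts@example-corp.com",   # legitimate
                      "accounts@examp1e-corp.com",   # digit '1' in place of 'l'
                      "accounts@exarnple-corp.com",  # 'rn' masquerading as 'm'
                      "newsletter@unrelated.org"]:   # genuinely different sender
        print(candidate, "->", is_suspicious_lookalike(candidate, trusted))
```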
Conclusion
The CBSE's struggle against the masquerade of misinformation is a reminder of the vigilance required to safeguard the legitimacy of our digital interactions. As we navigate the complex and uncharted terrain of the internet, let us arm ourselves with the knowledge and discernment necessary to unmask these digital charlatans and uphold the sanctity of truth.
References
- https://timesofindia.indiatimes.com/city/ahmedabad/cbse-warns-against-misuse-of-its-name-by-fake-social-media-handles/articleshow/107644422.cms
- https://www.timesnownews.com/education/cbse-releases-list-of-fake-social-media-handles-asks-not-to-follow-article-107632266
- https://www.etvbharat.com/en/!bharat/cbse-public-advisory-enn24021205856

Introduction
Meta is the leader among social media platforms, with a widespread network of users and services across global cyberspace. The company has been revolutionising messaging and connectivity since 2004. Its platforms have brought people closer together, but their very popularity is also a liability: popular platforms are favoured by cybercriminals seeking unauthorised data or anonymous chatrooms that are difficult to trace. These bad actors often operate under fake names or accounts so that they are not caught. Facebook and Instagram have often been in the headlines as portals where cybercriminals operate and commit crimes.
Alongside platform-level defences, financial players are also stepping in to keep netizens' data and money safe: in a first-of-its-kind service, Paytm is offering customers insurance against cyber fraud through the 'Paytm Payment Protect' cover, discussed in detail below.
Meta’s Cybersecurity
Meta has some of the best cybersecurity in the world, but that doesn't mean it cannot be breached. The social media giant is among the most vulnerable platforms in cases of data breaches, as various third parties are also involved. As seen in the case of Cambridge Analytica, a huge chunk of user data was made available to influence users during elections. Meta needs to stay ahead of the curve to keep the platform safe and secure. To this end, Meta has deployed various AI- and ML-driven crawlers and software that work on keeping the platform safe for its users while identifying accounts that may be operated by bad actors, which are then removed. This is supported by the keen participation of users through the reporting mechanism. Meta-Cyber provides visibility of all OT activities, continuously observes the PLC and SCADA for changes and configuration, and checks authorisation and its levels. Meta also runs various penetration-testing and bug bounty programmes to reduce vulnerabilities in its systems and applications; these testers are paid handsomely depending on the severity of the vulnerability they find.
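Meta does not disclose the internals of these detection systems, so the sketch below is only a toy illustration of the general pattern: training a classifier on behavioural account features and routing high-risk accounts to human review. The features, synthetic data, and model choice are all assumptions made for illustration, not Meta's actual pipeline.

```python
# Toy illustration of ML-based account screening (assumes scikit-learn and numpy).
# Features, synthetic data, and the model choice are illustrative, not Meta's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features per account: [account_age_days, posts_per_day, follower_following_ratio]
legit = np.column_stack([rng.uniform(200, 3000, 500),   # older accounts
                         rng.uniform(0.1, 5, 500),      # moderate posting rate
                         rng.uniform(0.5, 3, 500)])     # balanced follower ratio
fake = np.column_stack([rng.uniform(1, 60, 500),        # very new accounts
                        rng.uniform(20, 200, 500),      # bursty posting
                        rng.uniform(0.0, 0.2, 500)])    # follow many, followed by few
X = np.vstack([legit, fake])
y = np.array([0] * 500 + [1] * 500)                     # 1 = suspicious

model = LogisticRegression().fit(X, y)

# Score an unseen account; high-risk accounts go to human review, not auto-removal.
candidate = np.array([[10, 80, 0.05]])                  # 10 days old, 80 posts/day
risk = model.predict_proba(candidate)[0, 1]
print(f"Suspicion score: {risk:.2f}")
```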
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by the Indian firm CyberRoot Risk Analysis, allegedly involved in hack-for-hire services. Alongside this, Meta has taken down 900 fraudulently run accounts, said to be operated from China by an unknown entity. CyberRoot Risk Analysis was responsible for sharing malware over the platform and used it to impersonate its targets, i.e. lawyers, doctors, entrepreneurs, and professionals in industries such as cosmetic surgery, real estate, investment, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. They would get in touch with such targets and share malware hidden in files, which would often lead to data breaches and, subsequently, different types of cybercrimes.
Meta and its team are working tirelessly to eradicate the influence of such bad actors from their platforms; the use of AI- and ML-based tools for this purpose has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy available for fraudulent mobile transactions up to Rs 10,000 for a premium of Rs 30. The cover ‘Paytm Payment Protect’ is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase the trust in digital payments, which will push up adoption. The insurance cover protects transactions made through UPI across all apps and wallets. The insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has all the digital safeguards in place, most UPI-related frauds are undertaken by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as payments. There are also many fraudsters collecting payments by pretending to be merchants. These types of fraud resulted in losses of more than Rs 63 crore in the previous financial year. The issue of data insurance is new to India but is indeed the need of the hour: the majority of netizens are unaware of the value of their data and hence remain ignorant of data protection. Such steps will result in safer data management and protection mechanisms, thus safeguarding Indian cyberspace.
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy. With new legislation coming out on the subject, we can expect stronger policies to prevent cybercrimes and cyber-attacks. The efforts of tech giants like Meta need to gain more speed in improving the cyber safety of both the platform and the user, to make sure that the future of these platforms remains strongly secured. The concept of data insurance needs to be shared with netizens to increase awareness about the subject. The initiative by Paytm is a monumental one, as it will encourage more platforms and banks to commit to coverage for cybercrimes. With increasing cases of cybercrime, such financial coverage comes as a light of hope and security for netizens.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates that the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in the training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses. The resulting errors range from the harmless, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
A significant source of hallucination in machine learning algorithms is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner, as illustrated in the sketch below.
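To make the adversarial-attack idea concrete, the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a toy PyTorch classifier: a small, targeted tweak to the input, guided by the model's own gradients, can change the model's prediction. The toy model, random input, and epsilon value are illustrative assumptions, not an attack on any real system.

```python
# FGSM adversarial-perturbation sketch (assumes PyTorch). The toy model, input,
# and epsilon are illustrative; real attacks target trained production models.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 8, requires_grad=True)   # a benign input
true_label = torch.tensor([0])

# Forward pass, then gradient of the loss with respect to the *input*.
loss = loss_fn(model(x), true_label)
loss.backward()

# FGSM: step a small amount in the direction that most increases the loss.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

# The perturbed input often changes the predicted class despite looking similar.
print("original prediction :", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```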
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises creating ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring that users are informed about AI and its capabilities and limitations. Transparent communication is key to this.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses (a minimal automated-audit sketch follows this list).
- Establishing robust liability mechanisms to address harms caused by AI-generated material such as misinformation. This fosters trust and accountability.
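The audit measure above can be partially automated. Below is a minimal sketch of one such pattern: replaying a fixed reference set of question-answer pairs through a model and flagging regressions before release. The `query_model` callable, the reference set, and the pass bar are placeholders for whatever system is actually being audited.

```python
# Minimal regression-audit sketch for an AI system. `query_model` is a placeholder
# for the real system under audit; the reference set and 0.95 pass bar are assumptions.
from typing import Callable

REFERENCE_SET = [
    ("What is the boiling point of water at sea level in Celsius?", "100"),
    ("How many days are in a leap year?", "366"),
]

def audit(query_model: Callable[[str], str], pass_bar: float = 0.95) -> bool:
    failures = []
    for question, expected in REFERENCE_SET:
        answer = query_model(question)
        if expected not in answer:        # crude containment check; real audits
            failures.append(question)     # would use task-specific scoring
    accuracy = 1 - len(failures) / len(REFERENCE_SET)
    for q in failures:
        print("REGRESSION:", q)
    return accuracy >= pass_bar           # gate the release on the audit result

if __name__ == "__main__":
    def fake_model(q: str) -> str:        # stand-in for the system under audit
        return "366" if "leap" in q else "100 degrees"
    print("Audit passed:", audit(fake_model))
```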
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must be pursued responsibly. Overregulation of AI can stifle innovation; on the other hand, a lax approach could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
By fostering ethical AI development while enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21