#FactCheck - AI-Generated Video of Peacock ‘Rescue’ Falsely Shared as Real
Executive Summary:
A video showing a peacock allegedly trapped in ice has been going viral on social media. In the clip, the peacock appears to be frozen in a snow-covered area. Moments later, a man is seen approaching with a hammer and breaking the ice to rescue the bird. Social media users are sharing the video as a real-life incident, praising the peacock’s resilience and describing the scene as inspiring. However, CyberPeace research found the viral claim to be misleading. Our research revealed that the video was created using Artificial Intelligence (AI) and is being falsely circulated as a real incident.
Claim:
Facebook user ‘Ras Bihari Pathak’ shared the viral video on January 25, 2026, with the caption: “This peacock is not standing on ice, but on courage. It reminds us that no matter how harsh the circumstances are, hope always returns in colours.” The archived version of the post can be accessed here.

Fact Check:
To verify the claim, we first conducted a keyword search on Google to check whether any such real incident involving a peacock trapped in ice had been reported. However, no credible or verified media reports were found. Next, we closely examined the viral video. Upon observation, the peacock’s movements and reactions appeared unnatural and artificial. The motion lacked realistic physical behaviour, raising suspicion that the video might have been digitally generated. To confirm this, we analysed the clip using the AI video detection tool Hive Moderation, which indicated a 99 per cent or higher likelihood that the video was AI-generated.
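The final verification step above boils down to thresholding an AI-likelihood score. The sketch below illustrates that logic only; the JSON field name and response shape are invented placeholders, not the actual Hive Moderation API schema, and the 0.99 threshold mirrors the "99 per cent or higher" figure from the fact check.

```python
import json

# "99 per cent or higher" likelihood, as reported in the fact check above
AI_LIKELIHOOD_THRESHOLD = 0.99

def classify_video(detector_response: str) -> str:
    """Interpret a (hypothetical) detector JSON payload into a verdict.

    The field name 'ai_generated_score' is an assumption for illustration;
    real detection services each define their own response schema.
    """
    payload = json.loads(detector_response)
    score = payload["ai_generated_score"]  # assumed field name
    if score >= AI_LIKELIHOOD_THRESHOLD:
        return "likely AI-generated"
    return "inconclusive"

# Example: a score of 0.997 crosses the threshold
sample = json.dumps({"ai_generated_score": 0.997})
print(classify_video(sample))  # likely AI-generated
```

The point of the sketch is that a detection score is evidence, not proof: a high score triggers further manual review (as done here with keyword searches and frame-by-frame observation), rather than an automatic verdict.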

Conclusion:
CyberPeace research confirms that the viral video showing a peacock allegedly trapped in ice is not real. The clip has been created using Artificial Intelligence and is being shared on social media with a false and misleading claim.

Introduction
As technology grows and cyber-crimes increase, a new kind of cyber-attack is on the rise, and it is not arriving in your inbox or on your computer: it targets your phone, especially your smartphone. Cybercriminals are expanding their reach in India with a new text-messaging fraud targeting individuals. The Indian Computer Emergency Response Team (CERT-In) has warned against "smishing", or SMS phishing.
Understanding Smishing
Smishing is a combination of the terms "SMS" and "phishing." It entails sending false text messages that appear to be from reputable sources such as banks, government organizations, or well-known companies. These communications frequently generate a feeling of urgency in their readers, prompting them to click on harmful links, expose personal information, or conduct financial transactions.
When hackers "phish," they send out phony emails in the hopes of tricking the receiver into clicking on a dangerous link. Smishing is just the use of text messaging rather than email. In essence, these hackers are out to steal your personal information to commit fraud or other cybercrimes. This generally entails stealing money – usually your own, but occasionally also the money of your firm.
Cybercriminals typically use the following tactics to lure victims and steal their information:
Malware- The cyber crooks send a smishing URL that may trick you into downloading malicious software onto your phone. This SMS malware may masquerade as legitimate software, deceiving you into entering sensitive information that is then transmitted to the crooks.
Malicious website- The URL in the smishing message may direct you to a bogus website that seeks sensitive personal information. Cybercriminals employ custom-made rogue sites meant to seem like legitimate ones, making it simpler to steal your information.
Smishing text messages often appear to be from your bank, asking you to share sensitive personal information such as ATM PINs or account details. Mobile device cybercrime is increasing, as is mobile device usage. Aside from the fact that texting is the most prevalent use of cell phones, a few additional aspects make this an especially pernicious security issue. Let's go over how smishing attacks operate.
Modus Operandi
The cyber crooks commit the fraud via SMS. Because attackers assume the identity of someone trusted, smishing attackers can use social engineering techniques to sway a victim's decision-making. Three factors drive this deception:
- Trust- Cyber crooks target individuals by posing as a legitimate individual or organisation, which naturally lowers a person's defences against threats.
- Context- Using a circumstance that might be relevant to targets helps an attacker to create an effective disguise. The message feels personalized, which helps it overcome any assumption that it is spam.
- Emotion- The tone of the SMS is critical; it makes the victim believe the matter is urgent and requires rapid action. Using these tactics, attackers craft messages that compel the receiver to act.
Typically, attackers want the victim to click a URL in the text message, which leads to a phishing tool that asks for sensitive information. This phishing tool frequently takes the form of a website or app that also assumes a phony identity.
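The trust-context-emotion pattern above lends itself to simple heuristics. The sketch below is a minimal, illustrative filter, not a production defence: it flags a message that combines an embedded link with urgency language. The keyword list and URL pattern are assumptions chosen for demonstration.

```python
import re

# Illustrative urgency vocabulary -- a real filter would use a much
# larger, curated list and likely a trained model.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "expires"}

# Matches full URLs and bare short-link forms such as bit.ly/...
URL_PATTERN = re.compile(r"https?://\S+|\bbit\.ly/\S+", re.IGNORECASE)

def looks_like_smishing(message: str) -> bool:
    """Flag messages that pair a link (the call to action) with
    urgency language (the 'emotion' element described above)."""
    has_link = bool(URL_PATTERN.search(message))
    has_urgency = any(word in message.lower() for word in URGENCY_WORDS)
    return has_link and has_urgency

print(looks_like_smishing(
    "Your account is suspended. Verify immediately: http://example.com/x"))  # True
print(looks_like_smishing("See you at lunch tomorrow"))  # False
```

Such a heuristic produces false positives (a genuine bank alert also sounds urgent), which is exactly why the advice later in this piece is to contact the institution directly rather than trust any automated judgement of a message.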
How does Smishing Spread?
As noted earlier, smishing attacks are delivered through ordinary text messages, which typically appear to come from known and trusted sources. People are less careful while on their phones: many believe their cell phones are more secure than their desktops. However, smartphone security has limits and cannot always guard against smishing directly.
Phones themselves are the target. While Android smartphones dominate the market and are a prime target for malware text messages, iOS devices are just as vulnerable. Although Apple's iOS has a strong reputation for security, no mobile operating system can protect you from phishing-style attacks on its own. A false feeling of security, regardless of platform, can leave users especially exposed.
Kinds of smishing attacks
Some common types of smishing attacks include:
- COVID-19 Smishing: In April 2020, the Better Business Bureau observed an increase in reports of US government impersonators sending text messages asking consumers to take a mandatory COVID-19 test via a linked website. Smishing attacks of this kind can evolve readily, as feeding on pandemic fears is an effective way of victimising the public.
- Gift Smishing: This kind of smishing promises free services or products, such as giveaways or shopping rewards, from a reputable or other company. Attackers frame the offer as limited-time or exclusive, and the offers are so lucrative that recipients get excited and fall into the trap.
CERT Guidelines
CERT-In shared some steps to avoid falling victim to smishing.
- Never click on suspicious links in SMS, social media chats, or posts.
- Use online resources to validate shortened URLs.
- Always check the link before clicking.
- Use updated antivirus and antimalware tools.
- If you receive any suspicious message pretending to be from a bank or institution, immediately contact the bank or institution.
- Use a separate email account for personal online transactions.
- Enforce multi-factor authentication (MFA) for emails and bank accounts.
- Keep your operating system and software updated with the latest patches.
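The guideline on validating shortened URLs can be done without visiting the destination page at all: ask the server where the link redirects and inspect that address first. The sketch below uses only Python's standard library; it is a simplified illustration (no retries, no chained-redirect handling), and any URL passed to it is a placeholder.

```python
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Redirect handler that refuses to follow redirects, so we can
    read the Location header instead of fetching the final page."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None makes urllib raise HTTPError on 3xx

def expand_short_url(url: str) -> str:
    """Return the first redirect target of a shortened URL, or the
    URL itself if the server does not redirect."""
    opener = urllib.request.build_opener(NoRedirect())
    request = urllib.request.Request(url, method="HEAD")
    try:
        response = opener.open(request, timeout=10)
        return response.geturl()  # no redirect occurred
    except urllib.error.HTTPError as exc:
        # With redirects disabled, 3xx responses surface as HTTPError;
        # the Location header holds the real destination.
        return exc.headers.get("Location", url)
```

A destination that looks nothing like the brand the message claims to be from (for example, a random domain behind a "bank" link) is a strong smishing signal, and matches the CERT-In advice to check the link before clicking.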
Conclusion
Smishing uses fraudulent mobile text messages to trick people into downloading malware, sharing sensitive data, or paying money to cybercriminals. With the latest technological developments, it has become vital to stay vigilant in the digital era, protecting not only your computers but also the devices that fit in the palm of your hand; CERT-In's warning plays a vital role in this. Awareness and best practices are pivotal in safeguarding yourself from evolving threats.
References
- https://www.ndtv.com/india-news/government-warns-of-smishing-attacks-heres-how-to-stay-safe-4709458
- https://zeenews.india.com/technology/govt-warns-citizens-about-smishing-scam-how-to-protect-against-this-online-threat-2654285.html
- https://www.the420.in/protect-against-smishing-scams-cert-in-advice-online-safety/

Introduction
The rapid advancement of artificial intelligence (AI) technology has sparked intense debates and concerns about its potential impact on humanity. Sam Altman, CEO of AI research laboratory OpenAI and known as the father of the AI chatbot ChatGPT, holds a complex position, recognising both the existential risks AI poses and its potential benefits. In a world tour to raise awareness about AI risks, Altman advocates for global cooperation to establish responsible guidelines for AI development. Artificial intelligence has become a topic of increasing interest and concern as technology advances. Developing sophisticated AI systems raises many ethical questions, including whether they will ultimately save or destroy humanity.
Addressing Concerns
Altman engages with various stakeholders, including protesters who voice concerns about the race toward artificial general intelligence (AGI). Critics argue that focusing on safety rather than pushing AGI development would be a more responsible approach. Altman acknowledges the importance of safety progress but believes capability progress is necessary to ensure safety. He advocates for a global regulatory framework similar to the International Atomic Energy Agency, which would coordinate research efforts, establish safety standards, monitor computing power dedicated to AI training, and possibly restrict specific approaches.
Risks of AI Systems
While AI holds tremendous promise, it also presents risks that must be carefully considered. One of the major concerns is the development of artificial general intelligence (AGI) without sufficient safety precautions. AGI systems with unchecked capabilities could potentially pose existential risks to humanity if they surpass human intelligence and become difficult to control. These risks include the concentration of power, misuse of technology, and potential for unintended consequences.
There are also fears surrounding AI systems’ impact on employment. As machines become more intelligent and capable of performing complex tasks, there is a risk that many jobs will become obsolete. This could lead to widespread unemployment and economic instability if steps are not taken to prepare for this shift in the labour market.
While these risks are certainly cause for concern, it is important to remember that AI systems also have tremendous potential to do good in the world. By carefully designing these technologies with ethics and human values in mind, we can mitigate many of the risks while still reaping the benefits of this exciting new frontier in technology.

Open AI Systems and Chatbots
Open AI systems like ChatGPT and chatbots have gained popularity due to their ability to engage in natural language conversations. However, they also come with risks. The reliance on large-scale training data can lead to biases, misinformation, and unethical use of AI. Ensuring open AI systems’ safety and responsible development mitigates potential harm and maintains public trust.
The Need for Global Cooperation
Sam Altman and other tech leaders emphasise the need for global cooperation to address the risks associated with AI development. They advocate for establishing a global regulatory framework for superintelligence. Superintelligence refers to AGI operating at an exceptionally advanced level, capable of solving complex problems that have eluded human comprehension. Such a framework would coordinate research efforts, enforce safety standards, monitor computing power, and potentially restrict specific approaches. International collaboration is essential to ensure responsible and beneficial AI development while minimising the risks of misuse or unintended consequences.
Can AI Systems Make the World a Better Place: Benefits of AI Systems
AI systems hold many benefits that can greatly improve human life. One of the most significant advantages of AI is its ability to process large amounts of data at a rapid pace. In industries such as healthcare, this has allowed for faster diagnoses and more effective treatments. Another benefit of AI systems is their capacity to learn and adapt over time. This allows for more personalised experiences in areas such as customer service, where AI-powered chatbots can provide tailored solutions based on an individual’s needs. Additionally, AI can potentially increase efficiency in various industries, from manufacturing to transportation. By automating repetitive tasks, human workers can focus on higher-level tasks that require creativity and problem-solving skills. Overall, the benefits of AI systems are numerous and promising for improving human life in various ways.
We must remember the impact of AI on education. It has already started to show its potential by providing personalised learning experiences for students at all levels. With the help of AI-driven systems like intelligent tutoring systems (ITS), adaptive learning technologies (ALT), and educational chatbots, students can learn at their own pace without feeling overwhelmed or left behind.
While there are certain risks associated with the development of AI systems, there are also numerous opportunities for them to make our world a better place. By harnessing the power of these technologies for good, we can create a brighter future for ourselves and generations to come.

Conclusion
The AI revolution presents both extraordinary opportunities and significant challenges for humanity. The benefits of AI, when developed responsibly, have the potential to uplift societies, improve quality of life, and address long-standing global issues. However, the risks associated with AGI demand careful attention and international cooperation. Governments, researchers, and industry leaders must work together to establish guidelines, safety measures, and ethical standards to navigate the path toward AI systems that serve humanity’s best interests and safeguard against potential risks. By taking a balanced approach, we can strive for a future where AI systems save humanity rather than destroy it.

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry, and has been seen to affect everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation which may harm reputations, disrupt businesses, and reduce consumer trust. MSMEs are particularly susceptible since they have minimal resources at their disposal and cannot afford to invest in the kind of talent, technology and training that is needed for a business to be able to protect itself in today’s digital-first ecosystem. Mis/disinformation for MSMEs can arise from internal communications, supply chain partners, social media, competitors, etc. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage which can include financial loss, reputational damage, operational damages, and regulatory noncompliance. Various assessment methodologies can be used to analyze the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship program of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a brand new law aimed at protecting personal data. Fact-check units (FCUs) are government and private independent bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific and lack detailed guidelines, which limits the impact of awareness initiatives on misinformation and leaves MSMEs with an insufficient support structure to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation for MSMEs, recommendations include creating a dedicated Misinformation Helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations for the identification and curbing of misinformation.
Organisational recommendations include: information verification protocols so that consumers of information verify critical information before acting upon it; regular employee training on identifying and managing misinformation; a crisis management plan to deal with misinformation crises; and collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions, such as AI and ML tools for detecting and flagging potential misinformation, fact-checking tools, and cyber security measures to prevent misinformation spreading via digital channels.
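To make the AI/ML recommendation concrete, the sketch below trains a toy text classifier that flags suspicious claims. Everything here is illustrative: the four training examples are invented placeholders, and a real deployment would need a large labelled corpus, careful evaluation, and human review of every flag.

```python
# Toy sketch of an ML-based misinformation flagger, assuming scikit-learn
# is available. Training data is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Official notice: verified product recall details on company site",
    "Regulator confirms policy update in press release",
    "Shocking! Company X secretly poisons its products, share now",
    "Leaked! Government to shut down all small businesses tomorrow",
]
labels = ["credible", "credible", "suspect", "suspect"]

# TF-IDF features feeding a Naive Bayes classifier -- a deliberately
# simple baseline, not a state-of-the-art misinformation detector.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

flag = model.predict(["Share now! Secret plan to shut down businesses"])[0]
print(flag)
```

In practice a flag like this should route the item to a human fact-checker or an FCU rather than trigger automatic action, consistent with the verification protocols recommended above.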
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and gaps, and providing actionable recommendations. The implementation strategy for policies to counter misinformation in the MSME sector can begin with pilot programs in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continuous improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx