#FactCheck - AI-Generated Video of Monkey Falsely Linked to Hanuman Devotion
A video being widely shared on social media shows a monkey that users claim is immersed in devotion to Lord Hanuman. The clip is circulating with the assertion that the monkey was seen participating in a Hanuman Aarti. CyberPeace Foundation's research found that the viral claim is fake: our investigation revealed that the video is not real and was generated using artificial intelligence tools.
Claim
On January 6, 2026, Facebook users shared the viral video claiming, “A monkey was seen immersed in devotion during Hanuman Aarti.”
- Post link: https://www.facebook.com/reel/1261813845766976
- Archived link: https://archive.ph/anid5
Screenshots of the post can be seen below.

FactCheck:
When we closely examined the viral video, we noticed several visual inconsistencies. These anomalies raised the suspicion that the video might be AI-generated. To verify this, we scanned the video with the AI detection tool Hive Moderation, which assessed it as 97 percent likely to be AI-generated.

Further, we analysed the video using another AI detection tool, Sightengine, whose assessment indicated a 98 percent probability that the viral video is AI-generated.
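For readers who want to run a similar check themselves, below is a minimal sketch of how one might query Sightengine's API for an AI-generated-media score on a single extracted frame. The endpoint, the `genai` model name, and the response fields reflect our reading of Sightengine's public documentation and should be verified against the current docs; the credentials and file name are placeholders.

```python
# Sketch: checking a single video frame against Sightengine's AI-generated
# media model. Endpoint, parameter and response-field names (models=genai,
# type.ai_generated) are assumptions based on Sightengine's public docs;
# verify against the current documentation before relying on this.
import requests

API_USER = "your_api_user"      # placeholder credentials
API_SECRET = "your_api_secret"

def ai_generated_score(frame_path: str) -> float:
    """Return the tool's 0-1 score that the frame is AI-generated."""
    with open(frame_path, "rb") as f:
        resp = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["type"]["ai_generated"]

if __name__ == "__main__":
    score = ai_generated_score("viral_frame.jpg")
    print(f"AI-generated likelihood: {score:.0%}")  # e.g. 98%
```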

Conclusion
Our investigation confirms that the viral video claiming to show a monkey immersed in devotion to Lord Hanuman is AI-generated and not real. The claim circulating on social media is false and misleading.
Related Blogs

Introduction
On March 12, 2024, the Ministry of Corporate Affairs (MCA) released the draft Digital Competition Bill, which seeks to curb anti-competitive practices of tech giants through ex-ante regulation. The draft Bill is to apply to ‘Core Digital Services,’ with the Central Government having the authority to update the list periodically. The proposed list in the Bill encompasses online search engines, online social networking services, video-sharing platforms, interpersonal communications services, operating systems, web browsers, cloud services, advertising services, and online intermediation services.
The Digital Competition Law Report, prepared by the Committee on Digital Competition Law and presented to Parliament in the second week of March 2024, centres on a recommendation to enact new legislation called the ‘Digital Competition Act,’ intended to strike a balance between certainty and flexibility. The report identified ten anti-competitive practices relevant to digital enterprises in India, including anti-steering, platform neutrality/self-preferencing, bundling and tying, data usage (use of non-public data), pricing/deep discounting, exclusive tie-ups, search and ranking preferencing, restricting third-party applications, and advertising policies.
Key Take-Aways: Digital Competition Bill, 2024
- The Bill lays down qualitative and quantitative criteria for designating an enterprise as a Systemically Significant Digital Enterprise (SSDE) if it meets any of the specified thresholds (see the sketch after this list).
- Financial thresholds are assessed in each of the immediately preceding three financial years, such as turnover in India, global turnover, gross merchandise value in India, or global market capitalisation.
- User thresholds are likewise assessed in each of the immediately preceding three financial years in India, for example whether the core digital service provided by the enterprise has at least 1 crore end users or at least 10,000 business users.
- The Commission may make the designation based on other factors, such as the size and resources of an enterprise, the number of business or end users, market structure and size, the scale and scope of an enterprise's activities, and any other relevant factor.
- An enterprise has 90 days to notify the CCI that it qualifies as an SSDE. It must also notify the Commission of other enterprises within its group that are directly or indirectly involved in the provision of Core Digital Services, as Associate Digital Enterprises (ADEs); the designation holds for three years.
- It prescribes obligations for SSDEs and their ADEs upon designation. The enterprise must comply with certain obligations regarding Core Digital Services, and non-compliance with the same shall result in penalties. Enterprises must not directly or indirectly prevent or restrict business users or end users from raising any issue of non-compliance with the enterprise’s obligations under the Act.
- SSDEs must avoid favouritism: neither the SSDE, its related parties, nor third parties engaged by it for the manufacture and sale of products or provision of services may be favoured over third-party business users on the Core Digital Service in any manner.
- The Commission will have the same powers as are vested in a civil court under the Code of Civil Procedure, 1908, when trying a suit.
- The penalty for non-compliance without reasonable cause may extend to Rs 1 lakh for each day the non-compliance continues, subject to a maximum of Rs 10 crore. Continued non-compliance may further attract imprisonment for a term of up to three years, or a fine of up to Rs 25 crore, or both. The Commission may also impose a penalty on an enterprise (not exceeding 1% of its global turnover) for providing incorrect, incomplete, or misleading information, or for failing to provide information.
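To make the designation mechanics above concrete, here is a minimal sketch of the threshold logic. Only the user figures (1 crore end users, 10,000 business users) and the three-year look-back come from the summary above; the financial test is abstracted into a boolean input because the exact monetary figures should be read from the draft text, and whether the financial and user tests combine with AND or OR should likewise be verified against the Bill (the sketch assumes both must hold).

```python
# Hypothetical sketch of the SSDE designation test described above.
# The financial thresholds are abstracted into a per-year boolean because the
# exact figures should be taken from the draft Bill; combining the financial
# and user tests with AND is an assumption to verify against the Bill's text.
from dataclasses import dataclass

CRORE = 10_000_000  # 1 crore = 10 million

@dataclass
class FinancialYear:
    end_users: int
    business_users: int
    meets_financial_threshold: bool  # turnover / GMV / market-cap test

def qualifies_as_ssde(preceding_three_years: list[FinancialYear]) -> bool:
    """True if the enterprise meets the tests in each of the three years."""
    assert len(preceding_three_years) == 3
    user_test = all(
        y.end_users >= 1 * CRORE or y.business_users >= 10_000
        for y in preceding_three_years
    )
    financial_test = all(y.meets_financial_threshold for y in preceding_three_years)
    return user_test and financial_test

# Example: 1.2 crore end users and the financial test met in all three years
years = [FinancialYear(12_000_000, 5_000, True) for _ in range(3)]
print(qualifies_as_ssde(years))  # True
```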
Suggestions and Recommendations
- The ex-ante model of regulation needs to be examined for the Indian scenario, and studies need to be conducted on how it has worked in other jurisdictions, such as the EU.
- The Bill should aim squarely at fostering fair competition by preventing monopolistic practices in digital markets. A clear functional distinction from the existing Competition Act, 2002 needs to be drawn so that the regulations do not overlap and enterprises do not face double jeopardy.
- Restrictions on tying and bundling and data usage have been shown to negatively impact MSMEs that rely significantly on big tech to reduce operational costs and enhance customer outreach.
- Clear definitions of "dominant position" and "anti-competitive behaviour" in the digital context are essential for effective enforcement.
- Encouraging innovation while safeguarding consumer data privacy in consonance with the DPDP Act should be the aim. Promoting interoperability and transparency in algorithms can prevent discriminatory practices.
- Regular reviews and stakeholder consultations will ensure the law adapts to rapidly evolving technologies.
- Collaboration with global antitrust bodies would enhance cross-border regulatory coherence and effectiveness.
Conclusion
A competition law focused exclusively on digital enterprises is the need of the hour, and hence the Committee recommended enacting the Digital Competition Act to enable the CCI to selectively regulate large digital enterprises. The proposed legislation should regulate only those enterprises that have a significant presence in, and the ability to influence, the Indian digital market. Its impact should be confined to digital enterprises and should not encroach upon matters outside the digital arena. India's proposed Digital Competition Bill aims to promote competition and fairness in the digital market by addressing the anti-competitive practices and abuses of dominant position prevalent in the digital business space. The Ministry of Corporate Affairs has received 41 pages of public feedback on the draft, which is expected to be tabled before Parliament next year.
References
- https://www.medianama.com/wp-content/uploads/2024/03/DRAFT-DIGITAL-COMPETITION-BILL-2024.pdf
- https://prsindia.org/files/policy/policy_committee_reports/Report_Summary-Digital_Competition_Law.pdf
- https://economictimes.indiatimes.com/tech/startups/meity-meets-india-inc-to-hear-out-digital-competition-law-concerns/articleshow/111091837.cms?from=mdr
- https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open
- https://www.barandbench.com/law-firms/view-point/digital-competition-laws-beginning-of-a-new-era
- https://www.linkedin.com/pulse/policy-explainer-digital-competition-bill-nimisha-srivastava-lhltc/
- https://www.lexology.com/library/detail.aspx?g=5722a078-1839-4ece-aec9-49336ff53b6c

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI), in development for over two years, sets harmonised rules across the EU and amends key regulations and directives to ensure a robust framework for AI technologies. It enters into force 20 days after publication, on 1 August 2024, across all 27 EU Member States, and the majority of its provisions become enforceable on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, such as biometric categorisation and untargeted scraping of facial images; systems that try to read emotions are banned in the workplace and schools, as are social scoring systems, and the use of predictive policing tools is prohibited in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines between now and then as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
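As a quick illustration of that headline fine ceiling (the top tier, which applies to the most serious violations; lower tiers exist for other breaches), the calculation is simply the greater of the two amounts:

```python
# Fine ceiling described above: EUR 35 million or 7% of total worldwide
# annual turnover, whichever is higher.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover faces a ceiling of EUR 140 million;
# one with EUR 100 million turnover still faces the EUR 35 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
print(f"{max_fine_eur(100_000_000):,.0f}")    # 35,000,000
```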
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI, such as social scoring systems and manipulative AI, while the regulation mostly addresses high-risk AI systems (the tiers are illustrated in the sketch after this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users know they are interacting with AI, as with chatbots and deepfakes. The Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The bulk of the obligations fall on providers (developers) of high-risk AI systems that intend to place such systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country; they also apply to third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with the copyright requirement and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the relevant obligations: the information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialise throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
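The sketch below restates the four-tier risk taxonomy from this section as a simple lookup. The example use-case assignments are illustrative ones drawn from this article, not an authoritative mapping under the Act.

```python
# Minimal sketch of the four-tier risk taxonomy summarised above.
# Tier assignments are illustrative examples taken from this article.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to obligations (data quality, anti-bias, documentation)"
    LIMITED = "allowed, subject to transparency duties (disclose AI interaction)"
    MINIMAL = "free use"

EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "emotion recognition at the workplace": RiskTier.UNACCEPTABLE,
    "AI used in employment or education": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "deepfake content (labelling duty)": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```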
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the transparency rules for general-purpose AI systems 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is as follows (a short sketch after the list derives these dates from the offsets):
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) and II (prohibited AI systems) apply; the prohibitions on certain AI systems take effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) and its corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) and its corresponding obligations apply.
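The dates above can be reconstructed from the entry-into-force date and the month offsets named in the Act. Note that the Act itself fixes the exact statutory dates (e.g., 2 February 2025, 2 August 2026); the arithmetic below is only an approximation from the offsets.

```python
# Reconstructing the phased timeline above from the 1 August 2024
# entry-into-force date and the 6/12/24/36-month offsets.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {
    "Prohibitions (Chapters I & II)": 6,
    "GPAI, governance and penalty provisions": 12,
    "Act generally applicable": 24,
    "Article 6(1) high-risk obligations": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: {ENTRY_INTO_FORCE + relativedelta(months=months):%B %Y}")
# Prints February 2025, August 2025, August 2026, August 2027
```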
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The Act also has extraterritorial application: it can apply to companies not established in the EU, and to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models, and it adopts a risk-based approach to AI governance, categorising potential risks into four tiers: unacceptable, high, limited, and minimal, with stiff penalties for non-compliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and significantly affect the internal market. For high-risk AI systems, it addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards. We encourage regulatory frameworks that balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies within the Act's scope will need to align their practices accordingly, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging, and implementing regulatory checks and securing compliance from the companies concerned pose a conundrum; in the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning; it is used to generate fake news and commit financial fraud, among other wrongdoings. Using artificial intelligence, cybercriminals overlay a digital composite onto an already-existing video, picture, or audio clip. The term was coined in 2017 by an anonymous Reddit user who called himself 'deepfakes'.
Deepfakes work on a combination of AI and machine learning (ML), which makes them hard for Web 2.0 applications to detect and makes it almost impossible for a layperson to tell whether an image or video is genuine or created using deepfake technology. In recent times, we have seen a wave of AI-driven tools impact all industries and professions across the globe. Deepfakes are often created to spread misinformation, and there lies a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine's president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-driven misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms where children are most engaged.
- Threat to Digital Privacy: Deepfakes are created from existing videos, so bad actors often harvest photos and videos from social media accounts to create them, directly threatening the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: Most nations currently lack a concrete policy addressing deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it keeps Indian netizens from understanding the technology, which results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, both to stay protected in the future and to address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound is in congruence with the actions or gestures on screen.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. You can often spot irregularities there because, in most cases, it is created using virtual effects, so deepfakes tend to have an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media, and as a preventive measure, fact-check any information or post before sharing it with your contacts.
- AI Tools: When in doubt, check it out: never refrain from using deepfake detection tools such as Sentinel, Intel's real-time deepfake detector FakeCatcher, WeVerify, and Microsoft's Video Authenticator to analyse videos, combating technology with technology (a simple frame-sampling sketch follows this list).
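As a practical starting point for the tool-based checks above, the sketch below samples frames from a suspect video so they can be inspected manually or submitted to a detection tool. It uses OpenCV (`pip install opencv-python`); the file names are illustrative.

```python
# Sketch: sampling frames from a suspect video for manual inspection or
# submission to a deepfake/AI-content detection tool.
import cv2

def sample_frames(video_path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame per interval and return the saved file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    step = max(1, int(fps * every_n_seconds))
    saved, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            name = f"frame_{idx:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        idx += 1
    cap.release()
    return saved

print(sample_frames("suspect_video.mp4"))  # e.g. ['frame_00000.jpg', ...]
```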
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not specifically recognised by law in India, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy; the maximum penalty for this violation is a fine of Rs 2 lakh or three years in prison. With the DPDP Act becoming applicable in 2023, the creation of deepfakes directly affects an individual's right to digital privacy, and platforms are required under the Intermediary Guidelines to exercise caution against the dissemination and publication of misinformation through deepfakes. Beyond this, the only legal remedies available are indirect provisions of the Indian Penal Code covering the sale and dissemination of derogatory publications, songs, and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of advancements in Web 3.0 and hence just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they stay safe in the future. At the same time, developing and developed nations alike need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.