#FactCheck - AI-Generated Clip of Lion Carrying Woman Shared as Real Incident
Executive Summary
A video circulating on social media shows a lion carrying away a woman who was washing clothes near a pond. Users are sharing the clip claiming it depicts a real incident. However, research by CyberPeace found the viral claim to be false. The research revealed that the video is not real but AI-generated.
Claim
A user on Facebook shared the viral video, claiming that a lion attacked and carried away a woman from a pond while she was washing clothes. The link to the post and its archived version are provided below.

Fact Check:
Upon closely examining the viral clip, we noticed several visual inconsistencies that raised suspicion about its authenticity. The video was then analyzed using the AI-detection tool Sightengine. According to the analysis results, the viral video was identified as AI-generated.
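A check like the one described above can be scripted against Sightengine's API. The sketch below is a hypothetical example: the `genai` model name and the `type.ai_generated` response field are assumptions based on Sightengine's published documentation, and a real call requires valid API credentials and network access.

```python
import json
import urllib.parse
import urllib.request

def ai_generated_score(result: dict) -> float:
    # Pull the 0-1 "AI-generated" probability out of a Sightengine-style
    # response dict; the type.ai_generated layout is an assumption based on
    # the vendor's docs and may differ in practice.
    return float(result.get("type", {}).get("ai_generated", 0.0))

def check_image(image_url: str, api_user: str, api_secret: str) -> dict:
    # Hypothetical call to Sightengine's image-check endpoint with the
    # assumed "genai" model; needs real credentials to run.
    query = urllib.parse.urlencode({
        "url": image_url,
        "models": "genai",
        "api_user": api_user,
        "api_secret": api_secret,
    })
    url = f"https://api.sightengine.com/1.0/check.json?{query}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

# Parsing a sample (made-up) response without touching the network:
sample = {"status": "success", "type": {"ai_generated": 0.93}}
score = ai_generated_score(sample)
print("likely AI-generated" if score > 0.5 else "likely authentic")
```

Keeping the response parsing separate from the network call makes the scoring logic easy to test against saved responses.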

Conclusion
The research confirms that the viral video does not depict a real incident. The clip is digitally created using artificial intelligence and is being falsely shared as a genuine event.

Introduction
In today's relentless current of information, where social media is oftentimes both the stage and the playwright, the line between reality and spectacle can become distressingly blurry. In such a virtual Pantheon, the conflation of truth and fiction has recently surfaced in a particularly contentious instance. The central figure is Poonam Pandey, an entertainment personality known for transgressing traditional contours of celebrity boldness. Pandey found herself ensnared in a narrative of her own orchestration—a grim hoax purporting she had succumbed to cervical cancer. This deceptive foray, rather than awakening public consciousness as intended, spiralled into an ominous fable about the malignant spread of misinformation and the profound moral dilemmas it engenders.
The Deception
The tapestry of this event was woven with threads of tragedy and deception, framing Pandey both as the tragic hero and the ill-fated architect of a spectacle that unfolded with a haunting familiarity evocative of ancient Greek dramas. The monumental pillar of social media, on what seemed to be an ordinary day, was shattered by the startling declaration of Pandey's untimely passing. The statement, as bereft of nuance as it was devastating, proclaimed: 'We are deeply grieved to announce the loss of our cherished Poonam to cervical cancer.' The emotional pulse of the Indian Film Industry was jolted; waves of homage inundated the digital space, each tribute a poignant echo of the shock that rippled through her fanbase. Yet the crux of the matter had yet to be unveiled.
As the world grappled with this news, the scenario took an unforeseen detour. Poonam Pandey made a re-entrance onto the world stage, alive, revealing her alleged demise to be nothing more than a macabre masquerade. The public's reaction to this revelation was a stratified symphony of emotions—indignation mingled with disbelief, with an underlying crescendo of betrayal. Pandey's defense postured her act as a last resort to draw attention to the silent yet pervasive threat of cervical cancer. In the ensuing mire of reactions, an inescapable quandary emerged: is it ever permissible to employ deceit for the sake of presumed publicity?
The Chaos
Satyajeet Tambe, an esteemed Maharashtra legislator, emerged amidst the churning chaos as a paragon of principled reason. Advocating that such mendacious stunts, playing the chords of public emotion and adulterating truth, should be met with legal repercussions, Tambe called for judicious action against Pandey. His imploration resonated with the necessity of integrity in the public domain, stating, 'The announcement of an influencer/model succumbing to cervical cancer should not be wielded as a tool for awareness.' His pronouncement sent reverberations through the collective conscience, echoing the need for accountability in the face of such transgressions.
Repercussion
The All Indian Cine Workers Association, a custodian of the film industry's values, also voiced its reproach. They urged for an FIR to be lodged against Poonam Pandey, underlining their sentiments with disappointment and a keen sense of betrayal. Within their condemnation lay a profound recognition of the elevated emotional investment inherent in their industry, an industry where the reverence for life and the abhorrence of deceit intertwine, making such lowly stunts anathema.
This spectacle, while unique in the temerity of its execution, mirrors the broader pathological wave of misinformation that corrodes the foundations of our digital era: the malady of fake news. When delineated, fake news finds its essence as information chiselled specifically to deceive, a form of communication that is not merely slanted but entirely devoid of authenticity, manufactured with nefarious intent. A protean adversary, fake news adeptly masquerades as trustworthy news, ensnaring the unsuspecting in its tendrils. Its purveyors span a spectrum—from shadowy figures to ostensibly benign social media accounts—all contributing to a dystopian fabric where truth is persistently imperilled.
The conjurers of these illusions are, in a sense, cunning illusionists ensconced behind curtains of anonymity or masquerading under a cloak of transparency. They craft elaborate illusions devoid of truth, but dripping with sufficient plausibility to ensnare those who yearn for simplicity in an increasingly complex world. Destabilizing forces, such as hyper-partisan media outlets, regurgitate a mixture of fabricated 'facts' and distortions, deliberately smudging the once-clear line between empirical truth and partisan fabrication.
The Aftermath
The Poonam Pandey episode stands as a harrowing beacon of the ethical abyss we face. It compels us to confront the irony of utilising falsity to raise awareness for laudable causes and to consider the ramifications for public figures influencing the dissemination of information. The tempest around this event demonstrates the potent gravitational pull of information and the overarching need for the conscientious stewardship of its power.
Yet, as we sail through the murky waters of the digital expanse, where the allure of sensationalism and clickbait headlines is ever-present, our vigilance must not wane. The imperative of truth cannot come at the altar of awareness or sensationalism. The sanctity of fact anchors our understanding of reality; devoid of it, we are adrift in an ocean of confusion and misinformation.
In the dust settled after the Poonam Pandey debacle, the contours of a new discourse have emerged, harboring vital interrogations. How do we balance the drive for poignant awareness initiatives against the cardinal principle of truth? What mechanisms can ensure that health campaigns and their noble aspirations are not tainted by the allure of deception? Addressing these queries is not a solitary task for policymakers or influencers but, indeed, a collective societal responsibility that will define our cultural ethics and the legacy we wish to preserve.
Conclusion
As we contemplate the broader implications of this incident, let us not allow its sensational nature to eclipse the very real and pressing issue of cervical cancer—a condition that, beyond the glare of controversy, continues to shadow lives with its lethal silence. Instead, let our focus pivot towards tangible, truth-driven efforts aimed at education and empowerment. Truth, after all, is the beacon that dispels the murky shadows of ignorance and guides us toward enlightenment and healing.
References
- https://www.hindustantimes.com/india-news/poonam-pandey-in-trouble-as-maharashtra-politician-seeks-case-for-faking-her-death-101707005742992.html
- https://www.nagpurtoday.in/state-mlc-tambe-demands-police-action-against-poonam-pandey-for-faking-her-death/02051417
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to differentiate between genuine, altered, and fake content, because these AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations most affected by misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that iteratively refine the fake content: a generator is built and trained to produce the desired output, and the process is repeated until the result looks realistic and nearly flawless. Deepfake videos are created using several approaches, some of which are:
- Lip syncing: This is the most common technique used in deepfakes. Voice recordings are mapped onto the video so that the person appearing in it seems to say something other than what was originally said.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person's voice based on their vocal patterns, refining the output until the desired result is generated.
- Deepfakes have become so serious that the technology could be used by bad actors or cyber-terrorist squads to advance their geopolitical agendas. In the past few years, the number of cases has doubled, targeting children, women, and popular faces.
- Greater risk: cases of deepfake abuse have risen in the last few years; by the end of 2022, 96% of reported cases targeted women and children, according to a survey.
- Every 60 seconds, a deepfake pornographic video is created. Creation is now quicker and more affordable than ever: it takes less than 25 minutes and requires just one clean face image.
- The connection to deepfakes is that people can become targets of "revenge porn" without the publisher ever possessing sexually explicit photographs or films of the victim. These may be made using any number of random pictures or images collected from the internet to achieve the same result. This means that almost anyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
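The adversarial refinement loop behind GANs, where a generator improves until its output fools a discriminator, can be sketched on a toy one-dimensional problem. This is purely illustrative (all distributions, learning rates, and parameter names are made up, and real deepfake models operate on images or audio at vastly larger scale):

```python
import numpy as np

# Toy 1-D GAN: generator G(z) = a*z + c tries to mimic samples from N(4, 0.5);
# discriminator D(x) = sigmoid(w*x + b) tries to tell real from fake.
# Gradients are derived by hand; this only illustrates the adversarial loop.

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

w, b = 0.0, 0.0        # discriminator parameters
a, c = 1.0, 0.0        # generator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)   # "genuine" samples
    z = rng.normal(0.0, 1.0, batch)      # noise input
    fake = a * z + c                     # generator output

    # Discriminator step: minimize -[log D(real) + log(1 - D(fake))]
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: minimize -log D(fake)  (non-saturating loss)
    d_fake = sigmoid(w * (a * z + c) + b)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_c = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    c -= lr * grad_c

# Since the noise has mean 0, the generator's output mean is roughly c,
# which the adversarial loop pushes toward the real mean of 4.0.
print(f"generator output mean drifted to ~{c:.2f} (real mean: 4.0)")
```

Each round, the discriminator gets better at spotting fakes and the generator gets better at fooling it; this is the "repeat until realistic" refinement the section describes.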
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have three security implications: at the international level, strategic deepfakes have the potential to destroy a precarious peace; at the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern; and at the personal level, deepfakes can be used to target individuals. Women suffer disproportionately from exposure to sexually explicit content as compared to men, and they are more frequently threatened.
Policy Consideration
With cases of deepfake abuse against women and children on the rise, policymakers need to be aware that deepfakes are also utilized for a variety of valid objectives, including artistic and satirical works. Therefore, simply banning deepfakes is not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer. Deepfake generation is an advanced technology, and its misuse is a crime.
What are the existing rules to combat deepfakes?
It's worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these guidelines can result in up to three years in jail and a Rs 1 lakh fine. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both the creators and users, educating individuals about content limits, and providing information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate information. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring technology can be used in a better manner.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/

Introduction
To combat the problem of annoying calls and SMS, telecom regulator TRAI has urged service providers to create a uniform digital platform in two months that will allow them to request, maintain, and withdraw customers’ approval for promotional calls and messages. In the initial stage, only subscribers will be able to initiate the process of registering their consent to receive promotional calls and SMS, and later, business entities will be able to contact customers to seek their consent to receive promotional messages, according to a statement issued by the Telecom Regulatory Authority of India (TRAI) on Saturday.
TRAI Directs Telecom Providers to Set Up Digital Platform
TRAI has now directed all access providers to develop and deploy the Digital Consent Acquisition (DCA) facility for creating a unified platform and process to digitally register customers’ consent across all service providers and principal entities. Consent is received and maintained under the current system by several key entities such as banks, other financial institutions, insurance firms, trading companies, business entities, real estate businesses, and so on.
"The purpose, scope of consent, and the principal entity or brand name shall be clearly mentioned in the consent-seeking message sent over the short code," according to the statement.
It stated that only approved online or app links, call-back numbers, and so on will be permitted to be used in consent-seeking communications.
TRAI issued guidelines to guarantee that all voice-based Telemarketers are brought under a single Distributed ledger technology (DLT) platform for more efficient monitoring of nuisance calls and unwanted communications. It also instructs operators to actively deploy AI/ML-based anti-phishing systems as well as to integrate tech solutions on the DLT platform to deal with malicious calls and texts.
"TRAI has issued two separate Directions to Access Service Providers under TCCCPR-2018 (Telecom Commercial Communications Customer Preference Regulations) to ensure that all promotional messages are sent through Registered Telemarketers (RTMs) using approved Headers and Message Templates on Distributed Ledger Technologies (DLT) platform, and to stop misuse of Headers and Message Templates," the regulator said in a statement.
Users can already block telemarketing calls and texts by texting 1909 from their registered mobile number. By dialing 1909, customers can opt out of getting advertising calls by activating the do not disturb (DND) feature.

Telecom providers operate DLT platforms, and businesses involved in sending bulk promotional or transactional SMS must register by providing their company information, including sender IDs and SMS templates.
According to the instructions, telecom companies will send consent-seeking messages using the common short code 127. The goal, extent of consent, and primary entity/brand name must be clearly stated in the consent-seeking message delivered via the shortcode.
TRAI stated that only whitelisted URLs/APKs (Android package kits file format)/OTT links/call back numbers, etc., shall be used in consent-seeking messages.
Telcos must "ensure that promotional messages are not transmitted by unregistered telemarketers or telemarketers using telephone numbers (10-digit numbers)." Telecom providers have been urged to act against all erring telemarketers in accordance with the applicable regulations and legal requirements.
Users can, however, refuse to receive consent-seeking messages initiated by any principal entity; telcos have been urged to create an SMS/IVR (interactive voice response)/online facility for this purpose.
According to TRAI’s timeline, the consent-taking process by principal entities will begin on September 1. According to a nationwide survey conducted by LocalCircles, 66% of mobile users continue to receive three or more bothersome calls per day, the majority of which originate from personal mobile numbers.
New types of scams keep surfacing on the internet, such as the WhatsApp international call scam. The latest scam involves impersonating the Delhi Police: the scammers pretend to be Delhi Police officials, call users from 9-digit numbers, and ask for their personal details.
A recent scam
A Twitter user reported receiving an automated call from +91 96681 9555, stating, “This call is from Delhi Police.” It went on to ask her to stay in the queue, since some of her documents needed to be picked up. A man then came on the line, claiming to be a sub-inspector at New Delhi’s Kirti Nagar police station. He asked whether she had recently misplaced her Aadhaar card, PAN card, or ATM card, to which she replied ‘no’. The fraudster then asked her to validate the final four digits of her card, claiming the police had found a card with her name on it. Many other people have tweeted about similar calls.
Such scams are constantly increasing; earlier, too, these scammers claimed to be the Delhi Police, asked for account details, and called from 9-digit numbers to defraud people. TRAI’s new guidelines requiring telecom providers to obtain consent for promotional calls and messages should help curb such scams.
e-KYC is also an essential requirement, as it offers a more secure identity-verification process in an increasingly digital age, using biometric technology to provide quick results.

Conclusion
The aim is to prevent unwanted calls and messages from being sent to customers via digital channels without their permission. Once this platform is implemented, an organization will be able to send promotional calls or messages only with the customer’s explicit approval. Companies use a variety of methods to notify customers about their products, including phone calls, text messages, emails, and social media; as a result, customers are constantly bombarded with the same calls and messages. With scams on the rise, TRAI’s new guidelines should also help curb scam calls, and digital KYC prevents SIM fraud while offering a more secure identity-verification method.