#FactCheck - AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A viral video circulating on social media appears to show comedian Samay Raina casually making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, who looks visibly unsettled, during the shoot of a Kaun Banega Crorepati (KBC) Influencer Special episode. The joke plays on gossip and rumours of unspoken tensions between the two Bollywood legends. Our research found that the video is artificially manipulated and does not reflect genuine content: the specific joke in the video does not appear in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, emphasising the need for greater public vigilance and care in verifying online information.

Claim:
The claim in the video suggests that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you has a Rekha (line)," playing on the Hindi word "rekha", which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is available on the Sony SET India YouTube channel. Our analysis confirmed that no part of the episode shows comedian Samay Raina cracking a joke about actress Rekha. Technical analysis using the Hive Moderation tool further found that the viral clip is AI-generated.
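For readers curious how such screening works in practice, below is a minimal sketch of submitting a clip to a third-party AI-content-detection service. The endpoint URL, authentication scheme, and response fields are illustrative assumptions for demonstration, not Hive Moderation's documented API.

```python
# Minimal sketch: screening a clip through a hypothetical third-party
# AI-content-detection API. The endpoint, auth scheme, and response
# fields below are illustrative assumptions, not Hive's documented API.
import requests

API_URL = "https://api.example-moderation.com/v1/detect"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder credential

def ai_generated_score(video_path: str) -> float:
    """Upload a clip and return the service's AI-generated likelihood."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()["ai_generated_score"]  # assumed response field

if __name__ == "__main__":
    score = ai_generated_score("viral_clip.mp4")
    print(f"AI-generated likelihood: {score:.2f}")
    if score > 0.8:  # arbitrary illustrative threshold
        print("Likely synthetic; corroborate against the original episode.")
```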

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such content poses a serious threat of online manipulation, which makes it all the more important to fact-check any news against credible sources before sharing it. Promoting media literacy will be key to combating misinformation at a time when AI-generated content is so easily misused.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and companies ranging from small financial firms to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfakes report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in Ghaziabad, UP, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfakes report estimates that at least 98 per cent of all deepfakes are porn and 99 per cent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn conducted last year, the study said 63 per cent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real humans and another featuring computer-generated, hyper-realistic faces of non-existent people. The first category is particularly concerning and is created by superimposing the faces of real people on existing pornographic images and videos, a task made simple by AI tools.
During the investigation, platforms hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality 15-second fake porn video on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous disclaimers such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is lost on no one, especially when the platforms host thousands of non-consensual deepfake pornography videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whomever a user wants, including taking requests for celebrities. To encourage creators further, these platforms enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and the targets are not only celebrities: ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others manually collect money through the personal PayPal accounts of their employees or stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently working towards dedicated legislation to address issues arising out of deepfakes. Existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that sometimes involves several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, especially in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, which includes anything a visitor posts on a website or social media page.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social-sharing mechanisms. The intent to spread false information is closely interwoven with the analysis of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news benefit from the reach of social networks, and from technology that enables faster distribution and can make it more difficult to distinguish fake news from hard news.
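To make the amplification mechanism concrete, here is a toy illustration of engagement-weighted ranking; the weights are invented for demonstration and do not reflect any real platform's proprietary algorithm. Because shares and comments are weighted heavily, an emotionally charged false post can outrank a sober correction.

```python
# Toy illustration of engagement-weighted feed ranking. The weights are
# hypothetical; real platform algorithms are proprietary and far more complex.
posts = [
    {"title": "Sober correction of a rumour", "likes": 120, "shares": 10, "comments": 15},
    {"title": "Outrage-bait false claim", "likes": 90, "shares": 400, "comments": 250},
]

def engagement_score(post: dict) -> float:
    # Shares and comments are weighted above likes, since they
    # actively push content into new feeds.
    return post["likes"] * 1.0 + post["shares"] * 5.0 + post["comments"] * 3.0

# The false claim ranks first despite having fewer likes.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.1f}  {post['title']}")
```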
Social media platforms face unique challenges in regulating misinformation while balancing freedom of speech and expression with user engagement. The scale at which content is created and published, differing regulatory standards, and the need to moderate misinformation without infringing on freedom of expression all complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, which influence public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation, as the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow a free, open, and borderless internet to function. The principle is embodied in Section 230 of the US Communications Decency Act (CDA) and Section 79 of India's Information Technology Act, and it has played a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It also allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times. An example is the European Union's Digital Services Act (DSA) of 2022. The Act requires companies with at least 45 million monthly users to create systems to control the spread of misinformation, hate speech, and terrorist propaganda, among other things. Companies that fail to comply risk penalties of up to 6% of global annual revenue or even a ban in EU countries.
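As a back-of-the-envelope illustration of how that 6% ceiling scales, consider the sketch below; the revenue figure is hypothetical.

```python
# Hypothetical illustration of the DSA penalty ceiling (6% of global
# annual revenue). The revenue figure is invented for demonstration.
global_annual_revenue_eur = 50_000_000_000  # hypothetical: EUR 50 billion
max_fine_eur = global_annual_revenue_eur * 0.06
print(f"Maximum DSA fine: EUR {max_fine_eur:,.0f}")  # EUR 3,000,000,000
```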
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a big challenge, primarily because of the quantity of data in question and the speed at which it is generated. It further leads to legal liabilities, operational costs and reputational risks.
- Platforms can face backlash in instances of both over-moderation and under-moderation. Over-moderation can be seen as censorship and is often burdensome; under-moderation can be seen as insufficient governance that fails to protect the rights of users.
- Another challenge is more technical: the limitations of AI and algorithmic moderation in detecting nuanced misinformation. This points to the need for human oversight to sift through content, including AI-generated content, that automated systems misjudge (see the sketch after this list).
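As a sketch of how platforms can combine automation with human oversight, the following hypothetical triage routine acts automatically only on high-confidence classifier scores and routes the ambiguous middle band, where nuanced misinformation tends to fall, to human reviewers. The thresholds and the classifier producing the scores are assumptions for illustration.

```python
# Hypothetical moderation triage: automated action only at high confidence,
# human review for the ambiguous middle band. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    post_id: str
    misinfo_score: float  # 0.0 = benign, 1.0 = certain misinformation
    action: str

def triage(post_id: str, misinfo_score: float,
           remove_at: float = 0.95, review_at: float = 0.60) -> ModerationResult:
    """Route a post based on a classifier's misinformation score."""
    if misinfo_score >= remove_at:
        action = "auto-remove"    # high confidence: act automatically
    elif misinfo_score >= review_at:
        action = "human-review"   # ambiguous: nuance needs a person
    else:
        action = "allow"          # low risk: leave up
    return ModerationResult(post_id, misinfo_score, action)

for pid, score in [("p1", 0.98), ("p2", 0.72), ("p3", 0.10)]:
    print(triage(pid, score))
```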
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding the public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

Introduction
In an era where digitalisation is transforming every facet of life, protecting personal data becomes crucial. The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) is a significant step taken by the Indian Parliament, setting forth a comprehensive framework for digital personal data. The Draft Digital Personal Data Protection Rules, 2025 have recently been released for public consultation to supplement the Act and ensure its smooth implementation once finalised. While the draft rules have certain positive aspects, several gaps remain that require attention. The DPDP Act, 2023 recognises an individual’s right to protect their personal data, providing control over the processing of personal data for lawful purposes. The Act applies to data available in digital form as well as data that is not digital but is subsequently digitised. While the Act is intended to offer individuals (Data Principals) wide control over their personal information, its impact on vulnerable groups such as persons with disabilities requires closer scrutiny.
Persons with Disabilities as Data Principals
The term ‘data principal’ is defined under Section 2(j) of the DPDP Act as the person to whom the personal data relates, and it includes a person with a disability. A lawful guardian acting on behalf of such a person is also included within this definition. As a result, a lawful guardian acting on behalf of a person with a disability has the same rights and responsibilities as a data principal under the Act.
- Section 9 of the DPDP Act, 2023 states that before processing the personal data of a person with a disability who has a lawful guardian, the data fiduciary must obtain verifiable consent from that guardian, ensuring proper protection of the person with disability's data privacy.
- Under Section 11, the data principal has the right to access information about the personal data being processed by the data fiduciary.
- Section 12 provides the right to correction and erasure of personal data by making a request in a manner prescribed by the data fiduciary.
- A right to grievance redressal must be provided to the data principal in respect of any act or omission by the data fiduciary or the consent manager in the performance of their obligations.
- Under Section 14, the data principal has the right to nominate any other person to exercise the rights provided under the Act in case of death or incapacity.
Provision of Consent and Its Implications
The three key components of Consent that can be identified under the DPDP Act, are:
- Explicit and Informed Consent: Consent given for the processing of data by the data principal, or by a lawful guardian in the case of persons with disabilities, must be clear, free, and informed as per Section 6 of the Act. The data fiduciary must specify an itemised description of the personal data required, along with the specified purpose and a description of the goods or services that such processing would provide (Rule 3 of the Draft Digital Personal Data Protection Rules).
- Verifiable Consent: Section 9 of the DPDP Act provides that the data fiduciary must obtain the verifiable consent of the lawful guardian before processing any personal data of a person with a disability. Rule 10 of the Draft Rules obligates the data fiduciary to adopt measures to ensure that the consent given by the lawful guardian is verifiable before the data is processed.
- Withdrawal of Consent: The data principal, or the lawful guardian where applicable, can withdraw consent for the processing of data at any point by making a request to the data fiduciary. (A minimal illustrative sketch of these three components follows this list.)
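The sketch below models the three components as a minimal consent record: an itemised purpose, a verifiable-guardian flag, and withdrawal. The field names and validity checks are illustrative assumptions, not a format prescribed by the Act or the Draft Rules.

```python
# Minimal, illustrative model of a DPDP-style consent record covering the
# three components above. Field names and checks are assumptions for
# demonstration, not a format prescribed by the Act or the Draft Rules.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    data_principal_id: str
    purpose: str                           # specified purpose (informed consent)
    data_items: List[str]                  # itemised description of personal data
    guardian_id: Optional[str] = None      # set when a lawful guardian consents
    guardian_verified: bool = False        # verifiable consent (Section 9 / Rule 10)
    withdrawn_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        if self.withdrawn_at is not None:
            return False                   # withdrawal ends the basis for processing
        if self.guardian_id is not None and not self.guardian_verified:
            return False                   # guardian consent must be verified first
        return True

    def withdraw(self) -> None:
        self.withdrawn_at = datetime.now(timezone.utc)

record = ConsentRecord(
    data_principal_id="dp-001",
    purpose="delivery of assistive-reading service",
    data_items=["name", "email"],
    guardian_id="lg-042",
    guardian_verified=True,
)
print(record.is_valid())  # True
record.withdraw()
print(record.is_valid())  # False: processing must stop
```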
Although the Act includes certain provisions that focus on the inclusivity of persons with disabilities, the interpretation of those provisions raises concerns.
Concerns related to provisions for Persons with Disabilities under the DPDP Act:
- Lack of definition of ‘persons with disabilities’: Neither the DPDP Act nor the Draft Rules defines the term ‘persons with disabilities’. This creates confusion as to which categories and degrees of disability are covered. The Rights of Persons with Disabilities Act, 2016 clearly defines ‘person with benchmark disability’, ‘person with disability’ and ‘person with disability having high support needs’. This categorisation is essential to determine the extent to which a person with a disability needs a lawful guardian, and it is missing from the DPDP Act.
- Lack of autonomy: Though the definition of data principal includes persons with disabilities, decision-making authority has been handed to their lawful guardians. Because of the lack of clarity in the definition of ‘persons with disabilities’, the provision creates ambiguity for people with a lower degree of disability who are capable of making their own decisions, leaving them with no autonomy over decisions about the processing of their personal data.
- Safeguards against abuse of power by lawful guardians: Once verified by the data fiduciary, the lawful guardian can make decisions for the person with a disability. This raises concerns about the potential abuse of power by lawful guardians in the handling of personal data. The DPDP Act does not provide any specific protection against such abuse.
- Difficulty in verifying consent: The consent obtained by the data fiduciary must be verified, but the verification process is left to the discretion of the data fiduciary under Rule 10 of the Draft Rules. The authenticity of consent is difficult to determine, as verification is a complex process that lacks a standard format. Moreover, with ongoing technological advancements, it will be challenging to identify whether the information provided to verify consent is actually true.
CyberPeace Recommendations
The DPDP Act, 2023 is a major step towards making the data protection framework more comprehensive, however, the provisions related to persons with disabilities and powers given to lawful guardians acting on their behalf still need certain clarity and refinement within the DPDP Act framework.
- Consonance of the DPDP Act with the Rights of Persons with Disabilities (RPWD) Act, 2016: The RPWD and DPDP Acts should supplement each other and can be used to clear the existing ambiguities. For instance, the definition of ‘persons with disabilities’ under the RPWD Act can be adopted in the context of the DPDP Act, 2023.
- There must also be mechanisms and safeguards within the Act to prevent abuse of power by the lawful guardian. In cases of suspected abuse, the affected individual should be able to file a complaint with the Data Protection Board, which can then take the necessary action to determine whether power has been abused.
- Regulatory oversight and additional safeguards are required to ensure that consent is obtained in a manner that respects the rights of all individuals, including those with disabilities.
References:
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.meity.gov.in/writereaddata/files/259889.pdf
- https://www.indiacode.nic.in/bitstream/123456789/15939/1/the_rights_of_persons_with_disabilities_act%2C_2016.pdf
- https://www.deccanherald.com/opinion/consent-disability-rights-and-data-protection-3143441
- https://www.pacta.in/digital-data-protection-consent-protocols-for-disability.pdf
- https://www.snrlaw.in/indias-new-data-protection-regime-tracking-updates-and-preparing-for-compliance/