#FactCheck - AI-Generated Video Falsely Shows Samay Raina Making a Joke About Rekha
Executive Summary:
A viral video circulating on social media appears to show comedian Samay Raina casually making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, who looks visibly unsettled, during the shoot of a Kaun Banega Crorepati (KBC) Influencer Special episode. The joke plays on long-standing gossip and rumours of unspoken tensions between the two Bollywood legends. Our research found that the video is artificially manipulated and does not reflect genuine content: the joke does not appear anywhere in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, emphasizing the need for increased public vigilance and awareness in verifying online information.

Claim:
The video claims that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you and circle have Rekha (line)," playing on the Hindi word "rekha," which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is available on the Sony SET India YouTube channel. Our analysis showed that no part of the episode features comedian Samay Raina cracking a joke about actress Rekha. A technical analysis using the Hive Moderation tool further found that the viral clip is AI-generated.
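For readers curious how such a check works in practice, below is a minimal, illustrative Python sketch of submitting an extracted video frame to an AI-content-detection service. The endpoint URL, header names, and response field here are placeholders invented for illustration, not Hive Moderation's documented API; consult the detection provider's own documentation for the real interface.

```python
# Illustrative sketch only: submit one extracted video frame to a
# (hypothetical) AI-content-detection API and read back a score.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # placeholder endpoint, not a real service
API_KEY = "YOUR_API_KEY"  # placeholder credential

def check_ai_generated(frame_path: str) -> float:
    """Submit one extracted video frame and return an AI-likelihood score."""
    with open(frame_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape for illustration: {"ai_generated_score": 0.97}
    return response.json()["ai_generated_score"]

if __name__ == "__main__":
    score = check_ai_generated("viral_clip_frame.jpg")  # placeholder file name
    print(f"AI-generated likelihood: {score:.2%}")
```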

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such content poses a serious threat of online manipulation, which makes it all the more important to fact-check any news against credible sources before sharing it. Promoting media literacy will be key to combating misinformation at a time when AI-generated content can so easily be misused.
- Claim: Samay Raina cracked a joke about Rekha in front of Amitabh Bachchan during the KBC Influencer Special
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
In today’s digital age, everyone is online, and the healthcare sector worldwide is no exception. The latest victim of a data breach is Hong Kong healthcare provider OT&P Healthcare, which recently suffered a breach exposing the data of around 100,000 patients, including their medical histories, causing concern to patients and their families. The breach highlights the vulnerability of the healthcare industry and the importance of cybersecurity measures to protect sensitive information. This blog explores the data breach, its impact on patients and families, and best practices for safeguarding sensitive data.
Background: On 13 March 2023, cybercriminals breached OT&P Healthcare's systems and gained access to patients' sensitive data; such breaches typically involve methods including phishing attacks, malware, and the exploitation of software vulnerabilities. According to OT&P Healthcare, it is working with law enforcement and has hired a cybersecurity firm to investigate the incident and tighten its security procedures. As with other data breaches, the inquiry will likely take some time to uncover the actual source and scope of the intrusion. Regardless of the cause of the breach, this event emphasises the significance of frequent cybersecurity assessments, vulnerability testing, and proactive data protection measures. Given these dangers, the healthcare sector must be cautious in preserving patients' personal and medical records, which are sensitive in nature.
Is confidentiality at stake due to data breaches?
Medical data breaches represent a huge danger to patients, with serious ramifications for their privacy, financial security, and physical health. Some of the potential hazards and effects of medical data breaches are as follows:
- Compromise of patient data: Medical data breaches can expose patients’ sensitive information, such as their medical history, diagnoses, treatments, and medication regimens. If this highly personal history reaches the wrong hands, it could harm a patient's reputation.
- Identity theft: Data stolen by cybercriminals may be used to open credit accounts and apply for loans. Patients can suffer severe financial and psychological stress because of identity theft, since they may spend years attempting to rebuild their credit and regain their good name.
- Medical fraud: Medical data breaches can also result in medical fraud, which occurs when hackers use stolen medical information to bill insurance companies for services that were never performed or for bogus treatments or procedures. Medical fraud can cause financial losses for patients and insurers and can lead to individuals receiving ineffective or risky medical care.
Impact on patients
A data breach causes not only financial loss but may also profoundly impact patients' mental health and emotional well-being. Let’s understand some of the psychological impacts:
- Anxiety and stress: Patients whose medical data has been exposed may experience stress and anxiety as they worry that the leaked data could be misused.
- Loss of faith: Patients may lose faith in their healthcare providers if they believe their personal and medical information has not been properly protected. As a result, patients may be reluctant to disclose sensitive information to their healthcare professionals, compromising the quality of their medical care.
- Sense of embarrassment: Patients may feel exposed or ashamed if their sensitive medical information is revealed, particularly if it relates to a sensitive or stigmatised condition. This might lead to social isolation and a reluctance to seek further medical treatment.
- Post-Traumatic Stress Disorder (PTSD): Patients who have experienced a data breach may have PTSD symptoms such as nightmares, flashbacks, and avoidance behaviour. This can have long-term consequences for their mental health and quality of life.
Legal Implications of Data Breach
Patients have certain legal rights and remedies when a healthcare data breach occurs. Let’s have a look at them:
- Legal liability: Healthcare providers have a legal obligation to protect data under various privacy and security laws. If they fail to take appropriate measures to protect patient data, they may be held legally liable for the resulting harm.
- Legal recourse: Patients impacted by a healthcare data leak have the legal right to seek compensation and hold healthcare providers and organisations accountable. This could involve suing the healthcare practitioner or organisation responsible for the breach.
- Right to seek compensation: Patients who have suffered from the data loss are entitled to seek compensation.
- Notifications: A data breach affects both the organisation and its customers. In this case, it is the responsibility of OT&P to notify its patients about the data breach and inform them of its consequences.
Takeaways from the OT&P Healthcare Data Breach
With data breaches growing in the healthcare industry, here are some lessons that can be learned from the Hong Kong incident:
- Cybersecurity: The OT&P Healthcare data breach points to the vital need to prioritise cybersecurity in healthcare. To secure themselves, hospitals and other healthcare organisations must keep their software up to date and deploy robust protections for their data.
- Regular risk assessments: These assessments help find system vulnerabilities and security issues, allowing healthcare providers and organisations to take the necessary actions to avoid data breaches and boost their cybersecurity defences.
- Staff training: Healthcare workers should be taught cybersecurity best practices, such as detecting and responding to phishing attempts, handling sensitive data, and reporting suspected security breaches. This training should be ongoing to keep workers updated on the newest cybersecurity trends and threats.
- Incident response strategy: Healthcare providers and organisations should have an incident response plan in place to deal with data breaches and other security incidents. The plan should include protocols for reporting incidents, containing the breach, and notifying patients and the relevant authorities.
Conclusion
The recent data breach at the Hong Kong healthcare provider has shaken not only patients' sense of security but also their trust in healthcare providers. As we continue to rely on digital technology for medical records and healthcare delivery, it is essential that healthcare providers and organisations take proactive steps to protect patient data from cyber-attacks and data breaches.

Executive Summary:
A viral image on social media depicts injuries on the face of MP (Member of Parliament, Lok Sabha) Kangana Ranaut, who was allegedly assaulted by a CISF officer at Chandigarh airport. A reverse image search traced the photo back to 2006: it was part of an anti-mosquito commercial and does not feature the MP, Kangana Ranaut. The findings contradict the claim that the photos are evidence of injuries resulting from the incident involving the MP. It is always important to verify the truthfulness of visual content before sharing it, to prevent misinformation.

Claims:
Images circulating on social media platforms claim to show injuries on the MP, Kangana Ranaut’s face caused by an alleged assault by a female CISF officer at Chandigarh airport. The claim suggests that the photos are evidence of the altercation and the resulting injuries suffered by the MP, Kangana Ranaut.



Fact Check:
When we received the posts, we reverse-searched the image and found another photo that closely resembles the viral one. The earring visible in the viral image allowed us to match it against the newly found photo.

The reverse image search revealed that the photo was originally uploaded in 2006 and is unrelated to the MP, Kangana Ranaut. It depicts a model in an advertisement for an anti-mosquito spray campaign.
Comparing the earrings in the two photos confirms that they show the same person, validating the match.
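This kind of near-duplicate check can also be done programmatically. The sketch below uses perceptual hashing via the open-source Pillow and imagehash libraries to compare two images; the file names are placeholders, and the distance threshold is a common rule of thumb rather than a fixed standard.

```python
# A minimal near-duplicate check using perceptual hashing.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Placeholder file names for the viral image and the 2006 original.
viral = imagehash.phash(Image.open("viral_injury_image.jpg"))
original = imagehash.phash(Image.open("2006_ad_photo.jpg"))

distance = viral - original  # Hamming distance between the 64-bit hashes
print(f"Hash distance: {distance}")

# A small distance suggests the images are near-duplicates
# (e.g. one is a crop or recompression of the other).
if distance <= 10:  # rule-of-thumb threshold; tune for your use case
    print("Likely the same underlying photo.")
```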

Hence, we can confirm that the viral image of an injury mark on the MP, Kangana Ranaut’s face is fake and misleading; it was cropped from the original photo to misrepresent the context.
Conclusion:
Therefore, the viral photos on social media claiming to show injuries on the MP, Kangana Ranaut’s face after she was allegedly assaulted by a CISF officer at the airport in Chandigarh are fake. Detailed analysis established that the pictures have no connection with Ranaut; the photo comes from a 2006 anti-mosquito spray advertisement, so the allegation that these images show Ranaut’s injuries is false and misleading.
- Claim: Photos circulating on social media claim to show injuries on the MP, Kangana Ranaut's face following an assault by a female CISF officer at Chandigarh airport.
- Claimed on: X (Formerly known as Twitter), Threads, Facebook
- Fact Check: Fake & Misleading

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand. This promising technological advancement has the potential either to enrich the nest of our society or to destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. Even the limited use of generative AI in disinformation campaigns so far has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. Such nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the 2024 elections.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with an automated system.
Second, obtaining user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, chatbots should seek users' informed consent. Giving consumers easy ways to opt in and opt out gives them control over their data (a minimal sketch of such a consent gate appears at the end of these points).
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express consent before collecting personal information, and they must not sell or share this information for political purposes.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In all interactions, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
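To make the opt-in/opt-out principle above concrete, here is a minimal, hypothetical Python sketch of a consent gate. It is not any real product's implementation, only an illustration of gating data retention behind explicit, revocable consent.

```python
# Hypothetical consent gate: chat works regardless, but message
# *retention* for ad profiling requires explicit, revocable opt-in.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    ad_profiling_consent: bool = False  # opt-in, off by default

@dataclass
class ChatSession:
    prefs: UserPreferences
    retained_messages: list[str] = field(default_factory=list)

    def handle_message(self, text: str) -> None:
        # The conversation itself is never blocked; only retention is gated.
        if self.prefs.ad_profiling_consent:
            self.retained_messages.append(text)

    def revoke_consent(self) -> None:
        # Opting out also deletes previously collected data.
        self.prefs.ad_profiling_consent = False
        self.retained_messages.clear()

session = ChatSession(prefs=UserPreferences())
session.handle_message("hello")            # not retained: no consent yet
session.prefs.ad_profiling_consent = True  # explicit opt-in
session.handle_message("gift ideas?")      # retained after opt-in
session.revoke_consent()                   # opt-out wipes stored data
print(session.retained_messages)           # -> []
```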
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has articulated seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india