#FactCheck – False Claim of Lord Ram's Hologram in Srinagar - Video Actually from Dehradun
Executive Summary:
A video purporting to show Lord Ram's hologram on a clock tower at Lal Chowk in Srinagar has gone viral on the internet. The CyberPeace Research Team discovered that the footage is actually from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip is being shared across the internet, mostly on X and Facebook. The video shows a car passing a clock tower bearing an image of Lord Ram. As the car moves forward, a screen by the side of the road is shown playing songs about Lord Ram.

The claim is that the video is from Srinagar, Kashmir.

Similar Post:

Fact Check:
The CyberPeace Research Team found that the information is false. First, we ran keyword searches related to the caption and found that the clock tower in Srinagar does not resemble the one in the video.

We found an NDTV article about Srinagar Lal Chowk's clock tower, which is the only clock tower standing in the middle of the road there. This gave us reasonable confidence that the video is not from Srinagar. We then broke the video down into individual frames and ran reverse image searches on them, as sketched below.
We found another video showing a tower of similar structure in Dehradun.
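For readers who want to reproduce this step, the snippet below is a minimal sketch of how a clip can be split into still frames that can then be uploaded to a reverse image search engine. It assumes Python with OpenCV installed and a local copy of the video saved as viral_clip.mp4 (a hypothetical filename); it illustrates the general technique rather than the exact tooling used in this investigation.

```python
# Minimal sketch: split a video clip into still frames so that individual
# frames can be uploaded to a reverse image search engine (Google Lens,
# Yandex Images, TinEye, etc.).
# Assumptions: Python with OpenCV installed (pip install opencv-python)
# and a local copy of the clip saved as "viral_clip.mp4" (hypothetical name).
import os
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the viral clip
OUT_DIR = "frames"
FRAME_STEP = 30                 # keep ~1 frame per second for a 30 fps video

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)

index, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:                  # end of video or read error
        break
    if index % FRAME_STEP == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{index:05d}.jpg"), frame)
        saved += 1
    index += 1

cap.release()
print(f"Saved {saved} frames to '{OUT_DIR}/' for reverse image search")
```

Each saved frame can then be searched individually; frames that clearly show the clock tower are the most useful candidates for a match.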

Taking a cue from this, we searched for the tower in Dehradun and compared it with the one in the video. This confirmed that the tower is the clock tower in Paltan Bazaar, Dehradun, and that the video is from Dehradun, not Srinagar.
Conclusion:
After a thorough fact-check investigation into the video and its origin, we found that the visual of Lord Ram on the clock tower is from Dehradun, not Srinagar. The claim circulating among internet users that the visual is from Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake
Related Blogs
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public figures for deceitful purposes. A deepfake video of cricket star Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sports icon to issue a warning against the widespread misuse of technology.
Scenario of Deepfake
Sachin Tendulkar appeared in the deepfake video endorsing a gaming app called Skyward Aviator Quest. The video's startling realism has led some viewers to assume that the cricket legend genuinely supports it. Tendulkar, however, has taken to social media to emphasise that these videos are fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his worry about the exploitation of technology and urged people to report such videos, advertisements, and applications that spread disinformation. This incident underscores the importance of raising awareness and staying vigilant about the legitimacy of material circulated on social media platforms.
The Warning Signs
The deepfake video raises concerns not just for its lifelike representation of Tendulkar, but also for the material it promotes. An endorsement of gaming software that purports to help individuals make money is a significant red flag, especially when it comes from a well-known figure. This underscores the potential for deepfakes to be exploited for financial gain, and the importance of scrutinising information that appears too good to be true.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to be aware of potential signs of manipulation. Here are some pointers to help you spot deepfake videos:
- Facial Movements and Expressions: Look for unnatural facial movements or expressions.
- Body Motions and Posture: Take note of awkward body movements or inconsistencies in the person's posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and lip movements.
- Background and Content: Consider the video's context, especially if it shows a well-known figure endorsing something in an unexpected way.
- Source Verification: Verify the video's legitimacy by checking the official channels or accounts of the prominent person.
Conclusion
The proliferation of deepfake videos threatens the credibility of social media content. Sachin Tendulkar's response to the deepfake in which he appears serves as a warning to users to remain vigilant and report questionable material. As technology advances, it is critical that individuals and authorities collaborate to counteract the exploitation of AI-generated material and safeguard the integrity of online information.
Reference
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html

Introduction
In an age where the lines between truth and fiction blur with an alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deep fakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of the digital duplicitous tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but with no less damage, can be likened to a digital 'slip of the tongue' — an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the notion that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening the societal schisms, a significant portion of this digital deceit traffics in hate speech. Its contents are laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence on a scale far beyond anything seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deep fakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deep fakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.

Executive Summary:
A video circulating on social media appears to show comedian Samay Raina casually making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, leaving him visibly unsettled, while shooting the Influencer Special episode of Kaun Banega Crorepati (KBC). The joke plays on gossip and rumours of unspoken tensions between the two Bollywood legends. Our research found that the video is artificially manipulated and does not reflect genuine content; moreover, the specific joke in the video does not appear in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, emphasising the need for increased public vigilance and awareness in verifying online information.

Claim:
The claim is that, during the recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you has a Rekha (line)," playing on the Hindi word "rekha," which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is also available on the Sony SET India YouTube channel. Our analysis confirmed that no part of the episode shows comedian Samay Raina cracking a joke about actress Rekha. Further technical analysis using the Hive Moderation detection tool found that the viral clip is AI-generated; a rough illustration of how such an automated check works is given below.
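The sketch below shows, in general terms, how frames extracted from a suspect clip might be submitted to an AI-content detection service. The endpoint URL, headers, and response format are placeholders for illustration only (this is not the actual Hive Moderation API), and it assumes the Python requests library with an API key supplied via an environment variable.

```python
# Illustrative sketch only: submitting extracted frames to an AI-generated-
# content detection service. The endpoint URL, header names, and response
# format below are placeholders; this is NOT the actual Hive Moderation API.
# Assumes the `requests` library and an API key provided via the
# DETECTOR_API_KEY environment variable.
import os
import requests

API_URL = "https://example-detector.invalid/v1/analyze"  # placeholder endpoint
API_KEY = os.environ.get("DETECTOR_API_KEY", "")

def check_frame(path: str) -> dict:
    """Upload one video frame and return the detector's JSON verdict."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    frame_dir = "frames"  # frames extracted from the viral clip beforehand
    for name in sorted(os.listdir(frame_dir)):
        verdict = check_frame(os.path.join(frame_dir, name))
        print(name, verdict)  # e.g. a per-frame "AI-generated" likelihood score
```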

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such manipulation poses a serious threat online, making it all the more important to fact-check any news against credible sources before sharing it. Promoting media literacy will be key to combating misinformation, given the growing danger of AI-generated content being misused.
- Claim: Samay Raina cracked a joke about Rekha in front of Amitabh Bachchan during the KBC Influencer Special
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading