# FactCheck – False Claim of Lord Ram's Hologram in Srinagar - Video Actually from Dehradun
Executive Summary:
A video purporting to show Lord Ram's hologram on the clock tower at Lal Chowk, Srinagar, has gone viral online. The CyberPeace Research Team found that the footage is actually from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip is being shared widely on the internet, mostly on X and Facebook. The video shows a car passing a clock tower bearing a picture of Lord Ram; as the car moves forward, a screen by the side of the road plays songs about Lord Ram.

The claim is that the video is from Lal Chowk, Srinagar, Kashmir.

Fact Check:
The CyberPeace Research Team found the claim to be false. First, we ran keyword searches based on the caption and found that the clock tower in Srinagar does not resemble the one in the video.

We found an NDTV article on Lal Chowk's clock tower in Srinagar, which notes that it is the only clock tower there and stands in the middle of the road. This made us fairly confident that the video is not from Srinagar. We then broke the video down into frames and ran a reverse image search, which surfaced another video showing a similar tower in Dehradun.

Taking the cue, we searched for the tower in Dehradun and compared it with the video. This confirmed that the structure is the clock tower in Paltan Bazar, Dehradun, and that the video is indeed from Dehradun, not Srinagar.
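The frame-based reverse image search step can be sketched in code. This is an illustrative outline only, not the team's actual tooling: frame extraction itself would typically be done with a tool such as ffmpeg, and the simple "average hash" below (a standard perceptual-hashing technique) is one way to skip near-duplicate frames before submitting them to a reverse image search engine.

```python
# Sketch of the frame-sampling step used before a reverse image search.
# Frames would normally be extracted with a tool such as ffmpeg, e.g.:
#   ffmpeg -i clip.mp4 -vf fps=1 frame_%03d.png
# A simple "average hash" lets us skip near-duplicate frames so only
# visually distinct ones are submitted to the search engine.

def average_hash(gray_frame):
    """Compute a 64-bit average hash of an 8x8 grayscale frame.

    gray_frame: 8x8 list of lists of pixel intensities (0-255).
    Returns an int whose bits mark pixels brighter than the mean.
    """
    pixels = [p for row in gray_frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def distinct_frames(frames, threshold=5):
    """Keep only frames whose hash differs enough from the last kept one."""
    kept, last = [], None
    for frame in frames:
        h = average_hash(frame)
        if last is None or hamming(h, last) > threshold:
            kept.append(frame)
            last = h
    return kept
```

In practice, each kept frame would then be uploaded to a reverse image search service (such as Google Lens or TinEye) to locate visually similar footage.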
Conclusion:
After a thorough fact-check investigation of the video and its origin, we found that the visuals of Lord Ram on the clock tower are from Dehradun, not Srinagar. The claim circulating online that the video shows Lord Ram in Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake
Related Blogs

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing several tasks and can assist individuals and business entities alike. Certain regulatory mechanisms have also been established for the ethical and reasonable use of such advanced technologies. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and various cyber frauds. Such misuse has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of producing images and videos that look realistic but are in fact generated by machine algorithms. Because the technology is easily accessible, fraudsters misuse it to commit cyber crimes and to deceive and scam people with manipulated audio and video content that appears very realistic but is fake.
Voice cloning
Audio can be deepfaked too: a voice clone closely resembles a person's real voice but is generated by AI. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. Using deepfake technology to impersonate the colleague's face, the scammer asked for financial help of ₹40,000.
Uttarakhand Police issues warning on the rising trend of AI-based scams
Recently, the Uttarakhand Police's Special Task Force (STF) issued a warning about the spread of AI-based scams, such as deepfake and voice-cloning scams, targeting innocent people. The police expressed concern that several incidents have been reported in which victims were lured by cybercriminals. The criminals exploit advanced technologies to make people believe they are talking to close friends or relatives, when in fact they are interacting with fake voice clones or deepfake video calls. The scammers then ask for immediate financial help, ultimately causing financial losses to their victims.
Tamil Nadu Police issues advisory on deepfake scams
Cyber criminals misuse deepfake technologies to deceive people and target them for financial gain. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams, warning that fraudsters are creating highly convincing images, videos, and voice clones to defraud innocent people. The advisory asks users to limit the personal data they share online, adjust their privacy settings, and promptly report any suspicious activity or cyber crime to 1930 or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to compromised video quality: deepfake videos often have poor quality or unusual blurring, which calls their genuineness into question. Deepfake videos also often loop or freeze unnaturally, indicating that the content may be fabricated.
- Whenever you receive a request for immediate financial help, act responsibly and verify the situation by contacting the person directly on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, leaving the victim no time to think and pressuring them into a quick decision. Scammers invent sudden emergencies and demand financial support on an urgent basis.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Pay attention to anything suspicious, and avoid sharing voice notes with unknown users, because scammers can use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, report it on the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) or via the 1930 toll-free helpline, where cyber fraud, including financial crimes, can be reported.
Conclusion
Cybercriminals leverage AI-powered technologies to commit cyber crimes such as deepfake and voice-clone scams that lure innocent people. Hence, there is a need for awareness and caution among the public. We should stay vigilant about the growing incidence of AI-based cyber scams and follow best practices to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/

The European Union (EU) has made trailblazing efforts on data protection and privacy, producing the most comprehensive and detailed regulation to date, the GDPR (General Data Protection Regulation). While countries worldwide continue to grapple with framing their own laws, the EU is already taking on tech giants and focusing on the road ahead. Its contentious history with Meta makes the launch of Meta's AI assistant in the EU a complex process, shaped by stringent data privacy regulations and ongoing debates over copyright and ethical AI practices. This development is significant because the EU and Meta have clashed before (including fines and pushback concerning Meta's services) over data privacy compliance under the GDPR, antitrust concerns (targeted ads and Facebook Marketplace activities), and content moderation with respect to the spread of misinformation.
Privacy and Data Protection Concerns
A significant part of operating Large Language Models (LLMs) is training them on a large repository of data from which they can source plausible answers. If a model does not find relevant information, or a request falls outside the scope it is programmed to answer, it will still follow instructions, but with reduced accuracy in its response. Meta's initial plan to train its AI models using publicly available content from adult users in the EU received a setback from privacy regulators. The Irish Data Protection Commission (DPC), acting as Meta's lead privacy regulator in Europe, raised the issue and requested a delay in the rollout to assess its compliance with the GDPR. It has raised similar concerns about Grok, the AI tool of X, to assess whether EU users' data was lawfully processed for training it.
In response, Meta stalled the release of the feature for around a year, agreed to exclude private messages and data from users under the age of 18, and implemented an opt-out mechanism for users who do not wish their public data to be used for AI training. This approach aligns with GDPR requirements, which mandate a clear legal basis for processing personal data, such as explicit consent or demonstrated legitimate interest, along with the option to withdraw consent at a later stage if the user wishes. The version available at the moment is a text-based assistant that is not capable of image generation but can help with brainstorming, planning, and answering queries using web-based information. However, Meta has assured users that it will expand and explore AI features in the near future as it continues to cooperate with regulators.
Regulatory Environment and Strategic Decisions
The EU's regulatory landscape, characterised by the GDPR and the forthcoming AI Act, presents challenges for tech companies like Meta. Citing the "unpredictable nature" of EU regulations, Meta has decided not to release its multimodal Llama AI model (capable of processing text, images, audio, and video) in the EU. This decision underscores the tension between innovation and regulatory compliance, as companies navigate the complexities of deploying advanced AI technologies within strict legal frameworks.
Implications and Future Outlook
Meta's experience highlights the broader challenges faced by AI developers operating in jurisdictions with robust data protection laws. The most critical issue for now is striking a balance between leveraging user data for AI advancement and respecting individual privacy rights. As the EU continues to refine its regulatory approach to AI, companies need to adapt their strategies to ensure compliance while fostering innovation. Stringent measures and regular assessments also keep big tech companies accountable as they operate for profit as well as in the public interest.
Reference:
- https://thehackernews.com/2025/04/meta-resumes-eu-ai-training-using.html
- https://www.thehindu.com/sci-tech/technology/meta-to-train-ai-models-on-european-users-public-data/article69451271.ece
- https://about.fb.com/news/2025/04/making-ai-work-harder-for-europeans/
- https://www.theregister.com/2025/04/15/meta_resume_ai_training_eu_user_posts/
- https://noyb.eu/en/twitters-ai-plans-hit-9-more-gdpr-complaints
- https://www.businesstoday.in/technology/news/story/meta-ai-finally-comes-to-europe-after-a-year-long-delay-but-with-some-limitations-468809-2025-03-21
- https://www.bloomberg.com/news/articles/2025-02-13/meta-opens-facebook-marketplace-to-rivals-in-eu-antitrust-clash
- https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html#:~:text=Many%20civil%20society%20groups%20and,million%20for%20a%20data%20leak.
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_5801
- https://www.thehindu.com/sci-tech/technology/european-union-accuses-facebook-owner-meta-of-breaking-digital-rules-with-paid-ad-free-option/article68358039.ece
- https://www.theregister.com/2025/04/14/ireland_investigation_into_x/
- https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations
- https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu

Introduction
The Ministry of Electronics and Information Technology (MeitY) recently issued the “Email Policy of Government of India, 2024.” It is an updated email policy for central government employees, requiring the exclusive use of official government emails managed by the National Informatics Centre (NIC) for public duties. The policy replaces 2015 guidelines and prohibits government employees, contractors, and consultants from using their official email addresses on social media or other websites unless authorised for official functions. The policy aims to reinforce cybersecurity measures and protocols, maintain secure communications, and ensure compliance across departments. It is not legally binding, but its gazette notification ensures compliance and maintains cyber resilience in communications. The updated policy is also aligned with the newly enacted Digital Personal Data Protection Act, 2023.
Brief Highlights of Email Policy of Government of India, 2024
- The Email Policy of the Government of India, 2024 is divided into three parts: Part I: Introduction, Part II: Terms of Use, and Part III: Functions, Duties and Responsibilities, with an annexe defining certain organisation types in relation to the policy.
- The policy directs users not to use their NICeMail addresses to register on any social media platform, website, or mobile application, except in the performance of official duties or with due authorisation from the competent authority.
- Under this new policy, “core use organisations” (central government departments and other government-controlled entities that do not provide goods or services on commercial terms) and its users shall use only NICeMail for official purposes.
- However, where a Core Use Organisation has an office or establishment outside India, it may, with due approval, use alternative email services hosted outside India to ensure the availability of local communication channels under exigent circumstances.
- Core Use Organisations that operate their own independent email servers, including those dealing with national security, may continue doing so provided the servers are hosted in India. They should also consider migrating their email services to NICeMail for security and uniform policy enforcement.
- The policy also requires departments currently using @gov.in or @nic.in addresses to migrate to @departmentname.gov.in mail domains, so that the sanctity and integrity of information are maintained when officials are transferred between departments or ministries, and so that a ministry or department does not lose access to official communication. For this, the department or ministry must register its domain name with NIC; for instance, MeitY has registered the mail domain @meity.gov.in. The policy gives government departments six months to complete this migration.
- The policy also distinguishes between (1) organisation-linked email addresses and (2) service-linked email addresses. The rules for organisation-linked email addresses are laid down in paragraphs 5.3.2(a) and 5.4 to 5.6.3, and those for service-linked email addresses in paragraphs 5.3.2(b) and 5.7 to 5.7.2 of the official policy document.
- Further, the new policy includes specific directives on separating the email addresses of regular government employees from those of contractors or consultants to improve operational clarity.
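The domain convention described above can be illustrated with a small sketch. This is a hypothetical helper, not part of the policy or any official NIC tooling: it merely classifies an address as following the new department-specific convention, using a legacy generic domain slated for migration, or being external to government domains.

```python
# Hypothetical sketch of the @departmentname.gov.in convention described
# in the 2024 policy. The domain lists here are illustrative only and are
# not drawn from any official NIC registry.

GENERIC_DOMAINS = {"gov.in", "nic.in"}  # legacy generic domains to be migrated

def classify_address(address):
    """Classify a government email address under the 2024 policy convention.

    Returns "department" for department-specific domains such as
    @meity.gov.in, "legacy" for bare @gov.in / @nic.in addresses that the
    policy asks departments to migrate away from, and "external" otherwise.
    """
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in GENERIC_DOMAINS:
        return "legacy"
    if domain.endswith(".gov.in") or domain.endswith(".nic.in"):
        return "department"
    return "external"
```

For example, under this sketch `officer@meity.gov.in` would be classified as "department", while `clerk@gov.in` would be flagged as "legacy" and due for migration within the six-month window.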
CyberPeace Policy Outlook
The revised Email Policy of the Government of India reflects a proactive response to evolving cybersecurity challenges and aims to maintain cyber resilience across government email communications. The policy is a significant step towards securing inter-government and intra-government communications. As a cybersecurity expert organisation, we emphasise the importance of protecting sensitive data against cyber threats, particularly in a world increasingly targeted by sophisticated phishing and malware attacks, and we advocate for safe and secure online communication and information exchange. Email communications carry sensitive information and therefore require robust policies and mechanisms to safeguard them, ensuring that sensitive data is shielded through regulated, secure email usage backed by the technical capabilities for safe use. The proactive step taken by MeitY is commendable and aligned with securing governmental communication channels.
References:
- https://www.meity.gov.in/writereaddata/files/Email-policy-30-10-2024.pdf (official document of the Email Policy of Government of India, 2024)
- https://www.hindustantimes.com/india-news/dont-use-govt-email-ids-for-social-media-central-govt-policy-for-employees-101730312997936.html#:~:text=Government%20employees%20must%20not%20use,email%20policy%20issued%20on%20Wednesday
- https://bwpeople.in/article/new-email-policy-issued-for-central-govt-employees-to-strengthen-cybersecurity-measures-537805
- https://www.thehindu.com/news/national/centre-notifies-email-policy-for-ministries-central-departments/article68815537.ece