#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life glimpse of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, as if revealing the natural beauty of the hallowed mountain. It was circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques give the video a realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake-detection tool, to determine whether the video is AI-generated or real. The tool classified it as AI-generated.
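Detection tools of this kind typically score individual frames for the likelihood of AI generation and then aggregate those scores into an overall verdict. The sketch below illustrates one plausible aggregation scheme; the function name, threshold, and aggregation rule are illustrative assumptions and do not reflect Hive Moderation's actual API.

```python
# Illustrative sketch of aggregating per-frame AI-likelihood scores into a
# single verdict. The threshold and majority rule are assumptions for
# illustration only; they are not Hive Moderation's actual method or API.

def classify_video(frame_scores, threshold=0.9, min_fraction=0.5):
    """Label a video 'ai_generated' if enough frames score above threshold.

    frame_scores: per-frame probabilities (0.0-1.0) that a frame is AI-made.
    """
    if not frame_scores:
        raise ValueError("no frames scored")
    flagged = sum(1 for score in frame_scores if score >= threshold)
    return "ai_generated" if flagged / len(frame_scores) >= min_fraction \
        else "likely_real"

# Example: most frames score high, so the video is flagged as AI-generated.
verdict = classify_video([0.97, 0.95, 0.99, 0.88, 0.96])
```

In practice, a reviewer would combine such a tool's verdict with the manual checks described above (reverse image search, lighting and environmental inconsistencies) rather than rely on any single signal.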

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
Related Blogs

Introduction
According to a draft of the Digital Personal Data Protection Bill, 2023, the Indian government may have the authority to reduce the age at which users can agree to data processing to 14 years. Companies requesting consent to process children’s data, on the other hand, must demonstrate that the information is handled in a “verifiably safe” manner.
The Central Government might change the age limit for consent
The proposed Digital Personal Data Protection Bill 2022 in India attempts to protect the personal data of children under the age of 14 through several provisions. The proposed lowering of the age of consent is intended to relax the relevant norms and meet the demands of Internet companies. After a year, the government may reconsider the definition of a child with the goal of expanding coverage to children under the age of 14. The proposed shift in the age of consent has elicited varied views, with some experts suggesting that it might expose children to data-processing risks.
The definition of a child is understood to have been amended in the data protection Bill, which is anticipated to be submitted in Parliament’s Monsoon session, to an “individual who has not completed the age of eighteen years or such lower age as the central government may notify.” A child was defined as an “individual who has not completed eighteen years of age” in the 2022 draft.
Under deemed consent, the government has also added the 'legitimate business interest' clause
This clause allows businesses to process personal data without obtaining explicit consent if it is required for their legitimate business interests. The measure recognises that corporations have legitimate objectives, such as innovation, that can be pursued without jeopardising privacy.
Change in Data Protection Boards
The Digital Personal Data Protection Bill 2022, India’s new plan to secure personal data, represents a significant shift in strategy by emphasising outcomes rather than legislative compliance. This amendment will strengthen the Data Protection Board’s position, as its judgments on non-compliance complaints will establish India’s first systematic jurisprudence on data protection. The Cabinet has approved the bill, which may be introduced in Parliament in the Monsoon session starting on July 20.
The draft law leaves the selection of the Data Protection Board’s chairperson and members solely to the discretion of the central government, making it a central government set-up board. The government retains control over the board’s composition, terms of service, and so on. The bill does specify, however, that the Data Protection Board will be completely independent and will follow a strictly adjudicatory procedure to adjudicate data breaches. It has the same status as a civil court, and its rulings can be appealed.
India's first regulatory body in charge of preserving privacy
Some expected amendments to the law include a blacklist of countries to which Indian data cannot be transferred and fewer penalties for data breaches. The bill’s scope is limited to processing digital personal data within Indian territory, which means that any offline personal data and anything not digitised will be exempt from the legislation’s jurisdiction. Furthermore, the measure is silent on the governance of digital paper records.
Conclusion
The Digital Personal Data Protection Bill 2022 is a much-needed piece of legislation that will replace India’s current data protection regime and assist in preserving individuals’ rights. The Central Government is considering lowering the age of consent from 18 to 14 years. The bill underlines the need for verifiable parental consent before processing a child’s personal data, including for those under 18. This provision seeks to ensure that parents or legal guardians have a say in the processing of their child’s personal data.

Introduction
In today's digital age, we consume a lot of information and content on social media apps, and it has become a daily part of our lives. Additionally, the algorithm of these apps is such that once you like a particular category of content or show interest in it, the algorithm starts showing you a lot of similar content. With this, the hype around becoming a content creator has also increased, and people have started making short reel videos and sharing a lot of information. There are influencers in every field, whether it's lifestyle, fitness, education, entertainment, vlogging, and now even legal advice.
The online content, reels, and viral videos by social media influencers giving legal advice can have far-reaching consequences. ‘LAW’ is a vast subject where even a single punctuation mark holds significant meaning. If it is misinterpreted or only partially explained in social media reels and short videos, it can lead to serious consequences. Laws apply based on the facts and circumstances of each case, and they can differ depending on the nature of the case or offence. This trend of ‘swipe for legal advice’ or ‘law in 30 seconds’, along with the rise of the increasing number of legal influencers, poses a serious problem in the online information landscape. It raises questions about the credibility and accuracy of such legal advice, as misinformation can mislead the masses, fuel legal confusion, and create risks.
Bar Council of India’s stance against legal misinformation on social media platforms
The Bar Council of India (BCI) on Monday (March 17, 2025) expressed concern over the rise of self-styled legal influencers on social media, stating that many without proper credentials spread misinformation on critical legal issues. Additionally, “Incorrect or misleading interpretations of landmark judgments like the Citizenship Amendment Act (CAA), the Right to Privacy ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India, and GST regulations have resulted in widespread confusion, misguided legal decisions, and undue judicial burden,” the body said. The BCI also ordered the mandatory cessation of misleading and unauthorised legal advice dissemination by non-enrolled individuals and called for the establishment of stringent vetting mechanisms for legal content on digital platforms. The BCI emphasised the need for swift removal of misleading legal information.
Conclusion
Legal misinformation on social media is a growing issue that not only disrupts public perception but also influences real-life decisions. The internet is turning complex legal discourse into a chaotic game of whispers, with influencers sometimes misquoting laws and self-proclaimed "legal experts" offering advice that wouldn't survive in a courtroom. The solution is not censorship, but counterbalance. Verified legal voices need to step up, fact-checking must be relentless, and digital literacy must evolve to keep up with the fast-moving world of misinformation. Otherwise, "legal truth" could be determined by whoever has the best engagement rate, rather than by legislation or precedent.
Introduction
MEITY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global leader in forensics-driven cybersecurity, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. This initiative is the first ANAB-accredited AI security certification of its kind. The CSPAI also complements global AI governance efforts: international frameworks such as the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies to ensure fairness, transparency, and accountability, serve as reference points for this initiative.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program that focuses on Cyber Security for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies. Such partnerships between the public and private players bridge the gap between government regulatory needs and the technological expertise of private players, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures to meet the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models in future AI deployments. It also highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI becomes an intrinsic part of business operations, the need for AI security expertise is growing across industries. With this in mind, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. The CSPAI program aims to build a safer digital future while fostering innovation and responsibility in the evolving cybersecurity landscape, with a focus on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This Public-Private Partnership between CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI), represents a groundbreaking initiative towards AI and its responsible usage. CSPAI addresses the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. By aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally, CSPAI aims to promote trustworthy and ethical AI usage. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. This certification will significantly impact the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications meet security and ethical standards in these sectors, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms