#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analyzed it for signs of digital manipulation.
CLAIMS:
The viral video claims to be a real aerial shot of Mount Kailash, as if exposing viewers to the natural beauty of the hallowed mountain. It circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques gave the video a realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we ran the video through Hive Moderation, a deepfake-detection tool, to determine whether it is AI-generated or real. It was found to be AI-generated.

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, on 1 August 2024, setting harmonized rules across all 27 EU Member States. It amends key regulations and directives to ensure a robust framework for AI technologies. The Act, a set of EU rules governing AI that had been in development for two years, takes a phased approach to implementation: various deadlines apply between now and full application, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools that threaten citizens' rights: biometric categorization systems, untargeted scraping of facial images, and systems that try to read emotions in the workplace and schools are banned, as are social scoring systems and, in some instances, predictive policing tools.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
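The "whichever is higher" rule above can be expressed as a one-line calculation. The sketch below is a minimal illustration for the top fine tier only; the turnover figures are hypothetical examples, not data from the Act:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine under the EU AI Act's top tier:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 200 million turnover: 7% is EUR 14 million,
# so the EUR 35 million floor applies.
print(max_fine_eur(200_000_000))    # 35000000
# A company with EUR 1 billion turnover: 7% is EUR 70 million, which is higher.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

Note that for any turnover below EUR 500 million, the fixed EUR 35 million floor dominates.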
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. AI posing unacceptable risk, such as social scoring systems and manipulative AI, is prohibited. The regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, such as chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, like AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open-licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
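The tiered structure described above can be sketched as a simple lookup. This is a deliberately simplified illustration drawn only from the examples named in this overview; actual classification under the Act rests on detailed legal criteria, not a dictionary:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"               # e.g. social scoring, manipulative AI
    HIGH = "allowed with strict obligations"  # e.g. biometrics, law enforcement, employment
    LIMITED = "transparency obligations"      # e.g. chatbots, deepfakes
    MINIMAL = "free use"                      # e.g. video games, spam filters

# Illustrative mapping built from the examples in the text above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI used in employment decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```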
Application & Timeline of the Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner, for instance the ban on AI systems posing unacceptable risks will apply six months after the entry into force. The Codes of Practice will apply nine months after entry into force. Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force. High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force. The expected timeline for the same is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; certain AI systems become prohibited.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
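The timeline above is anchored to the entry-into-force date, so each milestone can be derived by simple month arithmetic. A minimal sketch (milestone labels are shorthand for the bullets above):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def months_after(start: date, months: int) -> date:
    """Add whole calendar months to a date. Keeping the day-of-month is
    sufficient here because the anchor is the 1st."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

# Phased application deadlines described in the list above
milestones = {
    "Prohibitions (Chapters I & II)": months_after(ENTRY_INTO_FORCE, 6),
    "GPAI rules, governance, penalties": months_after(ENTRY_INTO_FORCE, 12),
    "Act applies in full (except Art. 6(1))": months_after(ENTRY_INTO_FORCE, 24),
    "Article 6(1) obligations": months_after(ENTRY_INTO_FORCE, 36),
}

for name, deadline in milestones.items():
    print(f"{deadline.isoformat()}: {name}")
```

Running this reproduces the February 2025, August 2025, August 2026, and August 2027 milestones listed above.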
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, and to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with corresponding obligations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and achieving compliance pose a conundrum for the companies concerned. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation. Hence, flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

On 11th November 2022, CyberPeace Foundation, in association with Universal Acceptance, successfully conducted a workshop on Universal Acceptance and Multilingual Internet for the students and faculty of BIT University under the CyberPeace Center of Excellence (CCoE).
CyberPeace Foundation has always been engaged in spreading awareness of the various developments, avenues, opportunities, and threats in cyberspace. The same has been the guiding principle of the CyberPeace Centre of Excellence, set up in collaboration with various esteemed educational institutes. With this workshop on Universal Acceptance and Multilingual Internet, we aimed to take these collaborations and efforts to a new height of knowledge and awareness. The workshop was instrumental in giving the academic and research community a wholesome outlook on the multilingual spectrum of the internet, including Internationalized Domain Names (IDNs) and Email Address Internationalization (EAI).
Date – 11th November 2022
Time – 10:00 AM to 12:00 PM
Duration – 2 hours
Mode – Online
Audience – Academia and Research Community
Participants Joined – 15
Crowd Classification – Engineering students (1st and 4th year, all streams) and faculty members
Organizer – Mr. Harish Chowdhary, UA Ambassador
Moderator – Ms. Pooja Tomar, Project Coordinator cum Trainer
Speakers – Mr. Abdalmonem Galila, Vice Chair, Universal Acceptance Steering Group (UASG), and Mr. Mahesh D Kulkarni, Director, Evaris Systems and Former Senior Director, CDAC, Government of India

The first session was delivered by Mr. Abdalmonem Galila, Vice Chair, Universal Acceptance Steering Group (UASG), on "Universal Acceptance (UA) and why UA matters?"
- What is Universal Acceptance?
- UA is a cornerstone of a digitally inclusive internet, ensuring that all domain names and email addresses work regardless of language, script, or character length.
- Achieving UA ensures that every person has the ability to navigate the internet.
- Different UA issues were discussed and explained.
- Systems targeted by UA and the implications were discussed in detail.
The second session was delivered by Mr. Mahesh D Kulkarni, Director, Evaris Systems, on the topic of "IDNs from the Indian languages perspective – challenges and solutions".
- The multilingual diversity of India and its impact were the focus.
- Most students were not aware of what Unicode and IDNs are, or of their usage.
- Students were briefed with real-time examples of IDNs and domain-name implementation using local languages.
- In-depth knowledge of and practical exposure to Universal Acceptance and the multilingual internet were provided to the students.
- Tools and resources for domain names and domain languages were explained.
- Language nuances of India's multilingual diversity were explained with real-time facts and figures.
- The concepts of IDN email, homograph attacks, and homographic variants were introduced with real-time examples.
- Security threats and IDNA protocols were explained.
- ABNF was explained.
- The stages of Universal Acceptance were explained.
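The IDN and homograph concepts covered in the sessions can be demonstrated with Python's built-in `idna` codec (which implements IDNA 2003), converting Unicode domain names to the ASCII-compatible "xn--" Punycode form used in DNS:

```python
# An internationalized domain name with a non-ASCII label:
print("bücher.example".encode("idna"))  # b'xn--bcher-kva.example'

# A homograph: the first letter below is CYRILLIC SMALL LETTER A (U+0430),
# not Latin "a" -- visually near-identical, but a different domain entirely.
spoofed = "\u0430pple.com"
print(spoofed.encode("idna"))           # b'xn--pple-43d.com'

# An all-ASCII name passes through unchanged:
print("apple.com".encode("idna"))       # b'apple.com'
```

The Punycode forms make the difference between the genuine and spoofed names visible, which is why browsers display suspicious mixed-script IDNs in their "xn--" form as a defence against homograph attacks.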

In the present digital era, a nation's strength is no longer measured only by the missiles and aircraft in its inventory; it also depends on the ability to defend its digital borders. In the global security environment that modern militaries operate in, major infrastructure like power grids and dams is increasingly targeted by cyberattacks. When communication channels are vulnerable to an information breach, cybersecurity becomes a crucial component of national defence.
Why is cybersecurity a crucial national security concern in the modern era?
Cybersecurity refers to the technologies and procedures that shield digital devices, networks, and systems from unauthorised access or attacks. In the context of national security, cyberattacks, in contrast to conventional warfare, are silent. They are swift and capable of causing massive disruption without a single case of physical infiltration. A cybersecurity breach in a military network may allow hostile states, terrorist organisations, or criminal networks to steal classified information or disrupt military infrastructure.
To fully comprehend the significance of cybersecurity, let's examine the various approaches, such as:
- Protecting critical infrastructure- Today's nations rely heavily on digital networks to run vital services like banking, transportation, electricity, water supply, and healthcare. A cyberattack on these systems could cause problems across the country and interfere with daily activities. It is therefore common for a nation's military forces to work in close synergy with other government agencies and private organisations to create a strong security ecosystem in this sector.
- Safeguarding military operations in the present age- The armed forces rely heavily on digital tools for communication, mission planning, surveillance, and coordination. If cyber intruders gain access to those systems, major operational hurdles can arise: breach of mission details, disruption of channels, and compromise of the confidentiality of military operations. These conditions make cybersecurity an essential complement to protecting physical bases and security architectures.
- Preventing cyber warfare- With the evolving geopolitical landscape, state and non-state actors now resort to cyberattacks to gather intelligence, disrupt security networks, and influence political outcomes. Strong cybersecurity helps nations detect, defend against, and respond to such threats effectively.
- Securing government databases- Government databases store sensitive information about citizens, military assets, diplomatic affairs, and major national infrastructure. If these are compromised, the nation's strategic position is weakened and its national security put at grave risk. Protecting government data must therefore be a priority.
How can countries improve their cybersecurity defences?
Countries all over the world are developing their cyber capabilities using a variety of tactics to protect against the increasing number of cyber threats. A few of these tactics are:
- Creating cyber defence units- Most contemporary armed forces have created specialised cyber units devoted to threat identification, responsible for monitoring dangers, stopping intrusions, and reacting quickly to cyberattacks.
- Public-Private Partnerships- To safeguard vital industries like energy grids, financial networks, and communication systems, the government collaborates with private businesses and technology suppliers. Additionally, these collaborations foster innovation to improve the overall defence against cyberattacks.
- Establishing international collaborations- Cyber threats do not respect borders. As a result, countries are increasingly sharing intelligence, best practices, and defensive strategies with their allies. Groups like NATO conduct joint cyber defence exercises to prepare for digital conflict.
Together, these collaborations help develop a united front against cybercrime.
Core Pillars of the modern military cyber defence
Modern defence strategies are built upon several key pillars designed to prevent, detect, and respond to cyber threats:
- Cyberspace as an operational domain- Militaries now treat cyberspace, like land, air, sea, and space, as a domain where wars can both begin and end, developing dedicated cyber units to conduct digital operations, defend networks, and engage in counter-cyber activities when required.
- Active and proactive defence- Instead of passively waiting for attacks to happen, active defence uses real-time monitoring tools to block threats as they arise. Proactive defence goes a step further by hunting for potential threats before they can reach the networks.
- Protection of vital infrastructures- The armed forces collaborate closely with civilian organisations and agencies to secure vital infrastructures that are important to the country. Critical infrastructure is protected from cyberattacks by layered defence, which includes encryption, stringent access control, and ongoing monitoring.
- Strengthening alliances- Countries can develop a strong and well-coordinated defence system by exchanging intelligence to carry out cooperative cyber operations.
- Fostering innovation and workforce development- Cyber threats evolve at a rapid pace, which calls for the military to invest in advanced technologies like AI-driven systems and secure cloud technologies, besides ensuring continuous cybersecurity training.
Conclusion
Modern militaries now protect their digital networks as rigorously as they defend their land and seas. Cybersecurity has become the new line of defence, protecting government data and vital defence infrastructure from serious and unseen threats. Countries are building a secure, robust, and resilient digital future with the aid of solid alliances, cutting-edge technologies, knowledgeable workers, and a proactive defence strategy.
References
- https://www.ssh.com/academy/cyber-defense-strategy-dod-perspective
- https://www.fortinet.com/resources/cyberglossary/cyber-warfare
- https://medium.com/@lynnfdsouza/the-impact-of-cyber-warfare-on-modern-military-strategies-c77cf6d1a788
- https://ccoe.dsci.in/blog/why-cybersecurity-is-critical-for-national-defense-protecting-countries-in-the-digital-age