Ghibli Trend: Another Ethical Conundrum Caused by AI?
Aditi Pangotra
Research Analyst, Policy & Advocacy, CyberPeace
PUBLISHED ON
Apr 10, 2025
The Ghibli trend has been in the news for the past couple of weeks, for reasons both good and bad. Nostalgia for the art form has led many to overlook what the trend means for the artists who painstakingly create such work. Generative platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading the rights of those artists. The artistic community has begun to question the value of its own craft when software can recreate it in a couple of seconds, without any thought behind the act of creation. OpenAI's update to ChatGPT makes it simple for users to generate illustrations in the style created by Hayao Miyazaki, turning anything from personal pictures to movie scenes into Ghibli-style art. These advances in AI-generated art, including Ghibli-style imagery, raise critical questions about artistic integrity, intellectual property, and data privacy.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers and enabled more people to engage with artistic expression. AI allows people to create appealing art regardless of their artistic capabilities. The ChatGPT update has, in effect, democratised art: the abilities of the user no longer matter. It makes art accessible, efficient, and open to creative experimentation for many.
Unfortunately, these developments also pose challenges to original artistry and the labour of human creators. The concern is not only that AI might replace artists, but also the potential for misuse, including the unauthorised replication of distinct styles and deepfake applications. Used ethically, AI can enhance artistic processes: it can assist with repetitive tasks, improve efficiency, and enable creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue the work of human artists, create copyright issues, and introduce data privacy risks. AI models trained on art without authorisation can also be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe that AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts, producing distorted or unethical imagery that contradicts artistic intent and cultural values, further underlining the need for human oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement. A further issue, one most users do not consider, is whether the personal content (personal pictures, in this case) they upload to AI models poses a risk to their privacy. Finally, AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art. Such a framework should also ensure that AI models are trained in a manner that is fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be established through ethical AI usage guidelines for platforms that host AI-generated art.
Conclusion: What Can Potentially Be Done?
AI is not a replacement for human effort; it exists to ease it. We need to promote a balanced approach to AI that protects the integrity of artists while continuing to foster innovation, and to strengthen copyright laws to address AI-generated content. Labelling AI content and ensuring it is disclosed as AI-generated is the first step. Furthermore, human artists whose work an AI model is trained on should be fairly compensated. There is an increasing need for global AI ethics guidelines to ensure transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industry to work collaboratively with regulators to ensure the responsible use of AI.
In the vast, cosmic-like expanse of international relations, a sphere marked by the gravitational pull of geopolitical interests, a singular issue has emerged, casting a long shadow over the fabric of Indo-Canadian diplomacy. It is a narrative spun from an intricate loom, interlacing the yarns of espionage and political machinations, shadowboxing with the transient, yet potent, specter of state-sanctioned violence. The recent controversy undulating across this geopolitical landscape owes its origins to the circulation of claims which the Indian Ministry of External Affairs (MEA) vehemently dismisses as a distorted tapestry of misinformation—a phantasmagoric fable divorced from reality.
This maelstrom of contention orbits around the alleged existence of a 'secret memo', a document reportedly dispatched with stealth from the helm of the Indian government to its consulates peppered across the vast North American continent. This mysterious communique, assuming its spectral presence within the report, was described as a directive catalyzing a 'sophisticated crackdown scheme' against specific Sikh diaspora organizations. A proclamation that MEA has repudiated with adamantine certainty, branding the report as a meticulously fabricated fiction.
The MEA Stance
The official statement from the Indian Ministry of External Affairs (MEA) emerged as a paragon of clarity cutting through the dense fog of accusations, 'We strongly assert that such reports are fake and emphatically concocted. The referenced memo is non-existent. This narrative is a chapter in the protracted saga of a disinformation campaign aimed against India.' The outlet responsible for airing this contentious story, as per the Indian authorities, has a historical penchant for circulating narratives aligned with the interests of rival intelligence agencies, particularly those associated with Pakistani strategic circles—a claim infusing yet another complex layer to the situation at hand.
The report that catapulted itself onto the stage with the force of an untamed tempest insists the 'secret memo' was decked with several names—all belonging to individuals under the hawk-like gaze of Indian intelligence.
The Plague of Disinformation
The profoundly intricate confluence of diplomacy is one that commands grace, poise, and an acute sense of balance—nations effortlessly tip-toeing around sensitivities, proffering reciprocity and an equitable stance within the grand ballroom of international affairs. Hence, when S. Jaishankar, India's Minister of External Affairs, found himself fielding inquiries on the perceived inconsistent treatment afforded to Canada compared to the US—despite similar claims emanating from both—his response was the embodiment of diplomatic discretion: 'As far as Canada is concerned, there was a glaring absence of specific evidence or inputs provided to us. The robust question of equitable treatment between two nations, where only one has furnished substantive input and the other has not, is naturally unmerited.'
The articulation from the Ministry's spokesperson, Arindam Bagchi, further solidified India's stance. He calls into question the credibility of The Intercept—the publication that initially disseminated the report—accusing it of acting as a vessel for 'invented narratives' propagated under the auspices of Pakistani intelligence interests.
Conclusion
In the grand theater of international politics, the distinction between reality and deception is frequently obscured by the heavy drapes of secrecy and diplomatic guile. The persistent denial by the Indian government of any 'secret memo' serves as a critical reminder of the blurred lines between narrative and counter-narrative in the global concert of power and persuasion. As observant spectators within the arena of world politics, we are endowed with the unenviable task of untangling the convoluted web of claims and counterclaims, hoping to uncover the enduring truths that linger therein. In this domain of authentic and imaginary tales, the only unwavering certainty is the persistent rhythm of diplomatic interplay and the subtle shadows it casts upon the international stage. The Ministry of External Affairs fact-checked the claim about the secret memo, rubbishing it as fake and fabricated, and the government has said that a deliberate disinformation campaign is being waged against India.
The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI lifecycle, from the design and coding stages through to implementation.
Recent regulatory frameworks, such as the European Union’s AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.
The OECD conceptualises this development process as the AI system lifecycle. Each stage entails distinct technical and administrative procedures, and the choices made during these stages dictate the goals and limits of an AI system. The quality and representativeness of training datasets, for instance, strongly affect how models behave after deployment.
Since this is an iterative and not a linear procedure, risks can be introduced at each stage of the AI lifecycle. New data can be retrained into different models, and systems are regularly updated once they have been deployed, to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
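Purely as an illustration, the iterative lifecycle described above can be sketched as a loop in which monitoring findings feed back into earlier stages. The stage names follow the OECD framing; the function and its structure are hypothetical, not any standard implementation:

```python
# Illustrative sketch of the OECD-style AI lifecycle as an iterative loop.
# Stage names follow the OECD framing; everything else is hypothetical.

STAGES = [
    "problem_definition",
    "data_collection_and_preparation",
    "model_development",
    "testing_and_validation",
    "deployment",
    "monitoring",
]

def run_lifecycle(max_iterations=3):
    """Walk through the lifecycle stages repeatedly; in practice,
    monitoring findings (drift, errors, unintended outputs) trigger
    the next iteration of data collection and retraining."""
    history = []
    for iteration in range(1, max_iterations + 1):
        for stage in STAGES:
            history.append((iteration, stage))
    return history

log = run_lifecycle(max_iterations=2)
print(len(log))  # 12 stage visits across two iterations
```

The point the sketch makes is structural: because the loop revisits every stage, a governance checkpoint attached only to deployment would miss risks introduced in every other pass through the cycle.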
Where AI Risks Emerge
AI risks usually emerge earlier in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, the absence of transparency in automated decision-making, and risks to fundamental rights.
AI Governance Risk Landscape: Core Risk Categories Under International Frameworks
Risk categories jointly identified by the EU AI Act and UNESCO Recommendation on the Ethics of Artificial Intelligence
Outlining the risks throughout the AI lifecycle helps identify the areas where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after development, when generative AI systems are deployed at scale on digital platforms.
AI System Lifecycle: Key Risks at Each Stage
Risks identified per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to help mitigate AI risks across the lifecycle. These tools are meant to ensure that AI technologies meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk assessments, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union’s Artificial Intelligence Act (AI Act) is one of the most comprehensive governance regimes, introducing a risk-based regulatory strategy. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. These requirements introduce regulatory checkpoints across the lifecycle of AI systems.
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.
Governance Overlay: Policy Interventions Across the AI Lifecycle
Regulatory tools mapped at each stage of AI development per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Several policy tools are directed at risks that occur in the pre-deployment stages. For example, algorithmic impact assessments have been applied in various jurisdictions to gauge the possible societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, including dataset transparency requirements and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
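To make the idea of dataset documentation concrete, here is a minimal, hypothetical sketch of a machine-readable "datasheet" record for a training dataset. The field names and the example dataset are illustrative assumptions, not any standard schema:

```python
# Hypothetical minimal dataset-documentation record; field names are
# illustrative assumptions, not a standard such as model cards.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    provenance: str                 # where the data came from
    collection_period: str          # when it was collected
    known_limitations: list = field(default_factory=list)
    contains_personal_data: bool = False

    def review_flags(self):
        """Return documentation gaps a regulator or auditor might query."""
        issues = []
        if not self.known_limitations:
            issues.append("no limitations documented")
        if self.contains_personal_data:
            issues.append("personal data present: privacy review needed")
        return issues

# Example: a fictional HR dataset of the kind discussed in the text.
sheet = DatasetDatasheet(
    name="hiring-history-2015-2020",
    provenance="internal HR records",
    collection_period="2015-2020",
    contains_personal_data=True,
)
print(sheet.review_flags())
```

Even this toy record shows the governance value of documentation: an empty limitations field or a personal-data flag surfaces exactly the bias and privacy questions the pre-deployment tools above are designed to catch.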
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulation. Many governance processes are activated only after AI systems are classified as “high risk” or after they are implemented in the real world. But the most serious sources of harm have their roots in earlier stages of the development process.
For example, prejudiced or unbalanced training data is almost inevitably a source of discriminatory outcomes in automated decision systems. When such models are applied in areas like staffing, credit rating, or the provision of public services, these biases can quickly spread to large populations and undermine fundamental rights. Similarly, a lack of transparency in model design can prevent regulators and affected individuals from understanding how decisions are made. This reflects a broader timing gap in AI governance: risks originate during design and development, but regulatory intervention typically occurs only after deployment.
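The discriminatory outcomes described above can be quantified with simple fairness metrics. A toy sketch of one common check, the demographic parity difference between two groups' selection rates, using made-up data (the groups, data, and any acceptable threshold are illustrative assumptions):

```python
# Toy fairness check: demographic parity difference between two groups.
# Data and group labels are fabricated for illustration only.

def selection_rate(outcomes):
    """Fraction of favourable decisions (1 = favourable, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# e.g. shortlisting decisions from a hypothetical hiring model
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # 30% selected

gap = demographic_parity_difference(group_a, group_b)
print(round(gap, 2))  # 0.4
```

A gap this large between groups would be a red flag for exactly the kind of pre-deployment audit the timing-gap argument calls for; real audits use richer metrics, but the principle is the same.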
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched within AI systems even before they are deployed, owing to biased datasets, incomplete reporting of training data, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most instances of algorithmic discrimination are associated with training data that under-represents certain population groups or encodes historical bias. Since machine learning models optimise for patterns that exist in their datasets, these biases can be carried through the whole lifecycle and reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: Whereas the various regulatory instruments aim at deployment and monitoring, fewer instruments systematically tackle the risks that are posed by the previous design and development phases.
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory frameworks should demand systematic risk evaluation at the beginning of AI development, especially at the problem-design and dataset-selection phases. This would help detect potentially harmful applications before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors discover potential sources of bias or privacy threats.
3. Expand independent algorithmic auditing: AI systems should undergo regular third-party audits assessing fairness, robustness, and security weaknesses. Such auditing mechanisms are especially relevant for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored continuously after deployment to identify model drift, unforeseen consequences, or abuse. Reporting mechanisms can give regulators visibility into emerging risks and allow governance frameworks to be adjusted accordingly.
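The continuous monitoring envisaged in the last recommendation can be as simple as comparing live input statistics against the training-time baseline. A toy, illustrative sketch of such a drift check (the threshold, data, and alerting rule are assumptions, not any regulatory standard):

```python
# Toy post-deployment drift check: flag an alert when a live feature's
# mean shifts too far from the training baseline, measured in baseline
# standard deviations. Threshold and data are illustrative assumptions.
import statistics

def drift_alert(baseline, live, threshold=0.25):
    """Return (alerted, shift): shift is the live-vs-baseline mean gap
    expressed in baseline standard deviations."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - base_mean) / base_sd
    return shift > threshold, round(shift, 2)

# Fabricated example: a feature whose live distribution has moved.
baseline = [10, 11, 9, 10, 12, 11, 10, 9, 10, 11]
live = [13, 14, 12, 13, 15, 14, 13, 12, 13, 14]

alerted, score = drift_alert(baseline, live)
print(alerted, score)
```

In a real monitoring pipeline this check would run on a schedule and feed the reporting mechanisms described above, so that a sustained alert triggers retraining or regulatory notification rather than silent degradation.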
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.
Digitalization has been a transformative force in India, which now ranks second in the world in terms of active internet users. With this adoption of digital technology, the country is becoming a digitally empowered society and a knowledge-based economy. However, the number of cybercrimes in the country has also spiked recently, with cybercriminals using sophisticated cyber attacks and manipulative techniques to lure innocent individuals and businesses.
As per recent reports, over 740,000 cybercrime cases were reported to the I4C in the first four months of 2024, which raises serious concerns about the growing scale of cybercrime in the country. Recently, Prime Minister Modi, in his Mann Ki Baat address, cautioned the public about a rising cyber scam known as ‘digital arrest’, highlighted the seriousness of the issue, and urged people to stay aware and alert in order to counter such scams. The government has been keen to reduce and combat cybercrime by introducing new measures and strengthening the regulatory landscape governing cyberspace in India.
Indian Cyber Crime Coordination Centre
The Indian Cybercrime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. I4C runs the ‘National Cyber Crime Reporting Portal’ (https://cybercrime.gov.in) and the 1930 Cyber Crime Helpline. Recently, at the I4C Foundation Day celebration, Union Home Minister Amit Shah launched the Cyber Fraud Mitigation Centre (CFMC), the Samanvay platform (Joint Cybercrime Investigation Facilitation System), the 'Cyber Commandos' program, and the Online Suspect Registry as efforts to combat cybercrime, build cyber resilience and awareness, and strengthen the capabilities of law enforcement agencies.
Regulatory Landscape Governing Cyber Crimes
The Information Technology Act, 2000 (IT Act) and the rules made thereunder, the Intermediary Guidelines, the Digital Personal Data Protection Act, 2023, and the Bharatiya Nyaya Sanhita, 2023 are the major legislations governing cyber law in India.
CyberPeace Recommendations
There has been an alarming uptick in cybercrime in the country, highlighting the need for proactive approaches to counter these emerging threats. The government should prioritise its efforts by introducing robust policies and technical measures to reduce cybercrime. Law enforcement agencies' capabilities must be strengthened with advanced technologies, especially given the increasingly sophisticated tactics used by cybercriminals.
Netizens must be aware of the manipulative tactics cybercriminals use to target them. Social media companies must also implement robust measures on their platforms to counter and prevent cybercrime. Coordinated approaches by all relevant authorities, including law enforcement, cybersecurity agencies, and regulatory bodies, along with increased awareness and proactive engagement by netizens, can significantly reduce cyber threats and online criminal activities.