#FactCheck: No, PM Modi Did Not Appear in Royal Attire, Image Is AI-Generated
A photograph showing Prime Minister Narendra Modi holding a trident and dressed in royal attire is being widely shared on social media. Users circulating the image are claiming that it shows PM Modi in a regal outfit.
However, a verification by the Cyber Peace Foundation’s Research Desk has found that the claim is false. The investigation established that the viral image is not authentic and has been generated using Artificial Intelligence (AI).
Claim:
On January 11, 2026, several Instagram users shared the image with captions describing it as a photograph of Prime Minister Modi in royal attire.
Links and archived versions of the posts, along with screenshots, are provided below.

Fact Check:
To verify the claim, relevant keywords such as “PM Modi holding trishul” were searched on Google. The search led to a report published by Navbharat Times on January 10, 2025, featuring photographs of Prime Minister Modi holding a trident during his visit to the Somnath Temple. In the original images, however, he is seen wearing ordinary attire, not the royal clothing shown in the viral image.

In the next step of the investigation, the original photograph was traced to the official Instagram account of BJP Gujarat, where it was posted on January 11, 2026. The post clearly identifies the image as being from Somnath Temple. Link and screenshot: https://www.instagram.com/p/DTVlb-9Da1V

A close examination of the viral image raised suspicions of digital manipulation, so the image was analysed using the AI detection tool TruthScan. The tool’s assessment indicated a 97 percent likelihood that the image was AI-generated.
Further comparison between the viral image and the original photograph revealed that all visual elements match except the clothing, confirming that the attire was digitally altered using AI tools.
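
TruthScan is a proprietary detector, so its internals are not public. As a hedged illustration of one classical image-forensics technique that analysts often use alongside such tools, error level analysis (ELA), here is a minimal Python sketch using Pillow; the file names are hypothetical, and this is not how TruthScan itself works.

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# ELA resaves a JPEG at a known quality and inspects the difference:
# regions edited after the original save often recompress differently.
# This is an illustrative heuristic, not TruthScan's actual method.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-encode the image as JPEG in memory at a fixed quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # The pixel-wise difference highlights areas with unusual error levels.
    return ImageChops.difference(original, resaved)

# Usage (hypothetical file names):
# error_level_analysis("viral_image.jpg").save("ela_map.png")
```

Bright, uneven regions in the resulting ELA map, such as a garment that stands out from the rest of the scene, are one cue that a region was altered after the photograph was first saved.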

Conclusion
The claim that Prime Minister Narendra Modi appeared in royal attire is false. The Cyber Peace Foundation’s research confirms that the viral image was created using AI by altering the clothing in an original photograph taken during PM Modi’s visit to Somnath Temple. The manipulated image was shared online to mislead users.
Related Blogs

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools such as DALL-E, Midjourney, Respeecher, and Murf, which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, identify financial services as the sector most targeted by deepfake-enabled fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: AI will allow cybercriminals to deploy malware that adapts in real time, carrying out phishing attacks and inundating targets at greater speed and with more variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can be designed to overwhelm defences with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will have to adopt AI-based cybersecurity for detection and response, since generative AI is becoming increasingly essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image- and voice-generation/manipulation technologies, enhanced authentication measures such as token-based or other hardware-backed authentication, abnormal-behaviour detection (see the sketch after this list), multi-device push notifications, and geolocation verification can strengthen prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
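
As a toy illustration of the abnormal-behaviour detection mentioned in the MFA recommendation above, the following Python sketch trains an Isolation Forest on made-up login features; the feature choices and values are illustrative assumptions, not a production design.

```python
# Toy abnormal-behaviour detection for login events with an Isolation
# Forest. Feature choices and values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per login: [hour_of_day, failed_attempts, km_from_usual_location]
normal_logins = np.array([
    [9, 0, 2], [10, 1, 5], [14, 0, 1], [18, 0, 3], [11, 0, 4],
    [9, 0, 2], [15, 1, 6], [17, 0, 2], [10, 0, 1], [13, 0, 3],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

# A 3 a.m. login with seven failures from 4,000 km away should stand out.
suspicious_login = np.array([[3, 7, 4000]])
print(model.predict(suspicious_login))  # -1 flags an anomaly, 1 is normal
```

In practice, such a detector would complement, not replace, the token-based and hardware-backed MFA measures listed above.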
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. Such attacks have the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to India’s critical infrastructure.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr

Introduction
The ongoing debate over whether AI scaling has hit a wall has been revived by the underwhelming response to OpenAI’s GPT-5. AI scaling laws, which hold that machine learning models perform better with increased training data, model parameters, and computational resources, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers suggest that further improvements in LLMs will require computational costs that are orders of magnitude larger, which the returns may not justify. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a tech issue but a profound innovation challenge for countries like India, charting their own AI course.
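
For readers unfamiliar with the term, these scaling laws are empirical fits rather than hard rules. One widely cited formulation, from the Chinchilla paper (Hoffmann et al., 2022, not among this post’s references), expresses expected loss as a function of model and data size:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022): expected loss L
% as a function of parameter count N and training-token count D.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% E is the irreducible loss; A, B, \alpha, \beta are empirically fitted
% constants. Returns diminish because each added parameter or token
% removes an ever-smaller slice of the remaining reducible loss.
```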
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to explore new approaches to training these models, for example by diversifying data types and rethinking training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation (see the sketch after this list).
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all the major developers, including OpenAI, xAI (Grok), and DeepSeek, suffer from unreliability, hallucinations, and bias, since they are primarily built by scaling large datasets and parameter counts, which have inherent limitations. India should invest in capabilities to solve these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programs and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
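
To make the compute stakes behind the efficiency recommendation concrete, a common rule of thumb from the scaling-law literature (an assumption of this sketch, not a figure from the post) estimates training compute at roughly six FLOPs per parameter per token:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute estimate: C ~ 6 * N * D FLOPs.

    The factor 6 (about 2 FLOPs per parameter per token for the forward
    pass and 4 for the backward pass) is a standard approximation from
    the scaling-law literature; it ignores attention and other overheads.
    """
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 7-billion-parameter model on 2 trillion tokens.
print(f"~{training_flops(7e9, 2e12):.2e} FLOPs")  # ~8.40e+22 FLOPs
```

Under this rule, doubling both model size and data quadruples the compute bill, which is exactly the brute-force cost curve the recommendation argues against.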
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is approaching us, and India can help shape its future. The country’s AI Mission and startup ecosystem are well-positioned to lead this shift by focusing on localised needs, efficient technologies, and inclusive growth, if implemented effectively. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html

Executive Summary:
CVE-2024-3094 is a backdoor vulnerability recently found in XZ Utils versions 5.6.0 and 5.6.1, affecting Kali Linux installations updated between March 26 and 29. It could allow a malicious actor to compromise SSHD authentication and gain unauthorized remote access to the entire system. Users who installed or updated Kali Linux during this period are advised to update their systems to safeguard against this vulnerability.
The Dangerous Backdoor
The malicious implant found in XZ Utils is especially dangerous because it functions as a remote code execution tool, giving attackers the ability to compromise affected systems. Initially, researchers believed the vulnerability enabled an authentication bypass for the OpenSSH server (SSHD) process. However, further analysis revealed it is better characterized as a remote code execution (RCE) vulnerability.
The backdoor intercepts the RSA_public_decrypt function, verifies the attacker’s payload signature against a hardcoded Ed448 key, and, if the check succeeds, executes malicious code supplied by the attacker via the system() function. This leaves no trace in SSHD logs, making the compromise difficult to detect.
Impacted Linux Distributions
The compromised versions of XZ Utils have been found in the following Linux distributions released in March 2024:
- Kali Linux (between March 26 and March 29)
- openSUSE Tumbleweed and openSUSE MicroOS (March 7 to March 28)
- Fedora 41, Fedora Rawhide, and Fedora Linux 40 beta
- Debian (testing, unstable, and experimental distributions only)
- Arch Linux container images (February 29 to March 29)
Meanwhile, distributions such as Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise, openSUSE Leap, and Debian Stable are not believed to be affected.
How Did This Happen?
The malicious code appears to have been inserted through a takeover of project maintainership rather than a technical exploit. The original maintainer of the XZ Libs project on GitHub handed over control of the repository to an account that had been contributing to various data compression-related projects for several years. It was at this point that the backdoor was implanted in the project code.
Fortunately, the Potential Disaster Was Averted
According to Igor Kuznetsov, head of Kaspersky's Global Research and Analysis Team (GReAT), CVE-2024-3094 could have become the largest-scale attack in the history of the Linux ecosystem, because it targeted SSH servers, the primary remote management tool for Linux servers on the internet.
Because the vulnerability was detected quickly and was confined to testing and rolling-release distributions, where the latest software packages are used, the damage to Linux users was minimal. So far, no cases of CVE-2024-3094 being actively exploited have been detected.
Staying Safe
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) advises users who installed or updated the affected operating systems in March to immediately roll back to XZ Utils version 5.4.6 and stay alert for any malicious activity. Changing passwords is also recommended on distributions where a compromised version of XZ Utils was installed.
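
As a quick first check, the sketch below (a minimal illustration, assuming the xz binary is on the PATH) reports whether the locally installed XZ Utils release falls in the compromised range:

```python
# Check the locally installed XZ Utils version against the compromised
# 5.6.0-5.6.1 range. Assumes the `xz` binary is on the PATH.
import re
import subprocess

def xz_version() -> str:
    out = subprocess.run(["xz", "--version"], capture_output=True, text=True)
    # The first output line typically looks like: "xz (XZ Utils) 5.4.6"
    match = re.search(r"(\d+\.\d+\.\d+)", out.stdout)
    if not match:
        raise RuntimeError("could not parse xz version")
    return match.group(1)

version = xz_version()
if version in ("5.6.0", "5.6.1"):
    print(f"xz {version} is in the compromised range; roll back to 5.4.6")
else:
    print(f"xz {version} is not a known-compromised release")
```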
YARA rules have also been released to detect systems infected via the CVE-2024-3094 backdoor.
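
For scanning with such rules, a minimal sketch using the yara-python bindings follows; the rule file name and library path are hypothetical placeholders, not the actual published artefacts.

```python
# Minimal yara-python sketch for scanning liblzma with a detection rule.
# The rule file name and library path below are hypothetical placeholders.
import yara

rules = yara.compile(filepath="xz_backdoor.yar")  # hypothetical rule file
matches = rules.match("/usr/lib/x86_64-linux-gnu/liblzma.so.5")
if matches:
    print("Possible CVE-2024-3094 backdoor detected:", matches)
else:
    print("No matches found")
```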
Conclusion
The discovery of the XZ Utils backdoor is a reminder of the need for vigilance in the open-source software ecosystem. This supply chain attack highlights the importance of strong security measures, thorough code reviews, and the regular distribution of security updates to shield against such vulnerabilities. By staying informed and taking the necessary precautions, Linux users can mitigate the potential impact of this vulnerability and keep their systems safe.
References:
- https://thehackernews.com/2024/03/urgent-secret-backdoor-found-in-xz.html
- https://www.helpnetsecurity.com/2024/03/29/cve-2024-3094-linux-backdoor/
- https://www.kali.org/blog/about-the-xz-backdoor/
- https://www.kaspersky.com/blog/cve-2024-3094-vulnerability-backdoor/50873/
- https://www.rapid7.com/blog/post/2024/04/01/etr-backdoored-xz-utils-cve-2024-3094/