#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media alleges that a 3D model of Chanakya, supposedly created by "Magadha DS University", matches MS Dhoni. However, fact-checking reveals that the image is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution called Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya, created by Magadha DS University. However, viewers have noted a striking resemblance to Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there, titled "MS Dhoni likeness study". His portfolio also features several other character models.



Subsequently, we searched for the university named in the claim, "Magadha DS University", but found no university by that name. The closest match is Magadh University, located in Bodh Gaya, Bihar. We searched the internet for any such model made by Magadh University but found nothing. We then analysed the freelance character artist's profile and found a dedicated Instagram account where he had posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a digital recreation of cricketer MS Dhoni, created by artist Ankur Khatri, not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a "Magadha DS University" exists. While Magadh University in Bodh Gaya, Bihar, has a similar name, we found no evidence linking it to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Related Blogs
Introduction
The AI Action Summit is a global forum that brings together world leaders, policymakers, technology experts, and industry representatives to discuss AI governance, ethics, and AI's role in society. This year, the week-long Paris AI Action Summit officially culminated on the 11th of February, 2025, and was co-chaired by Indian Prime Minister Narendra Modi and French President Emmanuel Macron. Alongside the summit, the Indian delegation actively engaged in the 2nd India-France AI Policy Roundtable, an official side event of the summit, and the 14th India-France CEOs Forum, where discussions covered diverse sectors including defence, aerospace, and technology.
Prime Minister Modi’s Address
During the AI Action Summit in Paris, Prime Minister Narendra Modi drew attention to the revolutionary effect of AI on politics, the economy, security, and society. Stressing the need for international cooperation, he advocated strong governance frameworks to combat AI-based risks and thereby build public confidence in new technologies. He also acknowledged the need for efforts on cybersecurity issues such as deepfakes and disinformation.
Democratising AI and sharing its benefits, particularly with the Global South, not only aligns with the Sustainable Development Goals (SDGs) but also affirms India's commitment to sharing expertise and best practices. He also highlighted India's remarkable feat of creating a Digital Public Infrastructure that serves a population of 1.4 billion through open and accessible technology.
Among the key announcements, India revealed plans to create its own Large Language Model (LLM) reflecting the country's linguistic diversity, strengthening its AI aspirations. Further, India will host the next AI Action Summit, reaffirming its position in international AI leadership. The Prime Minister also welcomed France's initiatives, such as the launch of the "AI Foundation" and the "Council for Sustainable AI", initiated by President Emmanuel Macron. He emphasised the need to expand the Global Partnership on AI and make it more representative and inclusive, so that Global South voices are genuinely incorporated into AI innovation and governance.
Other Perspectives
Although 58 countries, including India, France, and China, signed the international agreement on a more open, inclusive, sustainable, and ethical approach to AI development, the UK and the US declined to sign at the summit, citing concerns over global governance and national security. The UK pointed to insufficient detail on the establishment of global AI governance and on AI's effect on national security, while the US expressed reservations about overly broad AI regulations that could hamper a transformative industry. Meanwhile, the US is pressing ahead with "Stargate", its $500 billion AI infrastructure project, alongside OpenAI, SoftBank, and Oracle.
CyberPeace Insights
The Summit gained added significance against the backdrop of the release of platforms such as DeepSeek R1, China's AI assistant comparable to OpenAI's ChatGPT. On its release, it became the top-rated free application on Apple's App Store and sent technology stocks tumbling. Investors worldwide took note that the model was reportedly built for roughly $5 million, far less than comparable efforts by other AI companies (especially given the restrictions imposed by chip export controls on China). This breakthrough challenges the conventional notion that massive funding is a prerequisite for innovation, offering hope for India's burgeoning AI ecosystem. With the IndiaAI Mission and fewer geopolitical restrictions, India stands at a pivotal moment to drive responsible AI advancements.
References:
- https://www.mea.gov.in/press-releases.htm?dtl/39023/Prime_Minister_cochairs_AI_Action_Summit_in_Paris_February_11_2025
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-stargate-trumps-500-billion-ai-project-9793165/
- https://pib.gov.in/PressReleasePage.aspx?PRID=2102056
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101947
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101896
- https://www.timesnownews.com/technology-science/uk-and-us-decline-to-sign-global-ai-agreement-at-paris-ai-action-summit-here-is-why-article-118164497
- https://www.thehindu.com/sci-tech/technology/india-57-others-sign-paris-joint-statement-on-inclusive-sustainable-ai/article69207937.ece

A video purportedly showing Prime Minister Narendra Modi delivering a politically charged warning during a public address has been widely circulated on social media. The clip is being shared with claims that the Prime Minister spoke about the “saffronisation” of the Indian Army and issued a stern message to Bangladesh by invoking India’s role in the 1971 war.
A detailed verification by the CyberPeace Foundation found these claims to be misleading. The investigation revealed that the viral video has been digitally altered, and the statements attributed to the Prime Minister do not appear in the original speech. The misleading narrative appears to have been created by inserting manipulated audio into authentic video footage.
Claim
An X user, “@Pakpulse247,” shared a video on December 26 claiming that Prime Minister Narendra Modi, while addressing a public gathering, made remarks about the “saffronisation” of the Indian Army and issued a warning to Bangladesh by invoking India’s role in the 1971 war.
The post’s caption alleged that the Prime Minister made these statements during the inauguration of the Rashtra Prerna Sthal in Lucknow, suggesting that Bangladesh should not expect support from Pakistan in the event of an Indian offensive and should remember India’s contribution during the 1971 conflict.
The link to the post is provided below, along with a screenshot of the viral claim.

Fact Check:
During the verification process, the Desk carried out a targeted keyword search and located the complete, original version of the video on the official YouTube channel of the Bharatiya Janata Party, uploaded on December 26, 2025. The video description confirmed that Prime Minister Narendra Modi was addressing the gathering at the inauguration of the Rashtra Prerna Sthal in Lucknow.
A comparison of the visuals showed that the venue, backdrop, and the Prime Minister’s attire were identical to those seen in the viral clip circulating on social media. However, after carefully reviewing the full speech, the Desk found no reference to Pakistan, Bangladesh, or the claims being attributed to him in the viral post.
The link to the original video is provided below, along with a relevant screenshot.
https://www.youtube.com/watch?v=L9rbzU0m30o

Upon further examination of the search results, the Desk located the official English transcript of Prime Minister Narendra Modi’s speech on the PM India website. A thorough review of the complete transcript revealed no mention of the statements attributed to him in the viral social media post.
The link to the official transcription is provided below.
Building on these findings, the Desk extracted the audio track from the viral video and analysed it using Resemble AI, an audio-detection tool. The analysis flagged the audio as fake, indicating that it had been digitally manipulated.

Conclusion
The CyberPeace Foundation’s research clearly establishes that the viral video claiming Prime Minister Narendra Modi made remarks about the saffronisation of the Indian Army and issued warnings to Bangladesh is false and misleading. The full original video and official transcript of the speech contain no such references to Pakistan, Bangladesh, or the 1971 war. Furthermore, audio analysis using AI-detection tools confirms that the voice in the viral clip has been digitally manipulated.

Introduction
In the era of digital trust and technological innovation, artificial intelligence has added a new dimension to how people communicate and how content is created and consumed. Like any powerful tool, however, AI can be misused with serious consequences. One recent dark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. Netting millions of reais in revenue, the scheme starkly illustrates how AI-generated content has been turned to criminal ends.
Scam in Motion
Brazil's federal police have stated that the scheme has been in circulation since 2024, using AI-generated video and images to make the ads appear genuine. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals scaled up their approach by relying on small losses accumulated across many victims, a tactic investigators dubbed "statistical immunity". Because each victim lost only a few dollars, most never bothered to file a complaint, allowing the fraud to continue unchecked. Over time, authorities estimated that the group had gathered over 20 million reais ($3.9 million) through this elaborate con.
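To make the scale of this "statistical immunity" tactic concrete, a quick back-of-the-envelope calculation, using only the figures reported above and treating 100 reais as an assumed upper bound per payment, shows how small individual losses still imply a very large victim pool:

```python
# Back-of-the-envelope illustration of the "statistical immunity" economics.
# Figures from the article: payments mostly under 100 reais (~$19) and an
# estimated total haul of 20 million reais (~$3.9 million).

TOTAL_HAUL_REAIS = 20_000_000   # estimated total collected by the group
MAX_PAYMENT_REAIS = 100         # assumed upper bound per victim payment

# If each payment is at most 100 reais, reaching the total haul implies
# at least this many separate payments:
min_payments = TOTAL_HAUL_REAIS // MAX_PAYMENT_REAIS

print(f"At most {MAX_PAYMENT_REAIS} reais per payment implies "
      f"at least {min_payments:,} separate victim payments.")
# → at least 200,000 separate victim payments
```

The arithmetic explains why the tactic works: spread across at least two hundred thousand payments, each individual loss is too small to motivate a police report, even though the aggregate haul is enormous.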
The scam came to light when a victim reported that an Instagram advertisement featuring a deepfake video of Gisele Bündchen was false. The well-produced video showed the supermodel apparently recommending a skincare company. Further investigation uncovered a network of deceptive social media pages, payment gateways, and money-laundering channels spread across five Brazilian states.
The Role of AI and Deepfakes in Modern Fraud
This is one of the first large-scale cases in Brazil in which AI-generated deepfakes were used to perpetrate financial fraud. Deepfake technology, powered by machine learning algorithms, can realistically mimic human appearance and speech, and it has become increasingly accessible and sophisticated. What once required specialist expertise and computing resources now takes only an online tool or app.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar and trusted face, a celebrity known for integrity and success. The human brain is wired to trust certain visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing threat of AI-enabled misinformation: from financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms could be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment could go a long way toward shaping platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people" and claims to use advanced detection mechanisms, trained review teams, and user tools for reporting violations. The persistence of such scams, however, shows that enforcement still lags the pace and scale of AI-based deception.
Why These Scams Succeed
There are many reasons for the success of these AI-powered scams.
- Trust Through Familiarity: People tend to believe content presented by a known and trusted individual.
- Micro-Fraud: Keeping individual losses small discourages victims from filing complaints.
- Speed of Content Creation: Using AI tools, criminals generate new ads faster than platforms can review and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networks, compounding the problem.
- Lack of Public Awareness: Most users still cannot recognise manipulated media, especially high-quality deepfakes.
Wider Implications on Cybersecurity and Society
The Brazilian case is but a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities used in corporate fraud, the possibilities for abuse are extensive.
Moreover, as cybercriminals adopt generative AI, law enforcement faces obstacles in attribution, evidence validation, and digital forensics. Distinguishing authentic from manipulated content now demands forensic AI tools, while attackers deploy ever-improving generative models in response, fuelling a technological arms race between the two sides.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team has urged the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that seem "too good to be true" and double-check URLs for authenticity before sharing any personal information.
At the individual level, a few simple practices go a long way toward reducing the risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
Collaboration will be the way ahead for governments and technology companies. Investing in AI-based detection systems, cooperating on international law enforcement, and building capacity for digital literacy programs will enable us to stem this rising tide of synthetic media scams.
Conclusion
The deepfake case in Brazil involving Gisele Bündchen serves as a clarion call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In the new digital frontier society is now entering, the line between authenticity and manipulation grows thinner by the day.
Maintaining public safety in this environment will certainly require strong cybersecurity measures, but it will demand equal measures of vigilance, awareness, and ethical responsibility. Deepfakes are not only a technological problem but a societal one, calling for global cooperation, media literacy, and accountability at every level of the digital ecosystem.