#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media says that a 3D model of Chanakya, supposedly created by "Magadha DS University", bears a striking resemblance to MS Dhoni. Fact-checking reveals that the image is in fact a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution called Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya, created by Magadha DS University. However, viewers have noted a striking resemblance to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image, which led us to the ArtStation portfolio of a freelance character artist named Ankur Khatri. The viral image appears there under the title “MS Dhoni likeness study”, alongside several other character models in his portfolio.



Subsequently, we searched for the university named in the claim, Magadha DS University, but found no institution by that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. A further search turned up no 3D model attributed to Magadh University. We then examined the freelance character artist's profile and found that he runs a dedicated Instagram channel, where he has posted a detailed video of the creative process behind the MS Dhoni character model.
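
For readers curious how such an image match can be checked programmatically, the following is a minimal sketch using the Python Pillow and ImageHash libraries to compare perceptual hashes of two images. The file names are hypothetical placeholders, and in practice a reverse image search engine remains the primary verification tool.

```python
# Minimal sketch: comparing a viral image against a suspected original
# using perceptual hashing. File names are hypothetical placeholders.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post.jpg"))            # image from the social media post
original = imagehash.phash(Image.open("artstation_render.jpg"))  # image from the artist's portfolio

# Hamming distance between the two perceptual hashes; small values
# indicate the images are (near-)identical copies of the same render.
distance = viral - original
print(f"Perceptual hash distance: {distance}")
if distance <= 5:  # threshold chosen for illustration only
    print("Images are very likely the same render.")
```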

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a 3D study of cricketer MS Dhoni, created by the artist Ankur Khatri and not by any university called Magadha DS University.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. While there is a similarly named Magadh University in Bodh Gaya, Bihar, we found no evidence linking it to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Related Blogs

Introduction
Phishing-as-a-Service (PhaaS) platform 'LabHost' has been a significant player in cybercrime targeting North American banks, particularly financial institutions in Canada. LabHost offers turnkey phishing kits, infrastructure for hosting pages, email content generation, and campaign overview services to cybercriminals in exchange for a monthly subscription. The platform's popularity surged after it introduced custom phishing kits for Canadian banks in the first half of 2023. Fortra reports that LabHost has overtaken Frappo, cybercriminals' previous favorite PhaaS platform, and is now the primary driving force behind most phishing attacks targeting Canadian bank customers.
In the digital realm, the barriers to entry for nefarious activities are crumbling, and the tools of the trade are being packaged and sold with the same customer service one might expect from a legitimate software company. This is the world of Phishing-as-a-Service (PhaaS), and at the forefront of this ominous trend is LabHost, a platform that has been instrumental in escalating attacks on North American banks, with a particular focus on Canadian financial institutions.
LabHost is not a newcomer to the cybercrime scene, but its ascent to infamy was catalyzed by the introduction of custom phishing kits tailored for Canadian banks in the first half of 2023. The platform operates on a subscription model, offering turnkey solutions that include phishing kits, infrastructure for hosting malicious pages, email content generation, and campaign overview services. For a monthly fee, cybercriminals are handed the keys to a kingdom of deception and theft.
Emergence of LabHost
The rise of LabHost has been meticulously chronicled by various cybersecurity firms, which report that LabHost has dethroned the previously favored PhaaS platform, Frappo. LabHost has become the primary driving force behind the majority of phishing attacks targeting customers of Canadian banks. Despite suffering a disruptive outage in early October 2023, LabHost has rebounded with vigor, orchestrating several hundred attacks per month.
These investigations into LabHost's operations reveal a tiered membership system: Standard, Premium, and World, with monthly fees of $179, $249, and $300, respectively. Each tier offers an escalating scope of targets, from Canadian banks up to 70 institutions worldwide outside North America. The phishing templates provided by LabHost are not limited to financial entities; they also cover online services such as Spotify, postal delivery services such as DHL, and regional telecommunication providers.
LabRat
The true ingenuity of LabHost lies in its integration with 'LabRat,' a real-time phishing management tool that enables cybercriminals to monitor and control an active phishing attack. This tool is a linchpin in man-in-the-middle style attacks, designed to capture two-factor authentication codes, validate credentials, and bypass additional security measures. In essence, LabRat is the puppeteer's strings, allowing the phisher to manipulate the attack with precision and evade the safeguards that are the bulwarks of our digital fortresses.
LabSend
In the aftermath of its October disruption, LabHost unveiled 'LabSend,' an SMS spamming tool that embeds links to LabHost phishing pages in text messages. This tool orchestrates a symphony of automated smishing campaigns, randomizing portions of text messages to slip past the vigilant eyes of spam detection systems. Once the SMS lure is cast, LabSend responds to victims with customizable message templates, a Machiavellian touch to an already insidious scheme.
The Proliferation of PhaaS
The proliferation of PhaaS platforms like LabHost, 'Greatness', and 'RobinBanks' has democratized cybercrime, lowering the threshold for entry and enabling even the most unskilled hackers to launch sophisticated attacks. These platforms are catalysts for an exponential increase in the pool of threat actors, thereby magnifying the cybersecurity threat on a global scale.
The ease with which these services can be accessed and utilized belies the complexity and skill traditionally required to execute successful phishing campaigns. Stephanie Carruthers, who leads an IBM X-Force phishing research project, notes that crafting a single phishing email can consume upwards of 16 hours, not accounting for the time and resources needed to establish the infrastructure for sending the email and harvesting credentials.
PhaaS platforms like LabHost have commoditized this process, offering a buffet of malevolent tools that can be customized and deployed with a few clicks. The implications are stark: the security measures that businesses and individuals have come to rely on, such as multi-factor authentication (MFA), are no longer impenetrable. PhaaS platforms have engineered ways to circumvent these defenses, rendering them vulnerable to exploitation.
Emerging Cyber Defense
In the face of this escalating threat, a multi-faceted defense strategy is imperative. Cybersecurity solutions like SpamTitan employ advanced AI and machine learning to identify and block phishing threats, while end-user training platforms like SafeTitan provide ongoing education to help individuals recognize and respond to phishing attempts. However, with phishing kits now capable of bypassing MFA, it is clear that more robust solutions, such as phishing-resistant MFA based on FIDO/WebAuthn authentication or Public Key Infrastructure (PKI), are necessary to thwart these advanced attacks.
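
To illustrate why FIDO/WebAuthn authentication resists the man-in-the-middle tactics described above, here is a simplified Python sketch, with hypothetical values, of the origin check a relying party performs on a WebAuthn assertion: the browser embeds the real origin in the signed client data, so an assertion harvested on a look-alike phishing domain will not verify against the legitimate site. Signature and counter checks, which a full verification also requires, are omitted.

```python
import base64
import json

# Hypothetical relying-party configuration for illustration only.
EXPECTED_ORIGIN = "https://bank.example"

def verify_client_data(client_data_json_b64: str, expected_challenge: str) -> bool:
    """Check the origin and challenge recorded by the browser in a
    WebAuthn assertion's clientDataJSON (simplified sketch)."""
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN        # fails for look-alike phishing domains
        and client_data.get("challenge") == expected_challenge  # fails for replayed assertions
    )

# A proxy kit could relay an assertion made on "https://bank-example.phish",
# but the browser records that origin in clientDataJSON, so the check above
# rejects it, which is what makes this form of MFA phishing-resistant.
```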
Conclusion
The emergence of PhaaS platforms represents a significant shift in the landscape of cybercrime, one that requires a vigilant and sophisticated response. As we navigate this treacherous terrain, it is incumbent upon us to fortify our defenses, educate our users, and remain ever-watchful of the evolving tactics of cyber adversaries.
References
- https://www-bleepingcomputer-com.cdn.ampproject.org/c/s/www.bleepingcomputer.com/news/security/labhost-cybercrime-service-lets-anyone-phish-canadian-bank-users/amp/
- https://www.techtimes.com/articles/302130/20240228/phishing-platform-labhost-allows-cybercriminals-target-banks-canada.htm
- https://www.spamtitan.com/blog/phishing-as-a-service-threat/
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms
Introduction
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The mission aims to strengthen the Indian AI innovation ecosystem by democratizing computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects, and bolstering ethical AI. The mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components, such as IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI, over the next 5 years.
This financial outlay is intended to be fulfilled through a public-private partnership model to ensure a structured implementation of the IndiaAI Mission. The main objective is to create and nurture an ecosystem for India’s AI innovation. This mission is intended to act as a catalyst for shaping the future of AI for India and the world. AI has the potential to become an active enabler of the digital economy, and the Indian government aims to harness its full potential to benefit its citizens and drive the growth of its economy.
Key Objectives of India's AI Mission
● With the advancements in data collection, processing and computational power, intelligent systems can be deployed in varied tasks and decision-making to enable better connectivity and enhance productivity.
● India’s AI Mission will concentrate on benefiting India and addressing societal needs in primary areas of healthcare, education, agriculture, smart cities and infrastructure, including smart mobility and transportation.
● This mission will work with extensive academia-industry interactions to ensure the development of core research capability at the national level. This initiative will involve international collaborations and efforts to advance technological frontiers by generating new knowledge and developing and implementing innovative applications.
The strategies developed for implementing the IndiaAI Mission are via public-private partnerships, skilling initiatives, and AI policy and regulation. An example of the work towards the public-private partnership is the pre-bid meeting that the IT Ministry hosted on 29th August 2024, which saw industrial participation from Nvidia, Intel, AMD, Qualcomm, Microsoft Azure, AWS, Google Cloud and Palo Alto Networks.
Components of IndiaAI Mission
The IndiaAI Compute Capacity: The IndiaAI Compute pillar will build a high-end scalable AI computing ecosystem to cater to India's rapidly expanding AI start-ups and research ecosystem. The ecosystem will comprise AI compute infrastructure of 10,000 or more GPUs, built through public-private partnerships. An AI marketplace will offer AI as a service and pre-trained models to AI innovators.
The IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors. The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI innovation.
The IndiaAI Future Skills pillar will mitigate barriers to entry into AI programs and increase AI courses in undergraduate, master-level, and Ph.D. programs. Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.
The IndiaAI Startup Financing pillar will support and accelerate deep-tech AI startups, providing streamlined access to funding for futuristic AI projects.
The Safe & Trusted AI pillar, recognising the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI, will enable the implementation of responsible AI projects and the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks.
CyberPeace Considerations for the IndiaAI Mission
● Data privacy and security are paramount as emerging privacy instruments aim to ensure ethical AI use. Addressing bias and fairness in AI remains a significant challenge, especially with poor-quality or tampered datasets that can lead to flawed decision-making, posing risks to fairness, privacy, and security.
● Geopolitical tensions and export control regulations restrict access to cutting-edge AI technologies and critical hardware, delaying progress and impacting data security. In India, where multilingualism and regional diversity are key characteristics, the unavailability of large, clean, and labeled datasets in Indic languages hampers the development of fair and robust AI models suited to the local context.
● Infrastructure and accessibility pose additional hurdles in India’s AI development. The country faces challenges in building computing capacity, with delays in procuring essential hardware, such as GPUs like Nvidia’s A100 chip, hindering businesses, particularly smaller firms. AI development relies heavily on robust cloud computing infrastructure, which remains in its infancy in India. While initiatives like AIRAWAT signal progress, significant gaps persist in scaling AI infrastructure. Furthermore, the scarcity of skilled AI professionals is a pressing concern, alongside the high costs of implementing AI in industries like manufacturing. Finally, the growing computational demands of AI lead to increased energy consumption and environmental impact, raising concerns about balancing AI growth with sustainable practices.
Conclusion
We advocate for ethical and responsible AI development and adoption to ensure ethical usage, safeguard privacy, and promote transparency. By setting clear guidelines and standards, the nation would be able to harness AI's potential while mitigating risks and fostering trust. The IndiaAI Mission will propel innovation, build domestic capacities, create highly skilled employment opportunities, and demonstrate how transformative technology can be used for social good and to enhance global competitiveness.
References
● https://pib.gov.in/PressReleasePage.aspx?PRID=2012375

A Foray into the Digital Labyrinth
In our digital age, the silhouette of truth is often obfuscated by a fog of technological prowess and cunning deception. With each passing moment, the digital expanse sprawls wider, and within it, synthetic media, known most infamously as 'deepfakes', emerge like phantoms from the machine. These adept forgeries, melding authenticity with fabrication, represent a new frontier in the malleable narrative of understood reality. Grappling with the specter of such virtual deceit, social media behemoths Facebook and YouTube have embarked on a prodigious quest. Their mission? To formulate robust bulwarks around the sanctity of fact and fiction, all the while fostering seamless communication across channels that billions consider an inextricable part of their daily lives.
In an exploration of this digital fortress besieged by illusion, we unpeel the layers of strategy that Facebook and YouTube have unfurled in their bid to stymie the proliferation of these insidious technical marvels. Though each platform approaches the issue through markedly different prisms, a shared undercurrent of necessity and urgency harmonizes their efforts.
The Detailing of Facebook's Strategy
Facebook's encampment against these modern-day chimaeras teems with algorithmic sentinels and human overseers alike—a union of steel and soul. The company’s layer upon layer of sophisticated artificial intelligence is designed to scrupulously survey, identify, and flag potential deepfake content with a precision that borders on the prophetic. Employing advanced AI systems, Facebook endeavours to preempt the chaos sown by manipulated media by detecting even the slightest signs of digital tampering.
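
As a rough, hypothetical sketch of what automated flagging might look like, the snippet below samples frames from a video and routes it to human review if any frame scores above a threshold. The detector object and threshold are illustrative placeholders only; Facebook's actual detection systems are proprietary and far more sophisticated.

```python
import cv2  # OpenCV, used here only for frame extraction

def flag_possible_deepfake(video_path: str, detector, threshold: float = 0.8,
                           sample_every_n: int = 30) -> bool:
    """Sample frames from a video and flag it for human review if any
    sampled frame scores above the threshold. `detector` is a hypothetical
    model exposing a .score(frame) -> float method."""
    capture = cv2.VideoCapture(video_path)
    frame_index, flagged = 0, False
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % sample_every_n == 0 and detector.score(frame) >= threshold:
            flagged = True
            break
        frame_index += 1
    capture.release()
    return flagged  # flagged videos would be routed to human reviewers
```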
However, in an expression of profound acumen, Facebook also serves as a reminder of AI's fallibility by entwining human discernment into its fabric. Each flagged video wages its battle for existence within the realm of these custodians of reality, individuals entrusted with the hefty responsibility of parsing truth from technologically enabled fiction.
Facebook does not rest on the laurels of established defense mechanisms. The platform is in a perpetual state of flux, with policies and AI models adapting to the serpentine nature of the digital threat landscape. By fostering its cyclical metamorphosis, Facebook not only sharpens its detection tools but also weaves a more resilient protective web, one capable of absorbing the shockwaves of an evolving battlefield.
YouTube’s Overture of Transparency and the Exposition of AI
Turning to the amphitheatre of YouTube, the stage is set for an overt commitment to candour. Against the stark backdrop of deepfake dilemmas, YouTube demands the unveiling of the strings that guide the puppets, insisting on full disclosure whenever AI's invisible hands sculpt the content that engages its diverse viewership.
YouTube's doctrine is straightforward: creators must lift the curtains and reveal any artificial manipulation's role behind the scenes. With clarity as its vanguard, this requirement is not just procedural but an ethical invocation to showcase veracity—a beacon to guide viewers through the murky waters of potential deceit.
The iron fist within the velvet glove of YouTube's policy manifests through a graded punitive protocol. Should a creator falter in disclosing the machine's influence, repercussions follow, ensuring that the ecosystem remains vigilant against hidden manipulation.
But YouTube's policy is one that distinguishes between malevolence and benign use. Artistic endeavours, satirical commentary, and other legitimate expositions are spared the policy's wrath, provided they adhere to the overarching principle of transparency.
The Symbiosis of Technology and Policy in a Morphing Domain
YouTube's commitment to refining the coordination between human insight and computerized examination is unwavering. As AI's role in both the generation and moderation of content deepens, YouTube, like a skilled cartographer, must continually redraw its policies, traversing this ever-mutating landscape with a proactive stance.
In a Comparative Light: Tracing the Convergence of Giants
Although Facebook and YouTube choreograph their steps to different rhythms, together they compose an intricate dance aimed at nurturing trust and authenticity. Facebook leans into the proactive might of its AI algorithms, reinforced by continual updates and human intervention, while YouTube wields the virtue of transparency as its sword, cutting through masquerades and empowering its users to partake in storylines that are continually rewritten.
Together on the Stage of Our Digital Epoch
The sum of Facebook and YouTube's policies is integral to the pastiche of our digital experience, a multifarious quilt shielding the sanctum of factuality from the interloping specters of deception. As humanity treads the line between the veracious and the fantastic, these platforms stand as vigilant sentinels, guiding us in our pursuit of an age-old treasure within our novel digital bazaar: the treasure of truth. In this labyrinthine quest, it is not merely about unmasking deceivers but nurturing a wisdom that appreciates the shimmering possibilities, and inherent risks, of our evolving connection with the machine.
Conclusion
The struggle against deepfakes is a complex, many-headed challenge that will necessitate a united front spanning technologists, lawmakers, and the public. In this digital epoch, where the veneer of authenticity is perilously thin, the valiant endeavours of these tech goliaths serve as a lighthouse in a storm-tossed sea. These efforts echo the importance of evergreen vigilance in discerning truth from artfully crafted deception.
References
- https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
- https://indianexpress.com/article/technology/artificial-intelligence/google-sheds-light-on-how-its-fighting-deep-fakes-and-ai-generated-misinformation-in-india-9047211/
- https://techcrunch.com/2023/11/14/youtube-adapts-its-policies-for-the-coming-surge-of-ai-videos/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/youtube-twitter-hunt-down-deepfakes