# Fact Check: Misleading Kerala Newspaper Advertisement Claiming a Ban on Paper Currency
Executive Summary:
Recently, our team came across a widely circulated post on X (formerly Twitter) claiming that the Indian government would abolish paper currency from February 1 and transition entirely to digital money. The post, designed to resemble an official government notice, cited the absence of advertisements in Kerala newspapers as supposed evidence, an assertion that lacked any substantive basis.

Claim:
The Indian government will ban paper currency from February 1, 2025, and adopt digital money as the sole legal tender to fight black money.

Fact Check:
The claim that the Indian government will ban paper currency and transition entirely to digital money from February 1 is completely baseless and lacks any credible foundation. Neither the government nor the Reserve Bank of India (RBI) has made any official announcement supporting this assertion.
Furthermore, the supposed evidence, the absence of specific advertisements in Kerala newspapers, has been misinterpreted and has no connection to any policy decision regarding currency.
During our research, we found that the advertisement was a fictional depiction of what a newspaper from the year 2050 might look like, not a statement that banknotes would be banned in favour of digital currency.
Such a massive change would require clear communication to the public, major infrastructure upgrades, and precise policy announcements, none of which have happened. The rumor has spread widely on social media without a shred of evidence, and its source is unreliable; the claim is therefore completely false.
We also found a supporting clip posted on Instagram by the news channel Asianet News.

The advertisement was for "The Summit of Future 2025", an event held at Jain Deemed-to-be University, Kochi, from 25 January to 1 February. After the advertisement went viral and drew criticism, the event's director apologized for the confusion, explaining that it was a fictional future news story published with a disclaimer, which some readers misread.
The X handle of Summit of Future 2025 also posted a video of the official statement from Dr Tom.

Conclusion:
The claim that the Indian government will discontinue paper currency by February 1 and move entirely to digital money is false. There is no government announcement, nor any evidence, to support it. We urge everyone to rely on official sources for accurate information and to stay alert to misinformation online.
- Claim: India to ban paper currency from February 1, switching to digital money.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
In the labyrinthine corridors of the digital age, where information zips across the globe with the ferocity of a tempest, the truth often finds itself ensnared in a web of deception. It is within this intricate tapestry of reality and falsehood that we find ourselves examining two distinct yet equally compelling cases of misinformation, each a testament to the pervasive challenges that beset our interconnected world.
Case 1: The Deceptive Video: Originating in Malaysia, Misattributed to Indian Railway Development
A misleading video claiming to showcase Indian railway construction has been debunked as footage from Malaysia's East Coast Rail Link (ECRL). Fact-checking efforts by India TV traced the video's origin to Malaysia, revealing deceptive captions in Tamil and Hindi. The video was initially posted on Twitter on January 9, 2024, announcing the commencement of track-laying for Malaysia's East Coast Railway. Further investigation reveals the ECRL as a joint venture between Malaysia and China, involving the laying of tracks along the east coast, challenging assertions of Indian railway development. The ECRL's track-laying initiative, initiated in December 2023, is part of China's Belt and Road initiative, covering 665 kilometers across states like Kelantan, Terengganu, Pahang, and Selangor, with a completion target set for 2025.
The video in question, a digital chameleon, had its origins not in the bustling landscapes of India but within the verdant bounds of Malaysia. Specifically, it was a scene captured from the East Coast Rail Link (ECRL) project, a monumental joint venture between Malaysia and China, unfurling across 665 kilometers of Malaysian terrain. This ambitious endeavor, part of the grand Belt and Road initiative, is a testament to the collaborative spirit that defines our era, with tracks stretching from Kelantan to Selangor, and a completion horizon set for the year 2025.
The unveiling of this grand project was graced by none other than Malaysia’s King Sultan Abdullah Sultan Ahmad Shah, in Pahang, underscoring the strategic alliance with China and the infrastructural significance of the ECRL. Yet, despite the clarity of its origins, the video found itself cloaked in a narrative of Indian development, a falsehood that spread like wildfire across the digital savannah.
Through the meticulous application of keyframe analysis and reverse image searches, the truth was laid bare. Reports from reputable sources such as the Associated Press and the Global Times, featuring the very same machinery, corroborated the video's true lineage. This revelation not only highlighted the ECRL's geopolitical import but also served as a clarion call for the critical role of fact-checking in an era where misinformation proliferates with reckless abandon.
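The keyframe-plus-reverse-image-search workflow mentioned above can be illustrated with a minimal sketch. The snippet below flags candidate keyframes by comparing intensity histograms of consecutive frames; frames whose histogram shifts sharply (likely scene cuts) are the ones worth submitting to a reverse image search. This is a simplified toy, not the tooling India TV used: real pipelines decode actual video (for example with OpenCV or ffmpeg), and the threshold value here is illustrative.

```python
import numpy as np

def frame_histogram(frame, bins=32):
    """Grayscale intensity histogram, normalized to sum to 1."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / hist.sum()

def detect_keyframes(frames, threshold=0.5):
    """Return indices of frames whose histogram differs sharply
    from the previous frame (likely scene cuts)."""
    keyframes = [0]  # always keep the first frame
    prev = frame_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame_histogram(frame)
        # L1 distance between consecutive normalized histograms
        if np.abs(cur - prev).sum() > threshold:
            keyframes.append(i)
        prev = cur
    return keyframes
```

Each index returned corresponds to a still image that can be saved and uploaded to a reverse image search engine, which is essentially how the footage was traced back to Malaysia's ECRL.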
Case 2: Kerala's Incident: Investigating Fake Narratives
Kerala has registered 53 cases related to the spreading of fake narratives on social media to incite communal sentiments following the blasts at a Christian religious gathering in October 2023. Chief Minister Pinarayi Vijayan said cases have been registered against online news portals, editors, and Malayalam television channels. The state police chief has directed officers to monitor social media, stop the spread of fake news, and take appropriate action.
In a different corner of the world, the serene backdrop of Kerala was shattered by an event that would ripple through the fabric of its society. The Kalamassery blast, a tragic occurrence at a Christian religious gathering, claimed the lives of eight individuals and left over fifty wounded. In the wake of this calamity, a man named Dominic Martin surrendered, claiming responsibility for the heinous act.
Yet, as the investigation unfolded, a different kind of violence emerged—one that was waged not with explosives but with words. A barrage of fake narratives began to circulate through social media, igniting communal tensions and distorting the narrative of the incident. The Kerala Chief Minister, Pinarayi Vijayan, informed the Assembly that 53 cases had been registered across the state, targeting individuals and entities that had fanned the flames of discord through their digital utterances.
The Kerala police, vigilant guardians of truth, embarked on a digital crusade to quell the spread of these communally instigative messages. With a particular concentration of cases in Malappuram district, the authorities worked tirelessly to dismantle the network of fake profiles that propagated religious hatred. Social media platforms were directed to assist in this endeavor, revealing the IP addresses of the culprits and enabling the cyber cell divisions to take decisive action.
In the aftermath of the blasts, the Chief Minister and the state police chief ordered special instructions to monitor social media platforms for content that could spark communal uproar. Cyber patrolling became the order of the day, as a 20-member probe team was constituted to deeply investigate the incident.
Conclusion
These two cases, disparate in their nature and geography, converge on a singular point: the fragility of truth in the digital age. They highlight the imperative for vigilance and the pursuit of accuracy in a world where misinformation can spread like wildfire. As we navigate this intricate cyberscape, it is imperative to be mindful of the power of fact-checking and the importance of media literacy, for they are the light that guides us through the fog of falsehoods to the shores of veracity.
These narratives are not merely stories of deception thwarted; they are a call to action, a reminder of our collective responsibility to safeguard the integrity of our shared reality. Let us, therefore, remain steadfast in our quest for the truth, for it is only through such diligence that we can hope to preserve the sanctity of our discourse and the cohesion of our societies.
References:
- https://www.indiatvnews.com/fact-check/fact-check-misleading-video-claims-malaysian-rail-project-indian-truth-ecrl-india-railway-development-pm-modi-2024-01-29-914282
- https://sahilonline.org/kalamasserry-blast-53-cases-registered-across-kerala-for-spreading-fake-news

Introduction
In the digital era, where technology is growing rapidly, Artificial Intelligence (AI) is making its way into every corner of the world. Technology and innovation continue to move in step, and the latest innovation in the limelight is a groundbreaking initiative known as "Project GR00T", announced by the AI chip leader Nvidia. At the core of this project is the fusion of AI and robotics: humanoid robots that can understand natural language and learn from the physical environment by observing human actions and skills. Project GR00T aims to assist humans in diverse sectors such as healthcare.
The humanoid robots are based on NVIDIA's Thor system-on-chip (SoC). Thor powers the robots' intelligence; the chip is designed to handle complex tasks and ensure safe, natural interaction between humans and robots. However, big questions arise about the ethical considerations of privacy, autonomy, and the possible replacement of human workers.
Brief Analysis
Nvidia has announced Project GR00T, or Generalist Robot 00 Technology, which aims to create AI-powered humanoid robots with human-like understanding and movement. The project is part of Nvidia's efforts to drive breakthroughs in robotics and embodied AI, which can interact with and learn from a physical environment. The robots built on this platform are designed to understand natural language and emulate movements by observing human actions, such as coordination, dexterity, and other skills.
The model has been trained on NVIDIA GPU-accelerated simulation, enabling the robots to learn from human demonstrations with imitation learning and from the robotics platform NVIDIA Isaac Lab for reinforcement learning. This multimodal AI system acts as the mind for humanoid robots, allowing them to learn new skills and interact with the real world. Leading names in robotics, such as Figure, Boston Dynamics, Apptronik, Agility Robotics, Sanctuary AI, and Unitree, are reported to have collaborated with Nvidia to leverage GR00T.
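The imitation learning described above can be sketched, in heavily simplified form, as supervised learning on state-action pairs collected from expert demonstrations (often called behavior cloning). The toy example below is not NVIDIA's pipeline: the linear "expert policy" and the least-squares fit are illustrative stand-ins for the large-scale simulation and neural networks that GR00T actually uses.

```python
import numpy as np

# Toy demonstrations: observed states paired with expert actions.
# Here the hypothetical expert follows action = 2*state + 1.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(200, 1))
actions = 2.0 * states + 1.0

# Behavior cloning: fit a policy to the demonstration data by
# supervised learning (here, ordinary least squares with a bias term).
X = np.hstack([states, np.ones((len(states), 1))])
weights, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(state):
    """Predict an action for a new state using the cloned policy."""
    return float(weights[0, 0] * state + weights[1, 0])
```

In a real system the policy would be a deep network trained on high-dimensional observations, and reinforcement learning in simulation (as with Isaac Lab) would then refine skills beyond what the demonstrations cover.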
Nvidia has also updated Isaac with Isaac Manipulator and Isaac Perceptor, which add multi-camera 3D vision. The company also unveiled a new computer, Jetson Thor, to aid humanoid robots based on NVIDIA's SoC, which is designed to handle complex tasks and ensure a safe and natural interaction between humans and robots.
Although humanoid robots taking over hazardous and repetitive tasks raises concerns about job loss, many argue that such robots will aid humans and make their lives more comfortable rather than replace them.
Policy Recommendations
The Nvidia project marks a significant development in AI robotics, presenting immense potential alongside ethical challenges that are critical to the smooth assimilation of AI-driven technology into society. To ensure that assimilation, a comprehensive policy framework must be put in place. This includes:
- Human First Policy - Emphasis should be on better augmentation rather than replacement. The authorities must focus on better research and development (R&D) of applications that aid in modifying human capabilities, enhancing working conditions, and playing a role in societal growth.
- Proper Ethical Guidelines - Guidelines stressing human safety, autonomy and privacy should be established. These norms must include consent for data collection, fair use of AI in decision making and proper protocols for data security.
- Deployment of Inclusive Technology - Access to AI Driven Robotics tech should be made available to diverse sectors of society. It is imperative to address potential algorithm bias and design flaws to avoid discrimination and promote inclusivity.
- Proper Regulatory Frameworks - It is crucial to establish regulatory frameworks to govern the smooth deployment and operation of AI-driven tech. The framework must include certification for safety and standards, frequent audits and liability protocols to address accidents.
- Training Initiatives - Educational programs should be introduced to train the workforce for integrating AI driven robotics and their proper handling. Upskilling of the workforce should be the top priority of corporations to ensure effective integration of AI Robotics.
- Collaborative Research Initiatives - AI and emerging technologies have a profound impact on the trajectory of human development. It is imperative to foster collaboration among governments, industry and academia to drive innovation in AI robotics responsibly and undertake collaborative initiatives to mitigate and address technical, societal, legal and ethical issues posed by AI Robots.
Conclusion
On the whole, Project GR00T is a quantum leap in the advancement of robotic technology and paves the way for a future where robots integrate seamlessly into various aspects of human life.
References
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-nvidias-project-gr00t-impact-robotics-9225089/
- https://medium.com/paper-explanation/understanding-nvidias-project-groot-762d4246b76d
- https://www.techradar.com/pro/nvidias-project-groot-brings-the-human-robot-future-a-significant-step-closer
- https://www.barrons.com/livecoverage/nvidia-gtc-ai-conference/card/nvidia-announces-ai-model-for-humanoid-robot-development-BwT9fewMyD6XbuBrEDSp

Introduction
In the digital era of rapid technological innovation, artificial intelligence has added a new dimension to how people communicate and how they create and consume content. However, like any powerful tool, AI can be misused, with terrible consequences. A recent dark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. Having lured millions of reais from victims, this crime brings into sharp focus the concern that AI-generated content is being weaponized by criminals.
Scam in Motion
Brazil's federal police have stated that the scheme has been in circulation since 2024, using AI-generated video and images to make the ads appear genuine. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals scaled up the operation by accumulating small losses across a large number of victims, a tactic investigators dubbed "statistical immunity". Because each victim lost only a few dollars, most never filed a complaint, allowing the fraudsters to keep operating unchecked. Over time, authorities estimated that the group had gathered over 20 million reais ($3.9 million) through this elaborate con.
The scam came to light when a victim reported that an Instagram advertisement featuring a deepfake video of Gisele Bündchen was false. The well-produced video showed the model apparently recommending a skincare company. Further investigation uncovered a network of deceptive social media pages, payment gateways, and money-laundering channels spread across five Brazilian states.
The Role of AI and Deepfakes in Modern Fraud
This is one of the first large-scale cases in Brazil in which AI-generated deepfakes were used to perpetrate financial fraud. Deepfake technology, driven by machine learning algorithms, can realistically mimic human appearance and speech, and it has become increasingly accessible and sophisticated. Where a level of expertise and computing resources was once required, an online tool or app now suffices.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar and trusted face, a celebrity known for integrity and success. The human brain is wired to trust certain visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing terrain of AI-enabled misinformation. From financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms can be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment could go a long way toward shaping platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people." Meta says it uses advanced detection mechanisms, trained review teams, and user tools for reporting violations. Yet the persistence of such scams shows that enforcement mechanisms still lag behind the pace and scale of AI-based deception.
Why These Scams Succeed
There are many reasons for the success of these AI-powered scams.
- Trust Due to Familiarity: Human beings tend to believe anything put forth by a known individual.
- Micro-Fraud: Keeping each victim's loss small discourages complaints, so few of these crimes are ever reported.
- Speed of Content Creation: Using AI tools, criminals generate new ads faster than platforms can detect and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networking platforms, compounding the problem.
- Absence of Public Awareness: Most users still cannot discern manipulated media, especially when high-quality deepfakes come into play.
Wider Implications on Cybersecurity and Society
The Brazilian case is but a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities used in corporate fraud, the possibilities for abuse are endless.
Moreover, as cybercriminals adopt generative AI, law enforcement faces new obstacles in attribution, evidence validation, and digital forensics. The need to distinguish authentic media from manipulated media has driven the development of forensic AI models, which attackers in turn work to defeat, setting off a rising technological arms race between the two sides.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team encouraged members of the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that appear "too good to be true" and double-check a URL's authenticity before sharing any personal information.
At the individual level, a few simple practices go a long way toward reducing risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
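The "verify the origin" advice in the list above can be partly automated. The sketch below checks whether a link's hostname belongs to a trusted allowlist of official domains, catching a common trick in which a scam URL merely embeds a famous brand name in a lookalike host. The domain names in the allowlist are hypothetical examples, not real official domains.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted domains for a given brand or celebrity.
OFFICIAL_DOMAINS = {"instagram.com", "giseleofficial.example"}

def is_official_link(url):
    """Return True only if the URL's host is a trusted domain
    or a subdomain of one (e.g. www.instagram.com)."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

Note how a deceptive host such as `instagram.com.promo-deals.example` fails the check even though it contains the brand name, because suffix matching is anchored on the registered domain rather than on a substring.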
Collaboration will be the way ahead for governments and technology companies. Investing in AI-based detection systems, cooperating on international law enforcement, and building capacity for digital literacy programs will enable us to stem this rising tide of synthetic media scams.
Conclusion
The Gisele Bündchen deepfake case in Brazil serves as a clarion call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In the new digital frontier that society is now embracing, the line between authenticity and manipulation grows thinner with each passing day.
Keeping the public safe in this environment will certainly require strong cybersecurity measures, but it will demand equal measures of vigilance, awareness, and ethical responsibility. Deepfakes are not only a technological problem but a societal one, calling for global cooperation, media literacy, and accountability at every level of the digital ecosystem.