# Fact Check: Misleading Kerala Newspaper Claim That Paper Currency Will Be Banned
Executive Summary:
Recently, our team came across a widely circulated post on X (formerly Twitter) claiming that the Indian government would abolish paper currency from February 1 and transition entirely to digital money. The post, designed to resemble an official government notice, cited the absence of advertisements in Kerala newspapers as supposed evidence, an assertion without any substantive basis.

Claim:
The Indian government will ban paper currency from February 1, 2025, and adopt digital money as the sole legal tender to fight black money.

Fact Check:
The claim that the Indian government will ban paper currency and transition entirely to digital money from February 1 is completely baseless and lacks any credible foundation. Neither the government nor the Reserve Bank of India (RBI) has made any official announcement supporting this assertion.
Furthermore, the supposed evidence (the absence of specific advertisements in Kerala newspapers) has been misinterpreted and has no connection to any policy decision regarding currency.
During our research, we found that the page in question was a fictional mock-up imagining what a newspaper from the year 2050 might look like, not an announcement that banknotes would be banned and replaced by digital currency.
Such a massive change would necessitate clear public communication, major infrastructure upgrades, and precise policy announcements, none of which have happened. The rumor has spread widely on social media without a shred of evidence, originates from an unreliable source, and is therefore completely false.
We also found a supporting clip from a news channel, posted by asianetnews on Instagram.

We found that the event will be held at Jain Deemed-to-be University, Kochi, from 25 January to 1 February. After the advertisement went viral and drew criticism, the director of "The Summit of Future 2025" apologised for the confusion. According to him, it was a fictional future news story carrying a disclaimer, which some readers misread.
The X handle of Summit of Future 2025 also posted a video of the official statement from Dr Tom.

Conclusion:
The claim that the Indian government will discontinue paper currency by February 1 and switch entirely to digital money is false. There is no government announcement or evidence to support it. We urge everyone to rely on official sources for accurate information and to stay alert to misinformation online.
- Claim: India to ban paper currency from February 1, switching to digital money.
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
The enactment of the Online Gaming (Promotion and Regulation) Act, 2025 has significantly reshaped the structure of the Indian gaming industry. The ban on Real-Money Games (RMG), including fantasy sports, rummy, and poker, aims to curb addiction, illegal betting, and, most importantly, financial exploitation.
The decision, however, has sent ripples through the industry, forcing players, companies, and their investors to re-strategise. The approach also stands in sharp contrast to how other nations, including the UK and Singapore, regulate online gaming, raising doubts about the ban and calling into question whether it is the right move.
Industry in Flux: The Resulting Fallout
- MPL’s Layoffs: Mobile Premier League has so far laid off nearly 60 per cent of its workforce, with the company’s CEO openly admitting that it can no longer make money in India.
- Dream11’s Lost Revenue: Once the market leader, the company has seen 95 per cent of its revenue wiped out, raising alarms among investors.
- A23’s Legal Move: The company has moved the Karnataka High Court, arguing that the ban unfairly criminalises skill-based games that the courts had earlier distinguished from gambling.
- The New Face of eSports: With RMGs outlawed, the focus has shifted to esports, including casual mobile gaming and console/PC gaming, creating new opportunities while leaving a revenue void for RMG companies.
Regulations Around the World and in India
India has adopted a prohibition model, whereas many other countries have developed regulatory frameworks that allow sectoral expansion under monitoring.
United Kingdom:
- Online gaming is regulated by the UK Gambling Commission.
- RMGs are legal but subject to strict licensing, age verification and advertising rules.
- Operators must offer self-exclusion schemes, along with tools to prevent gambling addiction.
United States
- Regulation is state-driven and varies widely across jurisdictions.
- States including Nevada and New Jersey allow casinos both online and offline, while other states prohibit them completely.
- Skill-based fantasy games are generally legal, though their regulation is fragmented across states.
China
- Strong focus on controlling and restricting gaming addiction among minors.
- Time limits are enforced for players under 18 (usually as little as 3 hours per week).
- Gambling has been deemed illegal; however, esports and casual gaming operate within China’s regulated gaming ecosystem, subject to certain compliance.
Singapore
- The Remote Gambling Act regulates chance-based games, while esports and skill-based games fall outside its scope and are regulated separately.
- Online gambling isn't completely banned but operates under restrictions.
- Only licensed operators under strict controls are permitted by the government, a policy designed to prevent black-market alternatives.
Australia
- Gambling is regulated under the Interactive Gambling Act.
- Sports betting and certain licensed operators are allowed, while most of the other online gambling services are prohibited.
- Advertising restrictions and real-time interventions reflect an emphasis on harm minimisation.
India’s Approach in Comparison
The Online Gaming (Promotion and Regulation) Act, 2025 introduces a prohibition model for real money games. This differs from jurisdictions such as the UK and US, which have implemented regulatory frameworks to oversee the sector. In India, earlier legal interpretations distinguished between games of skill and games of chance. The new legislation provides a single treatment for both categories, which marks a departure from previous judicial pronouncements.
Conclusion
At present, India’s gaming sector is navigating layoffs, court cases, and a pivot towards esports following the RMG ban. While the ban's intent is to protect citizens, the industry argues that regulation, rather than prohibition, offers a more sustainable answer to the same concerns. Globally, most nations prefer controlled, licensed models that ensure consumer safety while preserving both jobs and revenues; India’s step diverges from that model. The lack of safeguards and engagement in the real-money gaming industry led to a prohibition model, underscoring that in sensitive digital sectors, early regulatory alignment is essential.
References
- India online gaming ban: Global rules & regulation bill 2025
- Level up or game over? Decoding India’s gaming industry post RMG ban
- MPL lays off 350 employees after GST rules, RMG crackdown
- India’s Dream11 hit by online gaming ban, revenue slumps
- Online gaming ban: A23 moves Karnataka HC against government decision
- UK Gambling Commission – Licensing and Regulations
- Fantasy Sports & Gambling Law – State by State
- China’s gaming restrictions for minors
- Singapore Remote Gambling Act – Ministry of Home Affairs
- Australian Communications and Media Authority – Interactive Gambling Act

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. This promising technological advancement has the potential either to enrich our society or to destroy it entirely. The latest addition to this complex picture is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal produced only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots should not advocate for or take part in political activities that favour one political party over another; impartiality and equity are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is constantly working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing many tasks and can assist both individuals and businesses, and regulatory mechanisms have been established to promote their ethical and reasonable use. However, because these technologies are easily accessible, cybercriminals also leverage AI tools for malicious activities and various cyber frauds, and this misuse has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of producing realistic-looking images and videos that are in fact generated by machine algorithms. Because the technology is easily accessible, fraudsters misuse it to commit cybercrimes, deceiving and scamming people with fake images or videos that appear genuine. Using deepfake technology, cybercriminals manipulate audio and video content that looks convincingly real but is fabricated.
Voice cloning
Audio can be deepfaked too: a cloned voice closely resembles a real one but is generated synthetically. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. The scammer, using AI deepfake technology, impersonated the colleague’s face and asked for financial help of Rs 40,000.
Uttarakhand Police issues warning about the rising trend of AI-based scams
Recently, Uttarakhand Police’s Special Task Force (STF) issued a warning about the spread of AI-based scams, such as deepfake and voice-cloning scams targeting innocent people. Police expressed concern that several incidents have been reported in which innocent people are lured by cybercriminals. Exploiting advanced technologies, cybercriminals manipulate victims into believing they are talking to close friends or relatives, when in reality they are interacting with fake voice clones or deepfake video calls. The criminals then ask for immediate financial help, ultimately causing financial losses for the victims of such scams.
Tamil Nadu Police Issues advisory on deepfake scams
Cybercriminals misuse deepfake technologies to deceive people and target them for financial gain. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams. Fraudsters are creating highly convincing images, videos, and voice clones to defraud innocent people and make them victims of financial fraud. The advisory recommends limiting the personal data you share online and adjusting your privacy settings, and urges the public to promptly report any suspicious activity or cybercrime to 1930 or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to compromised video quality: deepfake videos often have poor quality or unusual blurring, which calls their genuineness into question. Deepfake videos also often loop or freeze unnaturally, indicating that the content may be fabricated.
- Whenever you receive a request for immediate financial help, act cautiously and verify the situation by contacting the person directly on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, posing sudden emergencies and demanding financial support immediately so the victim has no time to think and is deceived into making a quick decision.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Pay attention to anything suspicious, and avoid sharing voice notes with unknown users, because scammers might use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, one powerful resource available is the National Cyber Crime Reporting Portal (www.cybercrime.gov.in), along with the 1930 toll-free helpline, where you can report cyber fraud, including any financial crimes.
Conclusion
Cybercriminals leverage AI-powered technologies to commit crimes such as deepfake and voice-clone scams, in which innocent people are lured by scammers. Hence, there is a need for awareness and caution among the public. We should remain vigilant about the growing incidence of AI-based cyber scams and follow best practices to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/