# Fact Check: Misleading Kerala Newspaper Advertisement Claiming a Ban on Paper Currency
Executive Summary:
Recently, our team came across a widely circulated post on X (formerly Twitter) claiming that the Indian government would abolish paper currency from February 1 and transition entirely to digital money. The post, designed to resemble an official government notice, cited an advertisement published in Kerala newspapers as supposed evidence, an assertion that lacked any substantive basis.

Claim:
The Indian government will ban paper currency from February 1, 2025, and adopt digital money as the sole legal tender to fight black money.

Fact Check:
The claim that the Indian government will ban paper currency and transition entirely to digital money from February 1 is completely baseless and lacks any credible foundation. Neither the government nor the Reserve Bank of India (RBI) has made any official announcement supporting this assertion.
Furthermore, the supposed evidence, an advertisement that appeared in Kerala newspapers, has been misinterpreted and has no connection to any policy decision regarding currency.
During our research, we found that the advertisement was a speculative depiction of what a newspaper front page might look like in the year 2050; it was not an announcement that banknotes would be banned or replaced with digital currency.
Such a sweeping change would require clear communication to the public, major infrastructure upgrades, and formal policy announcements, none of which have taken place. The rumour has spread widely on social media without a shred of supporting evidence, and its source is unreliable; the claim is therefore completely false.
We also found a clip posted by the news channel Asianet News (asianetnews) on Instagram that supports our findings.

We found that the advertisement promoted "The Summit of Future 2025", an event to be held at Jain Deemed-to-be University, Kochi, from 25 January to 1 February. After the advertisement went viral and people began criticizing it, the director of "The Summit of Future 2025" apologized for the confusion. According to him, it was a fictional future news story published with a disclaimer, which some readers misread.
The official X handle of the Summit of Future 2025 also posted a video of the official statement from Dr Tom.

Conclusion:
The claim that the Indian government will discontinue paper currency by February 1 and switch fully to digital money is entirely false. There is no government announcement or evidence to support it. We urge everyone to rely on official sources for accurate information and to stay vigilant against misinformation online.
- Claim: India to ban paper currency from February 1, switching to digital money.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
As technology advances, global connectivity becomes increasingly vital. Meta's "Project Waterworth", once completed, will reach five major continents and span over 50,000 km, making it the world's longest subsea cable project using the highest-capacity technology available. This project is expected to bring industry-leading connectivity to the U.S., India, Brazil, South Africa, and other key regions. It will enable greater economic cooperation, facilitate digital inclusion, and open opportunities for technological development in these regions.
In India, a project such as this will help accelerate the country's digital progress and support its ambitious plans for the digital economy, complementing the significant growth and investment in digital infrastructure already underway. Subsea cable projects such as Project Waterworth are the backbone of global digital infrastructure, carrying more than 95% of intercontinental traffic across the world's oceans and seamlessly enabling digital communication, video experiences, online transactions, and more.
Enhancing India's Digital Infrastructure
A subsea cable, or submarine cable, enhances global internet speed and reliability by carrying massive data volumes across ocean floors, connecting countries and continents. Compared to satellites, these cables offer greater stability and minimal disruptions.
Project Waterworth aims to build the world's longest 24-fibre-pair cable, improving resilience and deployment speed. The project is expected to reduce the risk of damage in high-risk areas through innovative routing that maximises deep-water placement (at depths of up to 7,000 metres) and enhanced burial techniques. It will also play a crucial role in the advancement of AI and emerging technologies, ensuring widespread access to their benefits in India.
CyberPeace Takeaways
The project has manifold implications, ranging from economic and policy impacts to India-US relations and the data privacy and security concerns that accompany increased data flows. The key takeaways are as follows:
- Economic and policy implications: The project can drive economic growth through job creation and investment opportunities, and can help position India as a global digital hub. Regulatory frameworks that can support and secure a large-scale infrastructure project such as this will need to be created.
- India-US relations: The project aligns with, and strengthens, the commitments made in the US-India joint statement on undersea technology collaboration. It can further serve as a model for future collaborations between the two nations' tech entities.
- Concerns for data privacy and security: A robust cybersecurity mechanism that can counter the risks associated with increased data flows is required. The concerned authorities need to be vigilant in monitoring and ensuring compliance with applicable data protection standards, such as those set under the IT Act, 2000 and the DPDP Act, 2023 and its rules (once finalised).
Conclusion
Project Waterworth offers India a transformative opportunity to bolster its digital landscape. The enhancement of internet speed, stability, and capacity will strengthen the country's digital infrastructure and support economic growth, and the project is also expected to accelerate AI-driven advancements in India. Moreover, this technological collaboration between India and the US will strengthen bilateral relations and set the stage for India to negotiate future global partnerships. A well-defined regulatory framework and strong cybersecurity measures will be crucial to proactively address data privacy, security, and governance challenges and ensure safe and equitable digital progress. As India continues its rapid digital expansion, informed discussion, policy planning, and strategic investment will be key to maximising Project Waterworth's impact and propelling India toward a more connected, innovative, and resilient digital future.

Introduction
As technology develops, new threats emerge, and AI voice cloning scams are one such issue that has recently come to light. Scammers are keeping pace with AI, and their methods for deceiving and defrauding people have changed accordingly. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate someone for illegal purposes. We will look at the dangers and risks associated with AI voice cloning fraud, how scammers operate, and how one can protect oneself from such scams.
What is a Deepfake?
A "deepfake" is fake or altered audio, video, or film produced with artificial intelligence (AI) that can pass for the real thing; the name combines "deep learning" and "fake". Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never actually said or did; deep voice impersonations of the American President are among the best-known examples. Deep voice impersonation technology can be used maliciously, such as in deep voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice; the aim is to earn the victim's trust and, by conveying a false sense of familiarity and urgency, raise the likelihood that they will fall for the hoax. Deepfake voice scams are increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or organisations, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals, such as transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. In addition, false audio evidence can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and manipulating victims, and organisations and the general public alike must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the "deepfake voice scam", has surfaced: the con artist assumes another person's identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and steer clear of them:
- Steer clear of telemarketing calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: Pay special attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflections in their speech? If something doesn't seem right, it could be deepfake voice fraud.
- Verify the caller's identity: Verifying the caller's identity is crucial to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then check independently that they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign that they are a scammer.
- Report any suspicious activity: Inform the appropriate authorities if you think you have fallen victim to a deepfake voice fraud. These may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used for harm, such as deepfake voice fraud and impersonation to fabricate stories. As the technology develops and deepfake schemes become harder to detect and prevent, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research and the development of effective techniques to identify and control the associated risks are also necessary. AI must be deployed responsibly and ethically so that voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
The Competition Commission of India (CCI), on 18th November 2024, imposed a ₹213 crore penalty on Meta for abusing its dominant position in internet-based messaging through WhatsApp and in online display advertising. The order addresses this abuse of dominance and relates to WhatsApp's 2021 privacy policy. The CCI considers Meta a dominant player in internet-based messaging through WhatsApp as well as in online display advertising. WhatsApp's 2021 privacy policy update undermined users' ability to opt out of having their data shared with the group's social media platform, Facebook. The CCI directed WhatsApp not to share user data collected on its platform with other Meta companies or products for advertising purposes for five years.
CCI Contentions
The regulator contended that, for purposes other than advertising, WhatsApp's policy should include a detailed explanation of the user data shared with other Meta group companies or products, specifying the purpose of such sharing. It also stated that sharing user data collected on WhatsApp with other Meta companies or products for purposes other than providing WhatsApp services should not be a condition for users to access WhatsApp services in India. The order is significant as it upholds user consent as a key principle in the functioning of social media giants, similar to measures taken in some other markets.
Meta’s Stance
Meta, WhatsApp's parent company, has expressed its disagreement with the CCI's decision to impose the ₹213 crore penalty over users' privacy concerns. Meta clarified that the 2021 update did not change the privacy of people's personal messages and was offered as a choice to users at the time. It also stated that no one would have their account deleted or lose WhatsApp functionality because of the update.
Meta further clarified that the update was about introducing optional business features on WhatsApp and providing greater transparency about how it collects data. The company stated that WhatsApp has been incredibly valuable to people and businesses, enabling organisations and government institutions to deliver citizen services through COVID and beyond and supporting small businesses, all of which furthers the Indian economy. Meta plans to find a path forward that allows it to continue providing the experiences that "people and businesses have come to expect" from it. The CCI issued cease-and-desist directions and directed Meta and WhatsApp to implement certain behavioural remedies within a defined timeline.
The competition watchdog noted that WhatsApp's 2021 policy update made it mandatory for users to accept the new terms, including data sharing with Meta, and removed the earlier option to opt out, which it categorised as an "unfair condition" under the Competition Act. It further noted that WhatsApp's sharing of users' business transaction information with Meta gave the group entities an unfair advantage over competing platforms.
CyberPeace Outlook
The 2021 policy update by WhatsApp mandated data sharing with Meta's other group companies, removing the opt-out option and compelling users to accept the terms to continue using the platform. The CCI noted that this policy undermined user autonomy and amounted to an abuse of Meta's dominant market position, violating Section 4(2)(a)(i) of the Competition Act.
The CCI’s ruling requires WhatsApp to offer all users in India, including those who had accepted the 2021 update, the ability to manage their data-sharing preferences through a clear and prominent opt-out option within the app. This decision underscores the importance of user choice, informed consent, and transparency in digital data policies.
By addressing the coercive nature of the policy, the CCI ruling establishes a significant legal precedent for safeguarding user privacy and promoting fair competition. It highlights the growing acknowledgement of privacy as a fundamental right and reinforces the accountability of tech giants to respect user autonomy and market fairness. The directive mandates that data sharing within the Meta ecosystem must be based on user consent, with the option to decline such sharing without losing access to essential services.