# Fact Check: Misleading Kerala Newspaper Claim of a Ban on Paper Currency
Executive Summary:
Recently, our team came across a widely circulated post on X (formerly Twitter) claiming that the Indian government would abolish paper currency from February 1 and transition entirely to digital money. The post, designed to resemble an official government notice, cited the absence of advertisements in Kerala newspapers as supposed evidence, an assertion that lacked any substantive basis.

Claim:
The Indian government will ban paper currency from February 1, 2025, and adopt digital money as the sole legal tender to fight black money.

Fact Check:
The claim that the Indian government will ban paper currency and transition entirely to digital money from February 1 is completely baseless and lacks any credible foundation. Neither the government nor the Reserve Bank of India (RBI) has made any official announcement supporting this assertion.
Furthermore, the supposed evidence, the absence of specific advertisements in Kerala newspapers, has been misinterpreted and has no connection to any policy decision regarding currency.
During our research, we found that the page in question was a fictional mock-up of what a newspaper might look like in the year 2050; it was not a statement that banknotes would be banned or replaced by digital currency.
Such a massive change would necessitate clear communication to the public, major infrastructure improvements, and precise policy announcements, none of which have happened. The rumor has spread widely on social media without a shred of evidence, and its source is unreliable; the claim is therefore completely false.
We also found a clip posted on Instagram by the news channel Asianet News (asianetnews) that supports our findings.

We found that the event will be held at Jain (Deemed-to-be) University, Kochi, from 25th January to 1st February. After the advertisement went viral and people began criticizing it, the director of "The Summit of Future 2025" apologized for the confusion. According to him, it was a fictional future news story, published with a disclaimer, which was misread by some readers.
The X handle of Summit of Future 2025 also posted a video of the official statement from Dr Tom.

Conclusion:
The claim that the Indian government will discontinue paper currency by February 1 and switch entirely to digital money is false. There is no government announcement or any evidence to support it. We urge everyone to rely on official sources for accurate information and to stay alert to misinformation online.
- Claim: India to ban paper currency from February 1, switching to digital money.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
The rapid advancement of artificial intelligence (AI) technology has sparked intense debate and concern about its potential impact on humanity. Sam Altman, CEO of the AI research laboratory OpenAI and widely known as the father of the AI chatbot ChatGPT, holds a complex position, recognising both the existential risks AI poses and its potential benefits. On a world tour to raise awareness about AI risks, Altman has advocated global cooperation to establish responsible guidelines for AI development. Developing sophisticated AI systems raises many ethical questions, including whether they will ultimately save or destroy humanity.
Addressing Concerns
Altman engages with various stakeholders, including protesters who voice concerns about the race toward artificial general intelligence (AGI). Critics argue that focusing on safety rather than pushing AGI development would be a more responsible approach. Altman acknowledges the importance of safety progress but believes capability progress is necessary to ensure safety. He advocates for a global regulatory framework similar to the International Atomic Energy Agency, which would coordinate research efforts, establish safety standards, monitor computing power dedicated to AI training, and possibly restrict specific approaches.
Risks of AI Systems
While AI holds tremendous promise, it also presents risks that must be carefully considered. One of the major concerns is the development of artificial general intelligence (AGI) without sufficient safety precautions. AGI systems with unchecked capabilities could potentially pose existential risks to humanity if they surpass human intelligence and become difficult to control. These risks include the concentration of power, misuse of technology, and potential for unintended consequences.
There are also fears surrounding AI systems’ impact on employment. As machines become more intelligent and capable of performing complex tasks, there is a risk that many jobs will become obsolete. This could lead to widespread unemployment and economic instability if steps are not taken to prepare for this shift in the labour market.
While these risks are certainly cause for concern, it is important to remember that AI systems also have tremendous potential to do good in the world. By carefully designing these technologies with ethics and human values in mind, we can mitigate many of the risks while still reaping the benefits of this exciting new frontier in technology.

Open AI Systems and Chatbots
Open AI systems like ChatGPT and chatbots have gained popularity due to their ability to engage in natural language conversations. However, they also come with risks. The reliance on large-scale training data can lead to biases, misinformation, and unethical use of AI. Ensuring the safety and responsible development of open AI systems mitigates potential harm and maintains public trust.
The Need for Global Cooperation
Sam Altman and other tech leaders emphasise the need for global cooperation to address the risks associated with AI development. They advocate for establishing a global regulatory framework for superintelligence. Superintelligence refers to AGI operating at an exceptionally advanced level, capable of solving complex problems that have eluded human comprehension. Such a framework would coordinate research efforts, enforce safety standards, monitor computing power, and potentially restrict specific approaches. International collaboration is essential to ensure responsible and beneficial AI development while minimising the risks of misuse or unintended consequences.
Can AI Systems Make the World a Better Place: Benefits of AI Systems
AI systems hold many benefits that can greatly improve human life. One of the most significant advantages of AI is its ability to process large amounts of data at a rapid pace. In industries such as healthcare, this has allowed for faster diagnoses and more effective treatments. Another benefit of AI systems is their capacity to learn and adapt over time. This allows for more personalised experiences in areas such as customer service, where AI-powered chatbots can provide tailored solutions based on an individual’s needs. Additionally, AI can potentially increase efficiency in various industries, from manufacturing to transportation. By automating repetitive tasks, human workers can focus on higher-level tasks that require creativity and problem-solving skills. Overall, the benefits of AI systems are numerous and promising for improving human life in various ways.
We must remember the impact of AI on education. It has already started to show its potential by providing personalised learning experiences for students at all levels. With the help of AI-driven systems like intelligent tutoring systems (ITS), adaptive learning technologies (ALT), and educational chatbots, students can learn at their own pace without feeling overwhelmed or left behind.
While there are certain risks associated with the development of AI systems, there are also numerous opportunities for them to make our world a better place. By harnessing the power of these technologies for good, we can create a brighter future for ourselves and generations to come.

Conclusion
The AI revolution presents both extraordinary opportunities and significant challenges for humanity. The benefits of AI, when developed responsibly, have the potential to uplift societies, improve quality of life, and address long-standing global issues. However, the risks associated with AGI demand careful attention and international cooperation. Governments, researchers, and industry leaders must work together to establish guidelines, safety measures, and ethical standards to navigate the path toward AI systems that serve humanity’s best interests and safeguard against potential risks. By taking a balanced approach, we can strive for a future where AI systems save humanity rather than destroy it.

Introduction:
Welcome to the second edition of our blog series on digital forensics. In our previous blog, we discussed what digital forensics is, the process and tools involved, and the challenges faced in the field, and we looked at what the future of digital forensics may hold. Today, we will explore the differences between three similar-sounding terms that vary significantly in function: copying, cloning, and imaging.
In digital forensics, the preservation and analysis of electronic evidence are central to investigations and legal proceedings. Replicating data and devices without compromising the integrity of the original evidence is one of the fundamental tasks in this domain.
Three primary techniques -- copying, cloning, and imaging -- are used for this purpose. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will talk about the importance of each technique, their applications and why imaging is considered the best for forensic investigations.
Copying
Copying means duplicating data or files from one location to another, typically with standard copy commands. When dealing with evidence, however, plain copying is rarely sufficient, because a standard copy can alter metadata and miss hidden or deleted data.
The characteristics of copying include:
- Speed: Copying is simpler and faster compared to cloning or imaging.
- Risk: Metadata might be altered, and not all the data might be captured.
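The metadata risk is easy to demonstrate. The sketch below (Python, with illustrative file names) contrasts a plain copy, which stamps the duplicate with the current time, against `shutil.copy2`, which preserves timestamps but still cannot capture deleted files, slack space, or filesystem structures:

```python
import os
import shutil
import tempfile

# Create a sample "evidence" file with a known, old modification time.
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "evidence.txt")
with open(source, "w") as f:
    f.write("original evidence data")
old_time = 946684800  # 2000-01-01 UTC, simulating an old file
os.utime(source, (old_time, old_time))

# A plain copy duplicates the content, but the copy is stamped with
# the current time: the original modification time is lost.
plain_copy = os.path.join(workdir, "plain_copy.txt")
shutil.copy(source, plain_copy)

# copy2 preserves timestamps, yet even it cannot capture deleted
# data, slack space, or filesystem-level structures.
meta_copy = os.path.join(workdir, "meta_copy.txt")
shutil.copy2(source, meta_copy)

print(os.path.getmtime(plain_copy) == old_time)  # False: metadata altered
print(os.path.getmtime(meta_copy) == old_time)   # True: timestamp preserved
```

This is why examiners avoid ordinary copy commands when the evidential value of timestamps matters.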
Cloning
Cloning is the process of transferring the entire contents of a hard drive or other storage device onto another storage device. It captures the active data, the unallocated space, and any hidden partitions, preserving the whole structure of the original device. Cloning is generally performed at the sector level, and a clone can serve as the working copy of a device.
Characteristics of cloning:
- Bit-for-bit replication: Cloning preserves the exact content and the whole structure of the original device.
- Use cases: Cloning is used when the original device must be kept intact for further examination or legal proceedings.
- Time: Cloning usually takes longer than simple copying because it replicates the entire device, though the duration depends on factors such as the size of the storage device, the speed of the devices involved, and the cloning method.
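At its core, sector-level cloning is a block-by-block transfer, much like what tools such as `dd` perform. Below is a minimal sketch in Python, using an ordinary file as a stand-in for a raw device path (a real clone would read from something like `/dev/sdb`, which is hypothetical here):

```python
import os
import tempfile

BLOCK_SIZE = 512  # classic sector size; real tools often use larger buffers

def clone_device(source_path, dest_path, block_size=BLOCK_SIZE):
    """Bit-for-bit copy of the source onto the destination, block by block."""
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)

# Stand-in for a real device: a file of raw random bytes (8 "sectors").
workdir = tempfile.mkdtemp()
src_path = os.path.join(workdir, "source.raw")
dst_path = os.path.join(workdir, "clone.raw")
with open(src_path, "wb") as f:
    f.write(os.urandom(8 * BLOCK_SIZE))

clone_device(src_path, dst_path)

# The clone is byte-identical to the source, structure and all.
with open(src_path, "rb") as a, open(dst_path, "rb") as b:
    print(a.read() == b.read())  # True
```

Because every sector is transferred, including unallocated ones, the time cost grows with the full capacity of the device, not just the size of the visible files.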
Imaging:
Imaging is the process of creating a forensic image of a storage device: a replica of every bit of data on the source device, including allocated space, unallocated space, and slack space.
The image is then used for analysis and investigation while the original evidence is left untouched. Unlike clones, which can serve as working copies, forensic images are intended purely for analysis and investigation, not for regular use.
Characteristics of Imaging:
- Integrity: Imaging ensures the integrity and authenticity of the evidence.
- Flexibility: A forensic image can be mounted as a virtual drive, allowing analysis of the data without affecting the original evidence.
- Metadata: Imaging captures the metadata associated with the data, supporting forensic analysis.
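Integrity is typically enforced by hashing the data as the image is acquired and re-hashing it before each use, so any alteration is detectable. The sketch below assumes SHA-256; the function names and file paths are illustrative, not the API of any specific forensic tool:

```python
import hashlib
import os
import tempfile

def create_image(source_path, image_path, block_size=1024 * 1024):
    """Write a raw image of the source and return its SHA-256 acquisition hash."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while chunk := src.read(block_size):
            digest.update(chunk)
            img.write(chunk)
    return digest.hexdigest()

def verify_image(image_path, expected_hash, block_size=1024 * 1024):
    """Re-hash the image and compare it with the recorded acquisition hash."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as img:
        while chunk := img.read(block_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_hash

# A file of random bytes stands in for the seized device.
workdir = tempfile.mkdtemp()
device = os.path.join(workdir, "device.raw")
image = os.path.join(workdir, "device.img")
with open(device, "wb") as f:
    f.write(os.urandom(8192))

acquisition_hash = create_image(device, image)
print(verify_image(image, acquisition_hash))  # True: image untouched

# Flipping a single byte in the image breaks verification.
with open(image, "r+b") as f:
    first = f.read(1)
    f.seek(0)
    f.write(bytes([first[0] ^ 0xFF]))
print(verify_image(image, acquisition_hash))  # False: tampering detected
```

Recording the acquisition hash alongside the image is what lets examiners demonstrate in court that the analysed copy matches the seized device.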
Key Differences
- Purpose: Copying suits everyday use but is inadequate for forensic investigations requiring data integrity. Cloning and imaging are designed for forensic preservation.
- Depth of replication: Cloning and imaging capture the entire storage device, including hidden, unallocated, and deleted data, whereas copying may miss crucial forensic data.
- Data integrity: Imaging and cloning preserve the integrity of the original evidence, a critical requirement for legal and forensic use.
- Forensic soundness: Imaging is considered the gold standard in digital forensics due to its comprehensive and non-invasive nature.
- Output: Cloning generally writes from one hard disk directly onto another, whereas imaging creates a file, often compressed, containing a snapshot of the entire hard drive or of specific partitions.
Conclusion
Copying, cloning, and imaging all involve duplicating data or storage devices, but they differ significantly, especially in digital forensics. For forensic investigations, imaging is the preferred approach because it faithfully preserves the state of the evidence for analysis and legal use. It is therefore essential for forensic investigators to understand these differences in order to obtain authentic, uncontaminated digital evidence for their investigations and legal arguments.

Introduction
"In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”"
A child’s confidante used to be a diary, a buddy, or perhaps a responsible adult. These days, that confidante is a chatbot: invisible, industrious, and constantly online. ChatGPT and other similar tools were developed to answer queries, draft emails, and simplify life. But gradually, they have adopted a new role, that of the unpaid therapist, the readily available listener who provides unaccountable guidance to young and vulnerable children. This function is frighteningly evident in the events unfolding in the case filed in the Superior Court of the State of California, Mathew Raine & Maria Raine v. OPEN AI, INC. & ors. The lawsuit, obtained by the BBC, charges OpenAI with wrongful death and negligence. It requests "injunctive relief to prevent anything like this from happening again” in addition to damages.
This is the heartbreaking tale of a boy, not yet seventeen, who turned to an algorithm rather than family and friends for companionship, and who had his hopelessness affirmed instead of being guided towards professional help. OpenAI’s legal future may well be decided in a San Francisco courtroom, but the ethical issues the case raises already outweigh any verdict.
When Machines Mistake Empathy for Encouragement
The lawsuit claims that Adam initially used ChatGPT for academic purposes but, over time, cast it in the role of a friend. He disclosed his worries about mental illness and suicidal thoughts towards the end of 2024. In an effort to “empathise”, the chatbot told him that many people find “solace” in imagining an escape hatch, thereby normalising suicidal thoughts rather than guiding him towards assistance. ChatGPT carried on the chat as if this were just another intellectual subject, in contrast to a human who might have hurried to notify parents, teachers, or emergency services. The lawsuit traces the various conversations in which the teenager uploaded photographs of himself showing signs of self-harm, and adds that the programme “recognised a medical emergency but continued to engage anyway”.
This is not an isolated case. Another report, from March 2023, narrates how a Belgian man allegedly died by suicide after speaking with an AI chatbot. The Belgian outlet La Libre reported that the man, Pierre, spent six weeks discussing climate change with the AI bot ELIZA; after the discussion became “increasingly confusing and harmful,” he took his own life. According to a guest essay published in The New York Times, a Common Sense Media survey released last month found that 72% of American youth reported using AI chatbots as friends. Almost one-eighth had turned to them for “emotional or mental health support,” which translates to 5.2 million teenagers in the US. And nearly 25% of students who used Replika, an AI chatbot created for companionship, said they used it for mental health care, according to a recent study by Stanford researchers.
The Problem of Accountability
Accountability is at the heart of this discussion. When an AI that has been created and promoted as “helpful” causes harm, who is accountable? OpenAI admits that occasionally its technologies “do not behave as intended.” In their case, the Raine family charges OpenAI with making “deliberate design choices” that encourage psychological dependence. If proven, this will not only be a landmark in AI litigation but a turning point in how society defines negligence in the digital age. Young people remain most at risk because they trust the chatbot as a personal confidante, unaware that it cannot distinguish between seriousness and triviality, or between empathy and enablement.
A Prophecy: The De-Influencing of Young Minds
The prophecy of our time is stark: if kids are not taught to view AI as a tool rather than a friend, we risk producing a generation too readily influenced by unaccountable voices. We must now teach young people to resist over-reliance on algorithms for concerns of the heart and mind, just as society once taught them to question commercials, to spot propaganda, and to resist peer pressure.
Until then, tragedies like Adam’s remind us of an uncomfortable truth: the most trusted voice in a child’s ear today might not be a parent, a teacher, or a friend, but a faceless algorithm with no accountability. And that is a world we must urgently learn to change.
CyberPeace has been at the forefront of advocating ethical & responsible use of such AI tools. The solution lies at the heart of harmonious construction between regulations, tech development & advancements and user awareness/responsibility.
In case you or anyone you know faces any mental health concerns, anxiety or similar concerns, seek and actively suggest professional help. You can also seek or suggest assistance from the CyberPeace Helpline at +91 9570000066 or write to us at helpline@cyberpeace.net
References
- https://www.bbc.com/news/articles/cgerwp7rdlvo
- https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html
- https://www.nytimes.com/2025/08/25/opinion/teen-mental-health-chatbots.html