#FactCheck: Beware of Fake Emails Distributing Fraudulent e-PAN Cards
Executive Summary:
We have identified a post addressing a scam email that falsely claims to offer a download link for an e-PAN Card. This deceptive email is designed to mislead recipients into disclosing sensitive financial information by impersonating official communication from the Income Tax Department. Our report aims to raise awareness about this fraudulent scheme and emphasize the importance of safeguarding personal data against such cyber threats.

Claim:
Scammers are sending fake emails, asking people to download their e-PAN cards. These emails pretend to be from government authorities like the Income Tax Department and contain harmful links that can steal personal information or infect devices with malware.
Fact Check:
Through our research, we have found that scammers are sending fake emails, posing as the Income Tax Department, to trick users into downloading e-PAN cards from unofficial links. These emails contain malicious links that can lead to phishing attacks or malware infections. Genuine e-PAN services are only available through official platforms such as the Income Tax Department's website (www.incometaxindia.gov.in) and the NSDL/UTIITSL portals. Despite repeated warnings, many individuals still fall victim to such scams. To combat this, the Income Tax Department has a dedicated page for reporting phishing attempts: Report Phishing - Income Tax India. It is crucial for users to stay cautious, verify email authenticity, and avoid clicking on suspicious links to protect their personal information.
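As a practical illustration of the advice to verify links before clicking, the minimal Python sketch below checks whether a URL found in an email points to an official portal or one of its subdomains. The allow-list is illustrative only: it contains the Income Tax Department domain named above plus placeholder entries for the NSDL/UTIITSL portals, and the current official domains should be confirmed independently before relying on such a check.

```python
from urllib.parse import urlparse

# Illustrative allow-list: only incometaxindia.gov.in is named in this article;
# the other entries are placeholders and must be replaced with verified domains.
OFFICIAL_DOMAINS = {
    "incometaxindia.gov.in",
    "example-nsdl-portal.invalid",     # placeholder for the official NSDL portal
    "example-utiitsl-portal.invalid",  # placeholder for the official UTIITSL portal
}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is an allow-listed domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == domain or host.endswith("." + domain) for domain in OFFICIAL_DOMAINS)

print(looks_official("https://www.incometaxindia.gov.in/Pages/default.aspx"))  # True
print(looks_official("http://epan-card-download.example.com/get-pan"))         # False
```

A check like this catches look-alike domains, which scammers commonly use; it does not replace the broader advice to access e-PAN services only by typing the official address directly into the browser.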

Conclusion:
The emails currently in circulation claiming to provide e-PAN card downloads are fraudulent and should not be trusted. These deceptive messages often impersonate government authorities and contain malicious links that can result in identity theft or financial fraud. Clicking on such links may compromise sensitive personal information, putting individuals at serious risk. To ensure security, users are strongly advised to verify any such communication directly through official government websites and avoid engaging with unverified sources. Additionally, any phishing attempts should be reported to the Income Tax Department and also to the National Cyber Crime Reporting Portal to help prevent the spread of such scams. Staying vigilant and exercising caution when handling unsolicited emails is crucial in safeguarding personal and financial data.
- Claim: Fake emails claim to offer e-PAN card downloads.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
In the era of the internet, where everything is accessible at your fingertips, a disturbing trend is on the rise: over 90% of websites containing child abuse material now host self-generated images, obtained from victims as young as three years old. This shocking revelation comes from the Internet Watch Foundation (IWF), whose findings highlight the increasing exploitation of children under the age of 10, who are coerced, blackmailed, tricked, or groomed into participating in explicit acts online. The IWF's data for 2023 reveals a record-breaking 275,655 websites hosting child sexual abuse material, with 92% of them containing such "self-generated" content.
Disturbing Tactics Shift
The numbers highlight a distressing truth. In 2023, 275,655 websites were found to host child sexual abuse content, a new record and an alarming 8% increase over the previous year. More concerning still, 92% of these websites contained "self-generated" photos or videos, created by or coerced from the victims themselves. Alarmingly, 107,615 of these websites had content involving children under the age of ten, with 2,500 explicitly featuring children aged three to six.
Profound worries
There is deep concern about the rising incidence of images obtained through extortion or coercion from elementary-school-aged children. This footage is being distributed on highly graphic, specialised websites devoted to child sexual abuse. The process often begins in a child's bedroom with nothing more than a camera and extends to the exchange, dissemination, and collection of explicit content by determined offenders who engage in sexual exploitation. These criminals are ruthless. The material is circulated via email, instant messaging, chat rooms, and social media platforms such as WhatsApp, Telegram, and Skype.
Live streaming of such material involves real-time broadcasts, which is a further major concern. Because the internet is borderless, access to such material spans international, national, and regional boundaries, making it harder to identify predators and convict them. As the technology has grown, it has become easier for predators to obtain "self-generated" images or videos.
Financial Exploitation in the Shadows: The Alarming Rise of Sextortion
Global studies reveal an extremely shocking pattern known as "sextortion", in which adolescents are blackmailed, sometimes for money, under the threat of having intimate images exposed to their families, friends, or on social media. In many such cases, the offender's goal is to obtain sexual gratification.
The financial variation of sextortion takes a darker turn, with criminals luring children into creating sexual content and then extorting them for money. They threaten to reveal the incriminating content unless their demands, frequently made in the form of gift cards, mobile payment services, wire transfers, or cryptocurrencies, are met. Here the predators are primarily driven by monetary gain, but the psychological impact on their victims is just as devastating. In one shocking case, an 18-year-old was jailed for blackmailing a young girl over Snapchat, using indecent images and videos to threaten her; the offender pleaded guilty.
The Question of Security
The introduction of end-to-end encryption in platforms like Facebook Messenger has triggered concerns within law enforcement agencies. While enhancing user privacy, critics argue that it may inadvertently facilitate criminal activities, particularly the exploitation of vulnerable individuals. The alignment with other encrypted services is seen as a potential challenge, making it harder to detect and investigate crimes, thus raising questions about finding a balance between privacy and public safety.
A major point of contention in the online safety of children is how platforms defend the implementation of encryption, asserting that it enhances the security of individuals, particularly children, by safeguarding them from hackers, scammers, and criminals. They also underscore their commitment to safety protocols, such as prohibiting adults from messaging teenagers who do not follow them and employing technology to detect and counteract abusive conduct.
These distressing revelations highlight the urgent need for comprehensive action to protect society's most vulnerable members, children and adolescents, throughout this era of digital progress. As experts and politicians grapple with these troubling trends, the need for action to safeguard kids online becomes increasingly urgent.
Role of Technology in Combating Online Exploitation
While the rise of technology has been accompanied by a rise in online child abuse, technology also serves as a powerful tool to combat it. Advanced algorithms and artificial intelligence tools can be used to detect and flag 'self-generated' images before they spread. Additionally, tech companies can collaborate to develop effective solutions to safeguard every child and individual.
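One widely used detection technique of this kind is perceptual hashing, where an image is reduced to a compact fingerprint that survives resizing or re-encoding and can be compared against a database of hashes of known abuse material, such as those maintained by child-protection bodies like the IWF. The sketch below, using the open-source imagehash library, is a minimal illustration built on a hypothetical hash list; real deployments rely on vetted hash databases and far more robust pipelines.

```python
from PIL import Image
import imagehash  # pip install imagehash pillow

# Hypothetical set of perceptual hashes of known material (illustrative values only);
# real systems query vetted hash databases maintained by child-protection organisations.
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}

def matches_known_material(image_path: str, max_distance: int = 5) -> bool:
    """Flag an image if its perceptual hash is within a small Hamming distance
    of any known hash (tolerant of resizing and re-compression)."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_material("uploaded_image.jpg"))
```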
Role of law enforcement agencies
Child abuse knows no borders, and addressing it requires legal intervention at every level. National, regional, and international law enforcement agencies investigate online child sexual exploitation and abuse and cooperate in the investigation of these cybercrimes. Investigating agencies need mutual legal assistance and extradition arrangements, along with bilateral and multilateral conventions, to identify, investigate, and prosecute perpetrators of online child sexual exploitation and abuse. Cooperation between private and government agencies is equally important; sharing databases of perpetrators can help agencies apprehend them.
How do you safeguard your children?
In the present scenario, protecting and safeguarding children against online child abuse has become crucial. Here are some practical steps that can help in safeguarding your loved ones.
- Open communication: Establish open communication with your children, make them feel comfortable, share experiences with them, help them understand what safe internet use looks like, and educate them about the possible risks without generating fear.
- Teach Online Safety: Educate your children about the importance of privacy and the risks of sharing personal information with strangers on any social media platform. Teach them to create strong, unique passwords, and make them aware not to click on suspicious links or download files from unknown sources.
- Set boundaries: As a parent, set rules and guidelines for internet usage, set time limits, and monitor their online activities without infringing on their privacy. Keep an eye on their social media platforms, discuss inappropriate behaviour or online harassment, and take an interest in the websites and apps your children use while teaching them online safety measures.
Conclusion
The predominance of 'self-generated' photos in online child abuse content necessitates immediate attention and coordinated action from governments, technology corporations, and society as a whole. As we navigate the complicated environment of the digital age, we must remain watchful, adapt our techniques, and collaborate to defend the innocence of the most vulnerable among us. To combat online child exploitation, we must all work together to build a safer, more secure online environment for children around the world.
References
- https://www.the420.in/over-90-of-websites-containing-child-abuse-feature-self-generated-images-warns-iwf/
- https://news.sky.com/story/self-generated-images-found-on-92-of-websites-containing-child-sexual-abuse-with-victims-as-young-as-three-13049628
- https://www.news4hackers.com/iwf-warns-that-more-than-90-of-websites-contain-self-generated-child-abuse-images/

Introduction
With the increasing frequency and severity of cyber-attacks on critical sectors, the Government of India has formulated the National Cyber Security Reference Framework (NCRF) 2023, aimed at addressing cybersecurity concerns in India. In today’s digital age, the security of critical sectors is paramount due to the ever-evolving landscape of cyber threats. Cybersecurity measures are crucial for protecting essential sectors such as banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises. The NCRF is an essential step towards safeguarding these critical sectors and preparing them for the challenges they face from cyber threats. Protecting critical sectors is an urgent priority that requires the development of robust cybersecurity practices and the implementation of effective measures to mitigate risks.
Overview of the National Cyber Security Policy 2013
The National Cyber Security Policy of 2013 was the first attempt to address cybersecurity concerns in India. However, it had several drawbacks that limited its effectiveness in the contemporary digital age. The policy’s outdated guidelines, insufficient prevention and response measures, and lack of legal implications hindered its ability to protect critical sectors adequately. Moreover, the policy could not keep up with the rapidly evolving cyber threat landscape and emerging technologies, leaving organisations vulnerable to new and sophisticated attacks and without updated guidelines to combat them.
As a result, an updated and more comprehensive policy, the National Cyber Security Reference Framework 2023, was necessary to address emerging challenges and provide strategic guidance for protecting critical sectors against cyber threats.

Highlights of NCRF 2023
Strategic Guidance: NCRF 2023 has been developed to provide organisations with strategic guidance to address their cybersecurity concerns in a structured manner.
Common but Differentiated Responsibility (CBDR): The policy is based on a CBDR approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
Update of National Cyber Security Policy 2013: NCRF supersedes the National Cyber Security Policy 2013, which was due for an update to align with the evolving cyber threat landscape and emerging challenges.
Different from CERT-In Directives: NCRF is distinct from the directives issued by the Indian Computer Emergency Response Team (CERT-In) published in April 2023. It provides a comprehensive framework rather than specific directives for reporting cyber incidents.
Combination of robust strategies: The National Cyber Security Reference Framework 2023 provides strategic guidance, a revised structure, and a proactive approach to cybersecurity, enabling organisations to better tackle the growing cyberattacks in India and safeguard critical sectors.

Rising Incidents of Malware Attacks on Critical Sectors
In recent years, there has been a significant increase in malware attacks targeting critical sectors. These sectors, including banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises, play a crucial role in the functioning of economies and the well-being of societies. The escalating incidents of malware attacks on these sectors have raised concerns about the security and resilience of critical infrastructure.
Banking: The banking sector handles sensitive financial data and is a prime target for cybercriminals due to the potential for financial fraud and theft.
Energy: The energy sector, including power grids and oil companies, is critical for the functioning of economies, and disruptions can have severe consequences for national security and public safety.
Healthcare: The healthcare sector holds valuable patient data, and cyber-attacks can compromise patient privacy and disrupt healthcare services. Malware attacks on healthcare organisations can result in the theft of patient records, ransomware incidents that cripple healthcare operations, and compromise medical devices.
Telecommunications: Telecommunications infrastructure is vital for reliable communication, and attacks targeting this sector can lead to communication disruptions and compromise the privacy of transmitted data. The interconnectedness of telecommunications networks globally presents opportunities for cybercriminals to launch large-scale attacks, such as Distributed Denial-of-Service (DDoS) attacks.
Transportation: Malware attacks on transportation systems can lead to service disruptions, compromise control systems, and pose safety risks.
Strategic Enterprises: Strategic enterprises, including defence, aerospace, intelligence agencies, and other sectors vital to national security, face sophisticated malware attacks with potentially severe consequences. Cyber adversaries target these enterprises to gain unauthorised access to classified information, compromise critical infrastructure, or sabotage national security operations.
Government Enterprises: Government organisations hold a vast amount of sensitive data and provide essential services to citizens, making them targets for data breaches and attacks that can disrupt critical services.

Conclusion
The sectors of banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises face unique vulnerabilities and challenges in the face of cyber-attacks. Recognising the significance of safeguarding these sectors underlines the need for proactive cybersecurity measures and collaborative efforts between public and private entities. Strengthening regulatory frameworks, sharing threat intelligence, and adopting best practices are essential to ensure our critical infrastructure’s resilience and security. Through these concerted efforts, we can create a safer digital environment for these sectors, protecting vital services and preserving the integrity of our economy and society. The rising incidents of malware attacks on critical sectors emphasise the urgent need for an updated cybersecurity policy, enhanced cybersecurity measures, collaboration between public and private entities, and the development of proactive defence strategies. The National Cyber Security Reference Framework 2023 will help address the evolving cyber threat landscape, protect critical sectors, fill the gaps in sector-specific best practices, promote collaboration, establish a regulatory framework, and address the challenges posed by emerging technologies. By providing strategic guidance, this framework will enhance organisations’ cybersecurity posture and ensure the protection of critical infrastructure in an increasingly digitised world.

The World Economic Forum’s Global Risks Report, based on a September 2023 survey, ranked AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking, spurring an explosion of web content that mimics factual articles but instead disseminates false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘Copilot’ were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
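Detection efforts typically rely on machine-learning classifiers trained to separate human-written from machine-generated text. The sketch below shows what such a check might look like using the Hugging Face transformers pipeline; the model name is a placeholder rather than a real checkpoint, and no detector of this kind is fully reliable.

```python
from transformers import pipeline  # pip install transformers

# "some-org/ai-text-detector" is a placeholder; substitute any text-classification
# model trained to distinguish human-written from AI-generated text.
detector = pipeline("text-classification", model="some-org/ai-text-detector")

article = ("Breaking: officials confirm the election has been postponed "
           "indefinitely due to an unprecedented solar storm.")
result = detector(article)[0]

# The classifier returns a label and a confidence score; such scores should
# inform, not replace, human fact-checking.
print(result["label"], round(result["score"], 3))
```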
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its positive potential, because there is widespread, and justified, cynicism about it. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to trial new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK’s Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that regulatory sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
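In practice, a sandbox trial of a misinformation-detection tool would include an agreed evaluation protocol. The sketch below is a simple, assumption-laden harness rather than any prescribed methodology: it shows how regulators and developers might jointly score a candidate detector on a labelled sample set, tracking precision, recall, and the false-positive rate that matters for over-blocking legitimate content.

```python
from typing import Callable, Dict, Iterable, Tuple

def evaluate_detector(detector: Callable[[str], bool],
                      samples: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Score a candidate misinformation detector on labelled sandbox data.
    Each sample is (text, is_misinformation)."""
    tp = fp = fn = tn = 0
    for text, is_misinfo in samples:
        flagged = detector(text)
        if flagged and is_misinfo:
            tp += 1
        elif flagged and not is_misinfo:
            fp += 1
        elif not flagged and is_misinfo:
            fn += 1
        else:
            tn += 1
    return {
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }

# Toy usage: a naive keyword detector scored on a tiny hypothetical labelled set.
naive_detector = lambda text: "miracle cure" in text.lower()
sandbox_samples = [
    ("Miracle cure ends all disease overnight, experts silenced!", True),
    ("The election commission published official turnout figures today.", False),
]
print(evaluate_detector(naive_detector, sandbox_samples))
```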
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot and refine solutions for regulating the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in how AI-driven misinformation is tackled.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Periodic reviews and updates of the sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions