#FactCheck: Beware of Fake Emails Distributing Fraudulent e-PAN Cards
Executive Summary:
We have identified a post addressing a scam email that falsely claims to offer a download link for an e-PAN Card. This deceptive email is designed to mislead recipients into disclosing sensitive financial information by impersonating official communication from the Income Tax Department. Our report aims to raise awareness about this fraudulent scheme and emphasize the importance of safeguarding personal data against such cyber threats.

Claim:
Scammers are sending fake emails, asking people to download their e-PAN cards. These emails pretend to be from government authorities like the Income Tax Department and contain harmful links that can steal personal information or infect devices with malware.
Fact Check:
Through our research, we have found that scammers are sending fake emails, posing as the Income Tax Department, to trick users into downloading e-PAN cards from unofficial links. These emails contain malicious links that can lead to phishing attacks or malware infections. Genuine e-PAN services are only available through official platforms such as the Income Tax Department's website (www.incometaxindia.gov.in) and the NSDL/UTIITSL portals. Despite repeated warnings, many individuals still fall victim to such scams. To combat this, the Income Tax Department has a dedicated page for reporting phishing attempts: Report Phishing - Income Tax India. It is crucial for users to stay cautious, verify email authenticity, and avoid clicking on suspicious links to protect their personal information.
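As a practical illustration of link verification, the short Python sketch below checks whether a URL in an email points to an official e-PAN domain or a subdomain of one. The allowlist shown is an assumption for illustration only; always confirm the current official domains on the Income Tax Department's website before relying on any such check.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official e-PAN domains (an assumption for this
# sketch; verify the current list on the Income Tax Department's website).
OFFICIAL_DOMAINS = {
    "incometaxindia.gov.in",
    "incometax.gov.in",
    "onlineservices.nsdl.com",
    "utiitsl.com",
}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's host is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A genuine government link passes; a typical phishing link fails.
print(is_official_link("https://www.incometaxindia.gov.in/Pages/default.aspx"))  # True
print(is_official_link("http://incometax-epan-download.xyz/claim"))              # False
```

A check like this only verifies the destination domain; it cannot judge the content of a page, so it complements rather than replaces cautious reading of unsolicited emails.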

Conclusion:
The emails currently in circulation claiming to provide e-PAN card downloads are fraudulent and should not be trusted. These deceptive messages often impersonate government authorities and contain malicious links that can result in identity theft or financial fraud. Clicking on such links may compromise sensitive personal information, putting individuals at serious risk. To ensure security, users are strongly advised to verify any such communication directly through official government websites and avoid engaging with unverified sources. Additionally, any phishing attempts should be reported to the Income Tax Department and also to the National Cyber Crime Reporting Portal to help prevent the spread of such scams. Staying vigilant and exercising caution when handling unsolicited emails is crucial in safeguarding personal and financial data.
- Claim: Fake emails claim to offer e-PAN card downloads.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related misinformation. Programs that teach people how to use social media responsibly and check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading articles, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as "gamification," could be a new and engaging way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe environment before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through regional languages and diverse cultural contexts. AI-powered voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, where players face realistic decisions about whether to ignore, fact-check, or forward messages that are going viral, could be especially effective in India (a minimal sketch of such a game appears after this list).
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks for spotting fake news raise awareness over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives. Partnerships with telecom providers could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
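To make the WhatsApp-style game idea above concrete, here is a minimal, hypothetical Python sketch. The messages, choices, and scoring rules are illustrative assumptions, not a real product design or a real fact-check dataset.

```python
import random

# A minimal sketch of a WhatsApp-style misinformation game: the player sees a
# viral-looking message and must choose to ignore, fact-check, or forward it.
# Messages and scores below are hypothetical placeholders.
MESSAGES = [
    {"text": "Forward to 10 groups: banks are closed all next week!", "is_false": True},
    {"text": "The election commission has published the official poll schedule.", "is_false": False},
    {"text": "Drinking hot water cures viral infections, doctors confirm!", "is_false": True},
]

def play(rounds: int = 3) -> None:
    score = 0
    for msg in random.sample(MESSAGES, k=min(rounds, len(MESSAGES))):
        print(f"\nNew message: {msg['text']}")
        choice = input("(i)gnore, (c)heck facts, or (f)orward? ").strip().lower()
        if choice == "c":
            score += 2  # verifying before acting is always rewarded
            print("You verified the claim before acting. +2")
        elif choice == "f" and msg["is_false"]:
            score -= 3  # forwarding falsehoods carries the biggest penalty
            print("You spread misinformation! -3")
        elif choice == "i" and not msg["is_false"]:
            print("You ignored accurate information. +0")
        else:
            score += 1
            print("Reasonable call. +1")
    print(f"\nFinal score: {score}")

if __name__ == "__main__":
    play()
```

The asymmetric scoring reflects the pedagogical point in the list above: forwarding a falsehood costs more than any single correct action earns, nudging players toward the habit of checking first.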
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could lead participants through situations tailored to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on the language and region.
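One simple way to realise such adaptivity is to track a rolling estimate of the player's accuracy and serve items near the edge of their ability. The Python sketch below illustrates that general idea under assumed difficulty tiers and an assumed update rule; it is not a description of any deployed system.

```python
# A minimal sketch of adaptive difficulty: the bot keeps a rolling accuracy
# estimate and serves harder misinformation examples as the player improves.
# The tiers, thresholds, and learning rate are illustrative assumptions.
class AdaptiveQuizBot:
    def __init__(self, learning_rate: float = 0.3):
        self.skill = 0.5          # estimated probability the player answers correctly
        self.lr = learning_rate   # how fast the estimate reacts to new answers

    def next_difficulty(self) -> str:
        # Serve items near the edge of the player's current ability.
        if self.skill < 0.4:
            return "easy"       # obvious hoaxes, sensational headlines
        if self.skill < 0.7:
            return "medium"     # misleading statistics, out-of-context quotes
        return "hard"           # subtle framing, manipulated images

    def record_answer(self, correct: bool) -> None:
        # Exponential moving average of recent performance.
        self.skill = (1 - self.lr) * self.skill + self.lr * (1.0 if correct else 0.0)

# Example: three correct answers in a row push the player toward harder items.
bot = AdaptiveQuizBot()
for _ in range(3):
    bot.record_answer(correct=True)
print(bot.next_difficulty())  # "hard" once the skill estimate rises above 0.7
```

A production system would replace the hand-set tiers with NLP-based difficulty scoring of real misinformation examples, but the feedback loop would look much the same.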
Possible Opportunities
Augmented reality (AR) misinformation scavenger hunts, interactive misinformation-awareness events, and educational fact-checking tournaments are all examples of game formats that can help fight misinformation. India can help millions, especially young people, think critically and combat the spread of false information by making media literacy fun and interesting. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study. AI-powered bots could simulate real-time cases of misinformation and give instant feedback, helping players learn faster.
Problems and Moral Consequences
While gamification is an engaging way to fight false information, it also comes with problems that must be considered:
- Ethical Concerns: Games that imitate how fake news spreads must ensure players do not inadvertently learn how to spread false information themselves.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts poses significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviours, while accounting for cultural and socio-economic contexts.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation, can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.
Introduction
YouTube is testing a new feature called ‘Notes,’ which allows users to add community-sourced context to videos. A note can clarify, for example, that a video is a parody or that it misrepresents information. The feature builds on existing tools that surface helpful content alongside videos. While under testing, it will be available to a limited number of eligible contributors, who will be invited to write notes on videos. Notes will appear publicly under a video if they are found to be broadly helpful, and viewers will be able to rate them as ‘Helpful,’ ‘Somewhat helpful,’ or ‘Unhelpful.’ Based on these ratings, YouTube will determine which notes are published. The feature will first be rolled out on mobile devices in the U.S. in English. The Google-owned platform will look at ways to improve the feature over time, including whether it makes sense to expand it to other markets.
YouTube To Roll Out The New ‘Notes’ Feature
YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube clarified in its blog that, to start with, the pilot will be available on mobile devices in the U.S. and in English. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
YouTube further stated in its blog that a limited number of eligible contributors will be invited via email or Creator Studio notifications to write notes, so that they can test the feature and add value to the system before the organisation decides on next steps and whether or not to expand the feature. Eligibility criteria include having an active YouTube channel in good standing with YouTube’s Community Guidelines.
Viewers in the U.S. will start seeing notes on videos in the coming weeks and months. In this initial pilot, third-party evaluators will rate the helpfulness of notes, which will help train the platform’s systems. As the pilot moves forward, contributors themselves will rate notes as well.
Notes will appear publicly under a video if they are found to be broadly helpful. People will be asked whether they think a note is helpful, somewhat helpful, or unhelpful, and why. For example, if a note is marked ‘Helpful,’ the evaluator can specify whether that is because it cites high-quality sources or is written clearly and neutrally. A bridging-based algorithm will consider these ratings and determine which notes are published. YouTube says it is excited to explore new ways to make context-setting even more relevant, dynamic, and unique to the videos viewers are watching, at scale, across the huge variety of content on the platform.
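YouTube has not published the details of its bridging-based algorithm, but the general idea behind bridging-based ranking is to surface notes that raters with typically differing perspectives agree are helpful. The toy Python sketch below illustrates that idea under assumed group labels, rating values, and thresholds; it is not YouTube's actual implementation.

```python
from collections import defaultdict

# A toy illustration of bridging-based rating (not YouTube's real algorithm):
# a note is surfaced only when raters from groups that usually disagree both
# tend to find it helpful. Group labels and the threshold are assumptions.
RATING_VALUE = {"helpful": 1.0, "somewhat": 0.5, "unhelpful": 0.0}

def should_publish(ratings: list[tuple[str, str]], threshold: float = 0.6) -> bool:
    """ratings: (rater_group, rating) pairs; publish only on cross-group agreement."""
    by_group = defaultdict(list)
    for group, rating in ratings:
        by_group[group].append(RATING_VALUE[rating])
    if len(by_group) < 2:
        return False  # no evidence of agreement across perspectives
    # Every group's average must clear the threshold, not just the overall mean.
    return all(sum(vals) / len(vals) >= threshold for vals in by_group.values())

ratings = [("group_a", "helpful"), ("group_a", "somewhat"),
           ("group_b", "helpful"), ("group_b", "helpful")]
print(should_publish(ratings))  # True: both groups rate the note helpful on average
```

The key design choice, requiring each group's average to clear the bar rather than pooling all ratings, is what makes the approach "bridging": a note loved by one camp and rejected by another is not published.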
CyberPeace Analysis: How Can Notes Help Counter Misinformation
The potential of the proposed ‘Notes’ feature to counter misinformation on YouTube is significant. Enabling contributors to add notes to videos can offer relevant and accurate context that clarifies misleading or false information. These notes can help viewers better comprehend content and detect misinformation. Allowing users to rate notes as helpful, somewhat helpful, or unhelpful adds a further layer of transparency and public participation in assessing the accuracy of content.
As YouTube intends to gather feedback from its various stakeholders to improve the feature, one can look forward to improved policy and practice over time: the feedback mechanism will allow for continuous refinement of the feature, ensuring it effectively addresses misinformation. The platform employs algorithms to identify helpful notes that resonate with a broad audience across different perspectives, which helps surface accurate information and combat misinformation.
Furthermore, alongside the Notes feature, YouTube should explore and implement prebunking and debunking strategies on the platform by promoting educational content and empowering users to discern fact from misleading information.
Conclusion
The new feature, currently in the testing phase, aims to counter misinformation by providing context, enabling user feedback, leveraging algorithms, promoting transparency, and continuously improving information quality. Considering the platform's diverse audience and the high volume of daily content consumption, it is important for both platform operators and users to engage with factual, verifiable information. The fallout of misinformation on such a popular platform can be immense, so any mechanism or feature that can help counter it must be developed to its full potential. Apart from the new Notes feature, YouTube has also implemented measures in the past to counter misinformation, such as providing authenticated sources to counter election misinformation during the recent 2024 elections in India. These efforts are a welcome contribution to our shared responsibility as netizens to create a trustworthy, factual, and truly informational digital ecosystem.
References:
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://www.thehindu.com/sci-tech/technology/internet/youtube-tests-feature-that-will-let-users-add-context-to-videos/article68302933.ece

Introduction
In the digital realm of social media, Meta Platforms, the company behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves into critical issues surrounding child safety on these widely used platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It is a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurface, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough action against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company