#FactCheck: Viral Fake Post Claims Central Government Offers Unemployment Allowance Under ‘PM Berojgari Bhatta Yojana’
Executive Summary:
A viral thumbnail and numerous social media posts claim that the Government of India is giving unemployed youth ₹4,500 a month under a programme labelled “PM Berojgari Bhatta Yojana”. The claim has been shared across multiple online platforms and has given many job-seekers hope. However, when we independently researched it, we found no verified source for the scheme and no government notification.

Claim:
The viral post states: “The Central Government is conducting a scheme called PM Berojgari Bhatta Yojana in which any unemployed youth would be given ₹4,500 each month. Eligible candidates can apply online and get benefits.” Several videos and posts share suspicious, unverified website links for registration, attempting to lure the general public into sharing their personal information.

Fact Check:
In the course of our verification, we searched the official government portals, including the Ministry of Labour and Employment, PMO India, MyScheme, MyGov, and the Integrated Government Online Directory, which lists all legitimate schemes, programmes, missions, and applications run by the Government of India. None of these sources has published any scheme called ‘PM Berojgari Bhatta Yojana’.

Numerous YouTube channels appear to be monetising this false narrative by exploiting public sentiment and directing users to misleading websites. Such scams typically aim either to harvest personal data or to earn pay-per-click advertising revenue by trading on outrageous claims.
Our research findings were later corroborated by PIB Fact Check, which shared a clarification on social media stating: “No such scheme called ‘PM Berojgari Bhatta Yojana’ is in existence. The claim that has gone viral is fake.”

To provide some perspective: in 2021-22, the Rajasthan government launched a state-level programme under the Mukhyamantri Udyog Sambal Yojana (MUSY) that provided ₹4,500/month to unemployed women and transgender persons, and ₹4,000/month to unemployed men. This was not a Central Government programme, and the current viral claim misrepresents a past, state-level initiative as nationwide policy.

Conclusion:
The claim of a ₹4,500 monthly unemployment benefit under the ‘PM Berojgari Bhatta Yojana’ is incorrect. Neither the Central Government nor any government department has launched such a scheme. Our finding aligns with PIB Fact Check, which classifies this as a case of misinformation. We encourage everyone to be vigilant, to avoid reacting to viral fake news, and to verify claims through official sources before sharing or taking action. Let’s work together to curb misinformation and protect citizens from false hopes and data fraud.
- Claim: A central policy offers jobless individuals ₹4,500 monthly financial relief
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In 2022, Oxfam’s India Inequality Report revealed a worsening digital divide, highlighting that only 38% of households in the country are digitally literate, and that only 31% of the rural population uses the internet, compared to 67% of the urban population. Over time, with increasing global awareness of the importance of digital privacy, the digital divide has translated into a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This further entrenches social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved from its earlier interpretations into a multi-level issue: level I refers to the lack of physical access to technologies, level II to the lack of digital literacy and skills, and the more recently identified level III to the unequal impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic patterns. It forms a subset of the digital divide, which concerns the uneven distribution of access to and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection; as such, it forms part of the broader conversation on digital inequality.
Contrary to popular perception, DPD, though grounded in notions of privacy, does not always track ideas of individualism and collectivism, and may be shaped by both internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh and Germany found that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that although the U.S. has a strong tradition of individualistic rights, reflected in domestic legal frameworks such as the Fourth Amendment, data privacy has not garnered comparable interest from the population. Most individuals consider forgoing the right to privacy a necessary evil for accessing services and schemes and for staying abreast of technological advances. Research shows that 62-63% of Americans believe that data collection by companies and the government has become an inescapable part of modern life; 81% believe they have very little control over what data companies collect, and roughly 81% believe the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is considered an adopted concept emerging from international pressure to regulate rather than an ascribed right; since collectivism and collective decision-making are highly valued there, privacy is positioned as subjective, situational and an idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices targeted at specific groups show that marginalised communities tend to have less data privacy. For example, migrants, labourers, persons with conviction histories and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats, and some are forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy, but extends to those who have, by design or by force, an excess of it. At one extreme, excessive surveillance carried out by both governments and private entities forces immigrants to wait in deportation centres while their cases are pending; at the other, a vast number of undocumented individuals avoid government contact for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also observed among groups with differential knowledge and skills in cybersecurity. In India, for example, data privacy law mandates that information be disclosed on the order of a court or an enforcement agency. Individuals with knowledge of advanced encryption can adopt communication channels whose encryption the provider cannot control, and are consequently able to exercise their right to privacy more effectively, in contrast with individuals who know little about encryption; this implies a security divide as well as an intellectual one. Several options for secure communication exist, such as Pretty Good Privacy (PGP), which enables encrypted emailing, but they are complex and difficult to use, and some, like the Tor Browser, carry negative reputations. Cost is also a major factor propelling DPD, since users who cannot afford devices like Apple’s, which offer privacy by default, are forced to opt for devices with comparatively weak built-in encryption.
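To make the usability point concrete, here is a minimal sketch, assuming the python-gnupg wrapper around a local GnuPG installation, of the steps a sender must get right just to encrypt one email body with PGP. The key file name and message are hypothetical; each step (obtaining the genuine public key, importing it, deciding whether to trust it) is a point where a non-expert can fail silently.

```python
# A minimal sketch of PGP email-body encryption via the python-gnupg
# wrapper. Assumes GnuPG is installed; the key file and message are
# hypothetical placeholders.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Step 1: the sender must obtain the recipient's public key out of band --
# there is no foolproof built-in way to verify that it is genuine.
with open("recipient_pubkey.asc") as f:
    import_result = gpg.import_keys(f.read())

# Step 2: encrypt for the imported key. `always_trust=True` skips the
# web-of-trust check -- convenient, but it silently weakens the guarantee
# that the key really belongs to the intended recipient.
encrypted = gpg.encrypt(
    "Meet at 5 pm.", import_result.fingerprints, always_trust=True
)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armoured ciphertext to paste into an email
else:
    print("Encryption failed:", encrypted.status)
```

Even this stripped-down flow presumes a working GnuPG installation, a correctly exchanged key and an understanding of trust settings, which is precisely the kind of friction that concentrates effective privacy among the technically skilled.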
Children remain the most vulnerable group. During the pandemic, only 24% of Indian households had an internet facility to access e-education, and several reported needing free internet access outside their homes. Such public networks are notoriously lacking in security and privacy: traffic can be monitored by the hotspot operator or by others on the network if proper encryption measures are not in place. Elsewhere, students without access to devices for remote learning had limited alternatives and often had to rely on Chromebooks and the associated Google services. In response, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic, aiming to address the digital divide. However, in 2024, New Mexico reportedly sued Google for allegedly collecting children’s data through the educational products provided to the state’s schools, claiming that it tracked students’ activities on their personal devices outside the classroom. The case illustrates the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is a critical component in bridging the DPD: it equips individuals with skills that help them address privacy violations effectively. Studies show that low-income users remain less confident in their ability to manage their privacy settings than high-income individuals. Emphasis should therefore be placed not only on educating people about technology usage but also on privacy practices, so that they improve their internet skills and take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to secure internet freedoms are available to all. The Project channels the core values of the library profession: intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy and democracy. It has successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security’s attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations adopted a Statement on Privacy in the Library Environment in 2015, specifying that “when libraries and information services provide access to resources, services or technologies that may compromise users’ privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy.” This can serve as an indicative case study for setting up similar protocols in inclusive public institutions in India, such as Anganwadis, local libraries, skill development centres and non-government/non-profit organisations where free education is disseminated. The workshops conducted should address two critical aspects: first, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology); and second, reframing privacy as a right an individual holds rather than a commodity they own.
However, digital literacy should not be relied upon wholly, since it shifts the responsibility for privacy protection onto individuals, who may not be aware of the risks or able to control them. Data literacy also does not address the larger issues of data brokers, consumer profiling, surveillance and the like. Companies should therefore be obligated to provide simplified privacy summaries and to build accessible, easy-to-use technical products and privacy tools. Most notable legislation addresses this problem by mandating notice and consent for the collection of users’ personal data, though enforcement remains slow. The Digital Personal Data Protection Act, 2023 in India aims to address DPD by not only mandating valid consent but also ensuring that privacy notices remain accessible in local languages, given the diversity of the population.
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/

Introduction
Misinformation is no longer a challenge limited to major global platforms or widely spoken languages. In India and many other countries, false information is increasingly disseminated in local and vernacular languages, allowing it to reach communities more directly and intimately. While regional-language content has played a crucial role in expanding access to information, it has also become a powerful vehicle for misinformation spread by bad actors, and such content is often harder to detect and counter. The challenge of local-language misinformation is not merely digital; it is deeply social and cultural, and shaped by specific local contexts.
Why Local-Language Misinformation Is More Impactful
A person’s mother tongue can be a highly effective medium for misinformation because it carries emotional resonance and a sense of authenticity. Information that aligns with an individual’s linguistic and cultural background is often trusted the most. When false narratives are framed using familiar expressions, local references, or community-specific concerns, they are more readily accepted and shared more widely.
Misinformation in a heavily moderated language like English does not usually have the same impact as content in vernacular languages. Vernacular content tends to circulate within closed networks such as family WhatsApp groups, regional Facebook pages, local YouTube channels, and community forums. These spaces are often perceived as safe or trusted, which lowers scepticism and encourages the spread of unverified information.
The Role of Digital Platforms and Algorithms
Although social media platforms have opened up access to regional-language content, moderation mechanisms have not kept pace. Automated content-moderation systems are frequently trained mainly on dominant languages, and therefore fail to detect vernacular speech, slang, dialects, and code-mixing (a gap illustrated in the sketch after the list below).
This results in an enforcement disparity, where misinformation in local languages:
- Often bypasses automated fact-checking tools
- Reaches human moderators more slowly
- Is less likely to be reported or flagged
- Circulates unchecked for longer than anticipated
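The detection gap is easy to demonstrate. Below is a minimal sketch using the off-the-shelf langdetect library, a typical first stage in a moderation pipeline, on three invented messages carrying the same claim. Devanagari Hindi is identified correctly, but the romanised, code-mixed version, which is how much vernacular content is actually typed, typically confuses the detector, so downstream language-specific classifiers never see it.

```python
# A minimal sketch of the language-identification gap that lets code-mixed
# vernacular content slip past moderation pipelines. Uses the langdetect
# library (pip install langdetect); the sample messages are invented.
from langdetect import DetectorFactory, LangDetectException, detect

DetectorFactory.seed = 0  # make results deterministic across runs

messages = [
    "Drinking hot water cures the virus.",           # English
    "गरम पानी पीने से वायरस ठीक हो जाता है।",             # Hindi in Devanagari script
    "Garam paani peene se virus theek ho jata hai",  # romanised, code-mixed Hindi
]

for text in messages:
    try:
        lang = detect(text)
    except LangDetectException:
        lang = "unknown"
    # A pipeline that routes content to language-specific classifiers by this
    # label will misroute (or simply drop) the romanised message.
    print(f"{lang:>8}  <-  {text}")
```

Expanding training data to cover romanised and code-mixed text is one obvious mitigation, but it must be done per language and dialect, which is exactly the capacity problem discussed later in this piece.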
The problem is further magnified by algorithmic amplification. Content that triggers strong emotional reactions, such as fear, anger, pride, or outrage, has a higher chance of being promoted, irrespective of its truthfulness. In regional contexts, such content can very quickly sway public opinion, even within closely knit communities.
Forms of Vernacular Misinformation
Local-language misinformation appears in various forms:
- Health misinformation, such as fake remedies, vaccine myths, and misleading medical advice
- Political misinformation, often tied to regional identity, local grievances, or community narratives
- Disaster rumours, which are very hard to contain and can spread hatred during floods, earthquakes, or other public emergencies
- Economic and financial frauds perpetrated in the local dialect while impersonating authorities or trusted institutions
- Cultural and religious falsehoods that exploit deeply held beliefs
The regional character of such misinformation makes it very difficult to correct, because fact-checks published in other languages may never reach the affected audience.
Community-Level Consequences
The effect of misinformation in local languages goes beyond misleading individuals. It can also:
- Erode trust in public institutions
- Fuel social polarisation and communal strife
- Obstruct public health measures
- Shape grassroots electoral decision-making
- Exploit digitally less literate and economically vulnerable people
In many scenarios, the damage is not instant but cumulative, gradually shifting perceptions and entrenching false worldviews.
Why Countering Vernacular Misinformation Is Difficult
Multiple structural layers make it difficult to respond effectively:
- Variety of Languages: India alone has hundreds of languages and dialects, which are very hard to monitor universally.
- Culturally Embedded Meaning: Local languages often carry meanings deeply rooted in culture, such as sarcasm or historical references, that automated systems cannot interpret correctly.
- Infrequent Reporting: Users may not recognise misinformation, or may be reluctant to flag content shared by trusted members of their community.
- Insufficient Fact-Checking Capacity: Fact-checking organisations often lack the resources to operate effectively across many languages.
Building a Community-Centric Response
Overcoming misinformation in local languages requires a community-driven resilience approach rather than a purely platform-centric one. Key actions include:
- Boosting Digital Literacy: Regional-language awareness campaigns can equip users to question, verify, and pause over content before sharing it.
- Supporting Local Fact-Checkers: Local journalists, educators, and NGOs are the main players in providing context-aware verification.
- Platform Accountability: Technology companies must invest in multilingual moderation, hire local experts, and implement transparent enforcement mechanisms.
- Policy and Governance: Regulatory frameworks should facilitate proactive risk assessment while safeguarding the right to free expression.
- Trusted Local Intermediaries: Community leaders, health workers, teachers, and local organisations can counter misinformation within the networks where they are trusted.
The Way Forward
Misinformation in local languages is not a minor concern; it directly affects the future of digital trust. As the number of users accessing the internet through local-language interfaces continues to grow, the volume and influence of regional content will also increase. If countermeasures do not cover all language groups, misinformation will remain least corrected and most influential at the community level, where it is also hardest to identify and address.
The problem persists wherever the power of language goes unrecognised. Protecting the quality of information in local languages is therefore necessary not only for digital safety but also for social cohesion, democratic participation, and public well-being.
Conclusion
Vernacular content has great power to inform, include and empower; left unmonitored, it has equal power to mislead, divide, and harm. Countering mis- and disinformation in local languages calls for cooperation among platforms, regulators, NGOs, and the communities involved. A trustworthy digital ecosystem has to speak all languages, not only for communication but also for protection.
References
- https://www.mdpi.com/2304-6775/10/2/15
- https://afpr.in/regional-languages-shaping-indias-online-discourse/
- https://medium.com/@pratikgsalvi03/how-indias-misinformation-surge-and-media-credibility-crisis-are-undermining-democracy-public-dc8ad7be8e12
- https://projectshakti.in/
- https://journals.sagepub.com/doi/10.1177/02683962211037693
- https://rsisinternational.org/journals/ijriss/Digital-Library/volume-8-issue-11/505-518.pdf
- https://www.irjmets.com/upload_newfiles/irjmets71200016652/paper_file/irjmets71200016652.pdf
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone, using AI tools that can generate audio in any person's voice and realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). The technology relies on advanced machine-learning algorithms that produce hyper-realistic video and audio by mimicking a person’s face, voice or likeness. The use and progression of deepfake technology hold vast potential, both benign and malicious.
A benign example: in 2019, the NGO Malaria No More used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but existing legal provisions offer some safeguards against the mischief they cause.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under Section 356 of the Bharatiya Nyaya Sanhita, 2023, which governs defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person’s likeness in a deepfake can violate their intellectual property rights and amount to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed Digital India Act aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concern about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Disclosure should be mandated for synthetic media, informing viewers that the content was created using AI (a labelling sketch follows this list).
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
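As a concrete illustration of the disclosure idea, here is a minimal sketch of a machine-readable label that a generator or platform could attach to a synthetic media file. The schema, file name and tool name are hypothetical, loosely inspired by emerging provenance efforts such as C2PA; real standards additionally bind the label to the file cryptographically so it cannot be stripped or swapped undetected.

```python
# A minimal sketch of a machine-readable disclosure label for synthetic
# media. The schema, file name and generator name are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def make_disclosure_label(media_path: str, generator: str) -> dict:
    """Build a sidecar label declaring that a media file is AI-generated."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "schema": "example-synthetic-media-label/0.1",  # hypothetical schema id
        "media_sha256": digest,   # ties the label to this exact file
        "ai_generated": True,     # the disclosure itself
        "generator": generator,   # which tool produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    label = make_disclosure_label("campaign_ad.mp4", generator="ExampleGen v2")
    print(json.dumps(label, indent=2))  # platforms could surface this to viewers
```

A label like this only informs; enforcement still depends on platforms verifying the hash and down-ranking or flagging unlabelled synthetic content, which is why the regulatory approaches discussed next pair disclosure with penalties.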
Deepfakes: A Global Overview
There has been increasing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the deceptive distribution of deepfakes in electoral contexts, and Virginia has targeted the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring the explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation, requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures reflect a global recognition of the risks deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising violations, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts highlight the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation will be essential to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation