No Gaming or Social Media during Work Hours: Kerala HC Bans Employees from Using Their Phones for Non-Official Purposes during Working Hours
Introduction
On 2 December 2024, the Kerala High Court banned the use of mobile phones for personal purposes during office hours, issuing an Official Memorandum titled ‘Indulgence In Online Gaming And Watching Social Media Content During Office Hours’. The memorandum, issued by the Registrar General, prohibits mobile phone usage for personal activities such as gaming and social media during working hours. It aims to curb productivity losses, reinforce professional discipline, and ensure the smooth functioning of office operations.
The memorandum reiterates earlier directives from 2009 and 2013, in which the High Court had emphasised that violations would be taken seriously, reflecting its continuing commitment to maintaining efficiency and professionalism in the workplace. According to the memorandum, controlling officers will monitor staff for violations, and strict action will be taken if the rules are flouted.
Background
The circumstances that led to the Kerala HC’s decision are as follows: staff were found playing online games, browsing social media, watching videos or movies, and even shopping or trading online during work hours, excluding the allocated lunch recess (as per the memorandum).
As mentioned earlier, this memorandum is not the first of its kind: similar directives were issued in 2009 and 2013 to address poor productivity rooted in staff behaviour. The present memorandum differs from its predecessors in that it specifically addresses the rise in mobile-based distractions, such as online gaming and trading. It also outlines no exceptions for senior officials with designated responsibilities, emphasising universal adherence across all levels of the workforce.
According to Cell Phones at Workplace Statistics, around 97% of workers use their smartphones during work hours, mixing personal and job-related activities, and more than 55% of managers say that cell phones are a major reason for lower productivity among employees.
Therefore, it can be safely concluded that even though smartphones have become indispensable tools for communication, their misuse has wider implications for overall organisational productivity.
CyberPeace Outlook
The Kerala High Court's decision to restrict personal mobile phone usage during work hours underscores the importance of fostering a disciplined and focused workplace environment. While smartphones are vital for communication, their misuse poses significant productivity challenges. Some proactive steps that employers can take are implementing clear policies, conducting regular training sessions and promoting a culture of accountability. Balancing digital freedom and professional responsibility is the key to ensuring that technological tools serve as enablers of efficiency rather than distractions in the workplace.
References
- https://www.thehindu.com/sci-tech/technology/kerala-high-court-issues-memo-banning-staff-from-gaming-and-social-media-during-work-hours/article68963949.ece
- https://timesofindia.indiatimes.com/technology/tech-news/kerala-high-court-bans-mobile-gaming-and-social-media-for-staff-during-work-hours/articleshow/116101149.cms
- https://images.assettype.com/barandbench/2024-12-05/1hiq8ffv/Kerala_High_Court_OM.pdf
- https://www.coolest-gadgets.com/cell-phones-at-workplace-statistics/

Introduction
Prebunking is a technique that shifts the focus from directly challenging falsehoods or telling people what to believe, to understanding how people are manipulated and misled online in the first place. It is a growing field of research that aims to help people resist persuasion by misinformation. Prebunking, or "attitudinal inoculation," teaches people to spot and resist manipulative messages before they encounter them. The crux of the approach is to take a step back and nip the problem in the bud by deepening our understanding of it, instead of designing redressal mechanisms to tackle it after the fact. It has proven effective in helping a wide range of people build resilience to misleading information.
As a psychological strategy, prebunking counters the effects of misinformation by helping individuals identify and resist deceptive content, thereby increasing resilience against future misinformation. Online manipulation is a complex issue, and multiple approaches are needed to curb its worst effects. Prebunking offers an opportunity to get ahead of online manipulation, providing a layer of protection before individuals ever encounter malicious content and enabling them to discern and refute misleading arguments across a variety of online manipulations.
Prebunking builds mental defenses against misinformation by providing warnings and counterarguments before people encounter malicious content. Inoculating people against false or misleading information is a powerful and effective method for building trust and understanding, along with a personal capacity for discernment and fact-checking. Prebunking teaches people to separate facts from myths by stressing the importance of thinking in terms of ‘how you know what you know’ and of consensus-building, and it uses examples and case studies to explain the types and risks of misinformation so that individuals can apply these lessons to reject false claims and manipulation in the future.
How Prebunking Helps Individuals Spot Manipulative Messages
Prebunking helps individuals identify manipulative messages by equipping them with the tools and knowledge to recognize common techniques used to spread misinformation. Successful prebunking strategies include:
- Warnings: alerting people in advance that they may encounter manipulative content;
- Preemptive Refutation: explaining the narrative or technique and how a particular piece of information is manipulative in structure. Inoculation treatment messages typically include two to three counterarguments and their refutations. An effective refutation gives the viewer the skills to counter any erroneous or misleading information they may encounter in the future;
- Micro-dosing: presenting a weakened, innocuous example of misinformation for practice.
All of these alert individuals to potential manipulation attempts. Prebunking also offers weakened examples of misinformation, allowing individuals to practice identifying deceptive content, and it activates mental defenses that prepare them to resist persuasion attempts. Misinformation exploits cognitive biases: people tend to put a lot of faith in things they have heard repeatedly - a tendency malicious actors exploit by flooding the Internet with their claims, creating familiarity that lends them apparent legitimacy. Prebunking builds resilience against such manipulation and protects our minds from its harmful effects.
Prebunking essentially helps people control the information they consume by teaching them how to discern between accurate and deceptive content. It enables one to develop critical thinking skills, evaluate sources adequately and identify red flags. By incorporating these components and strategies, prebunking enhances the ability to spot manipulative messages, resist deceptive narratives, and make informed decisions when navigating the very dynamic and complex information landscape online.
CyberPeace Policy Recommendations
- Preventing and fighting misinformation necessitates joint efforts between different stakeholders. The government and policymakers should sponsor prebunking initiatives and information literacy programmes to counter misinformation and adopt systematic approaches. Regulatory frameworks should encourage accountability in the dissemination of online information on various platforms. Collaboration with educational institutions, technological companies and civil society organisations can assist in the implementation of prebunking techniques in a variety of areas.
- Higher educational institutions should support prebunking and media literacy, offer professional development opportunities for educators and scholars, and work with academics and professionals on the subject of misinformation, producing research studies on the grey areas and challenges associated with it.
- Technological companies and social media platforms should improve algorithm transparency, create user-friendly tools and resources, and work with fact-checking organisations to incorporate fact-check labels and tools.
- Civil society organisations and NGOs should promote digital literacy campaigns to spread awareness on misinformation and teach prebunking strategies and critical information evaluation. Training programmes should be available to help people recognise and resist deceptive information using prebunking tactics. Advocacy efforts should support legislation or guidelines that support and encourage prebunking efforts and promote media literacy as a basic skill in the digital landscape.
- Media outlets and journalists, across print and social media, should follow high journalistic standards and engage in fact-checking activities to ensure information accuracy before release. Collaboration with prebunking professionals, cybersecurity experts, researchers and advocacy analysts can result in instructional content and initiatives that promote media literacy, prebunking strategies and misinformation awareness.
Final Words
The World Economic Forum's Global Risks Report 2024 identifies misinformation and disinformation as the most significant global risks over the next two years. Misinformation and disinformation are rampant in today’s digital-first reality, and the ever-growing popularity of social media will only compound the challenge. It is imperative for all netizens and stakeholders to adopt proactive approaches to counter the growing problem of misinformation. Prebunking is a powerful tool in this regard because it aims at ‘protection through prevention’ instead of limiting the strategy to harm reduction and redressal. We can draw a parallel with vaccination or inoculation: prebunking exposes us to a weakened form of misinformation and provides ways to identify it, reducing the chance that false information takes root in our psyches.
The most compelling attribute of this approach is that the focus is not only on preventing damage but also creating widespread ownership and citizen participation in the problem-solving process. Every empowered individual creates an additional layer of protection against the scourge of misinformation, not only making safer choices for themselves but also lowering the risk of spreading false claims to others.
References
- [1] https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- [2] https://prebunking.withgoogle.com/docs/A_Practical_Guide_to_Prebunking_Misinformation.pdf
- [3] https://ijoc.org/index.php/ijoc/article/viewFile/17634/3565

Introduction
In an age where the lines between truth and fiction blur with alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deepfakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of the digital duplicitous tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but with no less damage, can be likened to a digital 'slip of the tongue' — an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the notion that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening societal schisms, a significant portion of this digital deceit traffics in hate speech. Its contents are laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence vastly disproportionate to that seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deepfakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deepfakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.
Introduction
In India, children's rights with regard to the protection of their personal data are enshrined under the Digital Personal Data Protection Act, 2023 (DPDP Act), India's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent of parents or legal guardians for the processing of children's personal data; processing without such consent constitutes a violation of the Act. Under section 2(f) of the DPDP Act, a “child” means an individual who has not completed the age of eighteen years.
Section 9 under the DPDP Act, 2023
With reference to the collection of children's data, section 9 of the DPDP Act, 2023 provides that for children below 18 years of age, consent from parents or legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or the lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians have greater control over their children's data and privacy and are empowered to make choices about how they manage their children's online activities and the permissions they grant to various online services.
Section 9 sub-section (3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children or targeted advertising directed at children. However, section 9 sub-section (5) leaves room for exemption from this prohibition: it empowers the Central Government to notify exemptions for specific data fiduciaries or data processors from the prohibition on behavioural tracking or targeted advertising, under the DPDP Rules, which are yet to be announced.
Impact on social media platforms
Social media companies are raising concerns about Section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By barring intermediaries from tracking a child's internet activities and from targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation undermines safety measures intended to safeguard young users, arguing that monitoring specific user signals, including those from minors, is necessary for those safety measures to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions, and that a complete ban on behavioural tracking is therefore counterproductive to the government's objective of protecting children. The scope to grant exemptions leaves the door open for further advocacy on this issue, which will require coordination with the concerned ministry and relevant stakeholders to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms extends to the user experience and the operational costs of implementing the changes the regulations require, including significant changes to their algorithms and data-handling processes. Implementing robust age verification systems to identify young users and protect their data will be technically challenging for platforms of every scale. Ensuring that children’s data is not used for targeted advertising or behavioural monitoring also requires sophisticated data management systems. The blanket ban on targeted advertising and behavioural tracking may also affect the personalisation of content for young users, which may reduce their engagement with the platform.
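To make the compliance logic concrete, the minimal sketch below (in Python, with hypothetical names such as `UserProfile` and `may_track_or_target_ads`) shows one way a platform's data-handling layer might gate processing on verified age and guardian consent, along the lines Section 9 describes. It is an illustrative assumption, not a description of how any actual platform or the forthcoming DPDP Rules implement these checks.

```python
from dataclasses import dataclass
from typing import Optional

ADULT_AGE = 18  # Section 2(f): a "child" has not completed eighteen years


@dataclass
class UserProfile:
    # Hypothetical profile record for illustration only.
    user_id: str
    verified_age: Optional[int] = None  # None until age verification completes
    guardian_consent: bool = False      # verifiable parent/guardian consent on file


def is_child(user: UserProfile) -> bool:
    # Fail-safe default: treat users whose age is unverified as children.
    return user.verified_age is None or user.verified_age < ADULT_AGE


def may_process_personal_data(user: UserProfile) -> bool:
    # Section 9(1): processing a child's data requires verifiable guardian consent.
    return (not is_child(user)) or user.guardian_consent


def may_track_or_target_ads(user: UserProfile) -> bool:
    # Section 9(3): behavioural tracking and targeted advertising are barred for
    # children outright; guardian consent does not lift this prohibition (absent
    # a Central Government exemption notified under Section 9(5)).
    return not is_child(user)


if __name__ == "__main__":
    teen = UserProfile(user_id="u123", verified_age=15, guardian_consent=True)
    print(may_process_personal_data(teen))   # True: consent covers processing
    print(may_track_or_target_ads(teen))     # False: the 9(3) bar still applies
```

The key design choice in a sketch like this is the fail-safe default: a user whose age has not been verified is treated as a child, so tracking and ad targeting remain off until verification succeeds.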
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other countries (such as the GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could affect their operations in regions not governed by similar laws. Competitive dynamics may also shift: smaller or niche platforms that cater specifically to children and comply with these regulations may gain an edge, and there may be a drive towards developing new, compliant ways of monetizing user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should give weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms could be obliged to follow transparent advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation can be used to support ethical behaviour, accountability, and the safety of young users’ personal information through the platform’s own practices. Additionally, verifiable consent should be designed in a practical manner, with platforms given a say in how the verification works. Ultimately, the issue should be handled so that behavioural tracking and targeted advertising do not affect children's well-being, safety or data protection in any way.
Final Words
Under section 9 of the DPDP Act, the prohibition of behavioural tracking and targeted advertising in the processing of children's personal data will compel social media platforms to overhaul their data collection and advertising practices, ensuring compliance with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. As children are particularly vulnerable to digital threats due to their still-evolving maturity and cognitive capacities, the protection of their privacy stands as a priority: children simply do not yet possess the discernment and caution required to navigate the Internet safely. At the same time, a balanced approach needs to be adopted, one that secures both ‘privacy’ and ‘safety’ for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html#google_vignette
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html