CCI Imposes ₹213 Crore Penalty on Meta for Abusing Dominance via 2021 WhatsApp Privacy Policy
Mr. Neeraj Soni
Sr. Researcher - Policy & Advocacy, CyberPeace
PUBLISHED ON
Nov 20, 2024
Introduction
On 18th November 2024, the Competition Commission of India (CCI) imposed a ₹213 crore penalty on Meta for abusing its dominant position in internet-based messaging (through WhatsApp) and in online display advertising. The order relates to WhatsApp's 2021 Privacy Policy, which the CCI found amounted to an abuse of dominance by Meta. The CCI considers Meta a dominant player in internet-based messaging through WhatsApp and also in online display advertising. WhatsApp's 2021 privacy policy update undermined users' ability to opt out of having their data shared with the group's social media platform, Facebook. The CCI directed WhatsApp not to share user data collected on its platform with other Meta companies or products for advertising purposes for five years.
CCI Contentions
The regulator held that, for purposes other than advertising, WhatsApp's policy should include a detailed explanation of the user data shared with other Meta group companies or products, specifying the purpose of each instance of sharing. It also held that sharing user data collected on WhatsApp with other Meta companies or products, for purposes other than providing WhatsApp services, should not be a condition for users to access WhatsApp services in India. The CCI order is significant as it upholds user consent as a key principle in the functioning of social media giants, in line with measures taken in some other jurisdictions.
Meta’s Stance
WhatsApp's parent company Meta has expressed its disagreement with the Competition Commission of India's (CCI) decision to impose a ₹213 crore penalty on it over users' privacy concerns. Meta clarified that the 2021 update did not change the privacy of people's personal messages and was offered as a choice to users at the time. It also maintained that no one would have their account deleted or lose WhatsApp functionality because of this update.
Meta clarified that the update was about introducing optional business features on WhatsApp and providing further transparency about how it collects data. The company stated that WhatsApp has been incredibly valuable to people and businesses, enabling organisations and government institutions to deliver citizen services through COVID and beyond and supporting small businesses, all of which furthers the Indian economy. Meta plans to find a path forward that allows it to continue providing the experiences that "people and businesses have come to expect" from it. The CCI issued cease-and-desist directions and directed Meta and WhatsApp to implement certain behavioural remedies within a defined timeline.
The competition watchdog noted that WhatsApp's 2021 policy update made it mandatory for users to accept the new terms, including data sharing with Meta, and removed the earlier option to opt out, which the CCI categorised as an "unfair condition" under the Competition Act. It was further noted that WhatsApp's sharing of users' business transaction information with Meta gave the group entities an unfair advantage over competing platforms.
CyberPeace Outlook
The 2021 policy update by WhatsApp mandated data sharing with other Meta group companies, removing the opt-out option and compelling users to accept the terms to continue using the platform. This policy undermined user autonomy and was deemed an abuse of Meta's dominant market position, violating Section 4(2)(a)(i) of the Competition Act, as noted by the CCI.
The CCI’s ruling requires WhatsApp to offer all users in India, including those who had accepted the 2021 update, the ability to manage their data-sharing preferences through a clear and prominent opt-out option within the app. This decision underscores the importance of user choice, informed consent, and transparency in digital data policies.
By addressing the coercive nature of the policy, the CCI ruling establishes a significant legal precedent for safeguarding user privacy and promoting fair competition. It highlights the growing acknowledgement of privacy as a fundamental right and reinforces the accountability of tech giants to respect user autonomy and market fairness. The directive mandates that data sharing within the Meta ecosystem must be based on user consent, with the option to decline such sharing without losing access to essential services.
Language is an important part of human communication and a basic aspect of human understanding. In a globalised world, this diversity of languages creates difficulties for effective communication and collaboration. India alone has 22 official languages and countless regional languages and dialects, which change every few hundred kilometres.
AI has emerged to overcome the challenge of language barriers and is bringing about a transformative shift. It is leading the charge in breaking down traditional barriers and paving the way for more inclusive and seamless global interactions. AI's integration into language translation has revolutionised the field, addressing longstanding challenges associated with traditional human-centric approaches. The limitations of relying on human translators, such as time constraints, resource limitations, and the inability to handle data at scale, paved the way for AI's transformative impact. However, challenges such as maintaining translation accuracy, addressing cultural nuances, and ensuring data privacy require careful attention to realise AI's full potential.
AI Technologies Bridging Language Gaps
AI tools have transformed translation, transcription, and natural language processing, providing language solutions. They can instantly translate text, transcribe audio, and analyse linguistic nuances, enabling effective cross-cultural communication. Moreover, AI's adaptive capabilities have facilitated language learning, allowing individuals to grasp new languages and adapt their communication styles to diverse cultural contexts.
AI technologies are making information and services more accessible to non-native speakers and are impacting global business, allowing effective engagement. Building on this transformative potential, various AI tools are now used to bridge language gaps in real-world applications. Some examples of AI’s role in bridging the language gap are:
Real-time translation tools enable instant communication by translating between languages on the fly, supporting effortless conversations with clients and partners worldwide.
Tools such as 'speech-to-text' and 'text-to-speech', like Murf AI, Lovo AI, and ElevenLabs, convert spoken language into written text and vice versa. These technologies have streamlined interactions, boosted productivity, and brought clarity to global business dealings. Businesses can extract important information, insights, and action points from meetings, interviews, and presentations.
AI chatbots like MyGov Corona Helpdesk, WhatsApp Chatbot by the Government of India, Railway Food Order & Delivery by Zoop India, and Gen AI-Powered 'Elena' by Indian School of Business (ISB) are some examples that act as intelligent virtual assistants that engage in real-time conversations, by answering queries, providing information, and facilitating transactions. They offer round-the-clock support, freeing human resources and enhancing customer experience across language barriers.
Challenges and Limitations of AI Translation
While AI's integration in combatting language barriers is commendable, the endeavour faces several challenges and limitations:
AI translation systems face several challenges in handling accuracy, context, nuance, and idiomatic expressions.
These systems may struggle with complex or specialised language, as well as regional dialects, leading to potential misinterpretations.
Biases within the AI models can further affect the inclusivity of translations, often favouring dominant languages and cultural norms while marginalising others.
Ethical concerns regarding privacy and data security have also been arising, particularly when sensitive information is processed.
Ensuring user consent and protecting data integrity are essential to addressing these concerns. As AI continues to evolve, ongoing efforts are needed to improve fairness, transparency, and the cultural sensitivity of translation systems.
AI’s Future in Language Translation
AI technologies are moving towards improving translation accuracy and contextual understanding, allowing AI models to better grasp cultural nuances and idiomatic expressions. This can significantly enhance communication across diverse languages, fostering multilingual interactions and global collaboration in business, education, and diplomacy. Improvements in AI technology are taking place across the board, and models like GPT and Google Translate are now better at capturing nuances, idioms, and cultural differences, reducing errors. AI tools like Microsoft Translator help cross-continental teams work seamlessly by enhancing their productivity and inclusivity.
AI is capable of offering real-time translation in healthcare, education, and public services, enabling more inclusive environments and bridging communication gaps. In the healthcare system, for example, AI-powered translation tools are helping providers deliver better care across linguistic barriers: doctors can now communicate with patients who speak different languages, ensuring equitable care across linguistic boundaries.
Conclusion
We live in a world where diverse languages pose significant challenges to global communication, and AI has emerged as a powerful tool to bridge these gaps. AI is paving the way for more inclusive and seamless interactions by revolutionising language translation, transcription, and natural language processing. Its ability to break down barriers caused by linguistic diversity ensures effective communication in fields ranging from business to healthcare. Despite challenges like accuracy and cultural sensitivity, the potential for AI to continuously improve is undeniable. As AI technologies evolve, they stand as the key to overcoming language barriers and fostering a more connected and inclusive global community.
Notwithstanding AI's potential to overcome language barriers through advances in natural language processing and translation, cybersecurity and data privacy must always come first. The same technologies that make it easier to communicate globally also put private information at risk. The likelihood of data breaches, personal information misuse, and compromised communication rises in the absence of strict cybersecurity safeguards. Thus, to guarantee safe and reliable international interactions as AI develops, it is crucial to strike a balance between innovation and privacy protection.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the modus operandi is to help the mind develop resistance in the present to influence it may encounter in the future. Just as medical vaccines help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to distinguish fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive in nature, addressing misinformation after it has spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may require continuous exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may cause certain misinformation to go unchecked, perhaps leading to unexpected effects. Misinformation on social media can spread quickly and may go viral faster than Debunking pieces or articles, leading to a situation in which misinformation spreads like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech and social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to assess the accuracy of information through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. This can be effective in immunising the receiver against subsequent exposures, empowering people to build the competencies to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithm Mechanisms: Tech/social media platforms may ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to combat the spread of erroneous information and deliver proper corrections. This can help both Prebunking and Debunking methods reach a bigger or more targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint efforts.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
Misinformation poses a significant challenge to public health policymaking since it undermines efforts to promote effective health interventions and protect public well-being. The spread of inaccurate information, particularly through online channels such as social media and internet platforms, further complicates the decision-making process for policymakers since it perpetuates public confusion and distrust. This misinformation can lead to resistance against health initiatives, such as vaccination programs, and fuels scepticism towards scientifically-backed health guidelines.
Before the COVID-19 pandemic, misinformation surrounding healthcare largely encompassed the effects of alcohol and tobacco consumption, marijuana use, eating habits, physical exercise etc. However, there has been a marked shift in the years since. One such example is the outcry against palm oil in 2024: it is an ingredient prevalent in numerous food and cosmetic products, and came under the scanner after a number of claims that palmitic acid, which is present in palm oil, is detrimental to our health. However, scientific research by reputable institutions globally established that there is no cause for concern regarding the health risks posed by palmitic acid. Such trends and commentaries tend to create a parallel unscientific discourse that has the potential to not only impact individual choices but also public opinion and as a result, market developments and policy conversations.
A prevailing narrative during the worst of the COVID-19 pandemic was that the virus had been engineered to control society and boost hospital profits. The extensive misinformation surrounding COVID-19 and its management and care increased vaccine hesitancy amongst people worldwide. It is worth noting that vaccine hesitancy has been a consistent trend historically; the World Health Organisation flagged vaccine hesitancy as one of the main threats to global health, and there have been other instances where a majority of the population refused to get vaccinated in anticipation of unverified, long-lasting side effects. For example, research from 2016 observed a significant level of public skepticism regarding the development and approval process of the Zika vaccine in Africa. Further studies emphasised the urgent need to disseminate accurate information about the Zika virus on online platforms to help curb the spread of the pandemic.
In India during the COVID-19 pandemic, despite multiple official advisories, notifications and guidelines issued by the government and ICMR, people continued to remain opposed to vaccination, which resulted in inflated mortality rates within the country. Vaccination hesitancy was also compounded by anti-vaccination celebrities who claimed that vaccines were dangerous and contributed in large part to the conspiracy theories doing the rounds. Similar hesitancy was noted when misinformation surrounding the MMR vaccine and its alleged role in causing autism circulated. At the time of the crisis, the Indian government also had to tackle disinformation-induced fraud surrounding the supply of oxygen in hospitals. Many critically-ill patients relied on fake news and unverified sources that falsely portrayed the availability of beds, oxygen cylinders and even home set-ups, only to be cheated out of money.
The above examples highlight the difficulty health officials face in administering adequate healthcare. The special case of the COVID-19 pandemic also highlighted how current legal frameworks failed to address misinformation and disinformation, which impedes effective policymaking. It also highlights how taking corrective measures against health-related misinformation becomes difficult since such corrective action creates an uncomfortable gap in an individual’s mind, and it is seen that people ignore accurate information that may help bridge the gap. Misinformation, coupled with the infodemic trend, also leads to false memory syndrome, whereby people fail to differentiate between authentic information and fake narratives. Simple efforts to correct misperceptions usually backfire and even strengthen initial beliefs, especially in the context of complex issues like healthcare. Policymakers thus struggle with balancing policy making and making people receptive to said policies in the backdrop of their tendencies to reject/suspect authoritative action. Examples of the same can be observed on both the domestic front and internationally. In the US, for example, the traditional healthcare system rations access to healthcare through a combination of insurance costs and options versus out-of-pocket essential expenses. While this has been a subject of debate for a long time, it hadn’t created a large scale public healthcare crisis because the incentives offered to the medical professionals and public trust in the delivery of essential services helped balance the conversation. In recent times, however, there has been a narrative shift that sensationalises the system as an issue of deliberate “denial of care,” which has led to concerns about harms to patients.
Policy Recommendations
The hindrances posed by misinformation in policymaking are further exacerbated against the backdrop of policymakers relying on social media as a method to measure public sentiment, consensus and opinions. If misinformation about an outbreak is not effectively addressed, it could hinder individuals from adopting necessary protective measures and potentially worsen the spread of the epidemic. To improve healthcare policymaking amidst the challenges posed by health misinformation, policymakers must take a multifaceted approach. This includes convening a broad coalition of central, state, local, territorial, tribal, private, nonprofit, and research partners to assess the impact of misinformation and develop effective preventive measures. Intergovernmental collaborations such as the Ministry of Health and the Ministry of Electronics and Information Technology should be encouraged whereby doctors debunk online medical misinformation, in the backdrop of the increased reliance on online forums for medical advice. Furthermore, increasing investment in research dedicated to understanding misinformation, along with the ongoing modernization of public health communications, is essential. Enhancing the resources and technical support available to state and local public health agencies will also enable them to better address public queries and concerns, as well as counteract misinformation. Additionally, expanding efforts to build long-term resilience against misinformation through comprehensive educational programs is crucial for fostering a well-informed public capable of critically evaluating health information.
From an individual perspective, WhatsApp, with almost half a billion users, has become a platform where false health claims can spread rapidly, contributing to a rise in the circulation of fake health news. Viral WhatsApp messages containing fake health warnings can be dangerous, so it is always recommended to treat such messages with vigilance. This highlights the growing concern about the potential dangers of misinformation and the need for more accurate information on medical matters.
Conclusion
The proliferation of misinformation in healthcare poses significant challenges to effective policymaking and public health management. The COVID-19 pandemic has underscored the role of misinformation in vaccine hesitancy, fraud, and increased mortality rates. There is an urgent need for robust strategies to counteract false information and build public trust in health interventions; this includes policymakers engaging in comprehensive efforts, including intergovernmental collaboration, enhanced research, and public health communication modernization, to combat misinformation. By fostering a well-informed public through education and vigilance, we can mitigate the impact of misinformation and promote healthier communities.