DPDP Bill 2023: A Comparative Analysis
Introduction
| The Digital Personal Data Protection Bill, 2022 (released for public consultation on November 18, 2022) | The Digital Personal Data Protection Bill, 2023 (tabled in the Lok Sabha on August 3, 2023) |
| --- | --- |
| Personal data may be processed only for a lawful purpose for which the individual has given consent; consent may be deemed in certain cases. | Imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data. |
| Provides for a Data Protection Board to deal with non-compliance with the Act. | Establishes a new Data Protection Board to ensure compliance, remedies and penalties. The Board is entrusted with the powers of a civil court, such as taking cognisance of personal data breaches, investigating complaints and imposing penalties, and it can issue directions to ensure compliance with the Act. |
| Grants individuals certain rights, such as the right to obtain information, seek correction and erasure, and grievance redressal. | Grants individuals more rights and seeks a balance between user protection and growing innovation, creating a transparent and accountable data-governance framework. It also incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers. |
| Permits processing of personal data for a lawful purpose with the individual's consent, alongside the concept of "deemed consent". | Balances fundamental privacy rights with reasonable limitations on those rights. The new Data Protection Board will examine instances of non-compliance and impose penalties on non-compliers, though the bill does not expressly clarify the compensation to be granted to a Data Principal in case of a data breach. Deemed consent reappears in a new form as "legitimate uses". |
| Allowed the transfer of personal data to locations notified by the government. | Introduces a negative list restricting cross-border data transfers. |
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation at scale. These technologies can create manipulative audio/video content, spread political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order: it has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, giving way to the exploitation of content that already exists on the internet. A prime example of misinformation flooding the internet is AI-powered bots inundating social media platforms with fake news at a scale and speed that makes it impossible for humans to track what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences there. Being literate in the traditional sense of the word does not automatically equip one to parse the nuances of social media content, its authenticity and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation relate to elections, public health, and communal issues. What connects these issues is that they evoke strong emotions, so such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability and even violence. These developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? The safe and ethical use and application of this technology therefore need to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, and the massive volumes of data involved, make it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation spreads across multiple platforms and jurisdictions. Balancing the protection of free speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies that cater to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that applies specifically to AI-generated content. It should prescribe penalties, proportionate to the consequences, for those who originate and disseminate fake content. The framework should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in AI-driven tools for customised, real-time detection and flagging of misinformation. This can help identify deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaborations between tech companies, cybersecurity organisations, academic institutions and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
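To make the detection-and-flagging idea concrete, here is a deliberately minimal sketch of an automated triage step. It is a toy rule-based scorer, not a real detector: production systems rely on trained classifiers and deepfake forensics, and the cue lists and thresholds below are hypothetical examples chosen for illustration only.

```python
# Toy heuristic for flagging posts for human review.
# Illustrative only: real platforms use trained ML classifiers;
# the cue words and weights here are invented for this sketch.

SENSATIONAL_CUES = {"shocking", "secret", "exposed", "miracle", "banned"}
URGENCY_CUES = {"share now", "forward immediately", "before it's deleted"}

def misinformation_risk(text: str) -> float:
    """Return a 0..1 heuristic risk score for a social media post."""
    lowered = text.lower()
    score = 0.0
    score += 0.2 * sum(cue in lowered for cue in SENSATIONAL_CUES)
    score += 0.3 * sum(cue in lowered for cue in URGENCY_CUES)
    if lowered.count("!") >= 3:  # excessive exclamation marks
        score += 0.2
    return min(score, 1.0)

def flag_for_review(posts, threshold=0.4):
    """Return posts whose heuristic score crosses the review threshold."""
    return [p for p in posts if misinformation_risk(p) >= threshold]
```

The point of the sketch is the pipeline shape, score then flag then route to human reviewers, rather than the scoring rules themselves, which a real system would learn from labelled data.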
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks it poses are growing commensurately with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile-technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools and digital-defence frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
The first thing users typically do on social media is scroll through their feed, liking or reacting to posts. Such activity is passive, involving merely reading and observing; active use occurs when a user consciously decides to share information or comment after analysing it. We often "like" photos, posts, and tweets reflexively, hardly stopping to think about why we do it or what the content actually says. This passive act of "liking" or "reacting" can nonetheless spark active discourse. We frequently encounter misinformation on social media in various forms, much of which could be identified as false at first glance if we exercised caution and avoided validating it with our likes.
Passive engagement, such as liking or reacting to a post, triggers social media algorithms to amplify its reach, exposing it to a broader audience. This amplification increases the likelihood of misinformation spreading quickly as more people interact with it. As the content circulates, it gains credibility through repeated exposure, reinforcing false narratives and expanding its impact.
Social media platforms are designed to facilitate communication and conversations for various purposes. However, this design also enables the sharing, exchange, distribution, and reception of content, including misinformation. This can lead to the widespread dissemination of false information, influencing public opinion and behaviour. Misinformation has been identified as a contributing factor in contentious events ranging from elections and referenda to political or religious persecution, as well as in the global response to the COVID-19 pandemic.
The Mechanics of Passive Sharing
Sharing a post without checking the facts, or without providing any context, creates situations where misinformation can be spread knowingly or unknowingly. The problem with unchecked sharing and forwarding on social media is that it usually starts in small, trusted networks before the content is seen widely across the internet; this web of resharing grows without limit, so it must be cut off at the root. The rapid spread of information on social media is driven by algorithms that prioritise engagement; these often amplify misleading or false content and thereby contribute to the spread of misinformation. The algorithm optimises the feed so that the posts a user is most likely to engage with appear at the top of the timeline, encouraging a cycle of liking and posting that keeps users active and scrolling.
The internet reaches billions of individuals and enables persuasive messages to be tailored to the specific profiles of individual users. Because of this reach, it is an ideal medium for the fast spread of falsehoods at the expense of accurate information.
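The engagement-driven amplification loop described above can be sketched in a few lines. This is an illustrative toy, not any platform's actual ranking algorithm; the specific weights (a share counting three times a like) are assumptions made for the example.

```python
# Minimal sketch of engagement-weighted feed ranking: passive reactions
# raise a post's score, pushing it higher in every subsequent feed,
# where it then collects still more engagement. Weights are invented
# for illustration; real ranking systems are far more complex.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    # Shares weigh more than likes: they actively rebroadcast content.
    return post.likes * 1.0 + post.shares * 3.0

def build_feed(posts):
    """Order posts by engagement, most engaging first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

The feedback loop is visible even in this toy: once a wave of passive likes moves a post to the top of `build_feed`, its increased visibility earns it more likes on the next refresh, regardless of whether its claims are true.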
Recommendations for Combating Passive Sharing
Combating the passive sharing we all indulge in is important, and some ways to do so are as follows:
- Critically evaluate sources before sharing any content. This helps ensure that a source is not compromised or being used to spread misinformation in service of ulterior motives. Tools such as crowdsourcing and AI methods have been used to evaluate sources and have been successful to an extent.
- Engage with fact-checking tools and verify information before sharing. Claims made in a post should be verified against authenticated sources before the post is passed on.
- Be mindful of the potential impact of online activity, including likes and shares. The reach social media users enjoy today stems from several factors, from the content they create to the rate at which they engage with other users. A like or share may not seem like much from an individual user, but its collective impact is huge.
Conclusion
Passive sharing of misinformation, such as liking or sharing without verification, amplifies false information, erodes trust in legitimate sources, and deepens social and political divides. It can lead to real-world harm and ethical dilemmas. Critical evaluation, fact-checking, and mindful online engagement are essential to mitigating this passive spread of misinformation. The small act of liking or sharing has a far more far-reaching effect than we anticipate, and we should be mindful of all our activity on digital platforms.
References
- https://www.tandfonline.com/doi/full/10.1080/00049530.2022.2113340#summary-abstract
- https://timesofindia.indiatimes.com/city/thane/badlapur-protest-police-warn-against-spreading-fake-news/articleshow/112750638.cms

Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. On thorough investigation, these claims were debunked and found to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and spectators' shoes appearing fused together. AI-content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated videos. These viral videos are therefore conclusively identified as AI-generated and not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and found anomalies of the kind typically seen in AI-manipulated content.



Taking a cue from this, we ran the videos through an AI video detection tool named TrueMedia. The tool found the video's audio to be AI-generated. We then divided the video into keyframes, and the detector found the depicted imagery to be AI-generated as well.
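The keyframe step mentioned above can be sketched abstractly. This is not TrueMedia's API (which is not shown here); it is only a hypothetical helper that picks evenly spaced frame indices so that each sampled frame can be sent to an image-based AI-content detector.

```python
# Illustrative sketch of keyframe sampling for frame-by-frame analysis.
# Hypothetical helper: in practice a library such as OpenCV or ffmpeg
# would extract the actual frames at the returned indices before they
# are submitted to a detection tool.

def keyframe_indices(total_frames: int, fps: float, every_seconds: float = 1.0):
    """Return frame indices sampled once every `every_seconds`."""
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))
```

Sampling one frame per second keeps the number of detector calls manageable while still covering the whole clip.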


In the same way, we investigated the second video: we divided it into keyframes and analysed them with TrueMedia.

The tool flagged the video as suspicious, so we analysed its individual frames.

The detection tool found them to be AI-generated, so we are certain that the video is AI-manipulated. We then analysed the third and final video, which the detection tool likewise flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, confirming that it too is AI-generated. Hence, the claims made in all three videos are misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI-content detectors confirmed the fabrication. The claims made in these videos are therefore false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading