Law in 30 Seconds? The Rise of Influencer Hype and Legal Misinformation
Introduction
In today's digital age, consuming content on social media apps has become part of our daily lives. These apps' recommendation algorithms are built so that once you like or show interest in a particular category of content, you are shown much more of the same. With this, the hype around becoming a content creator has also grown: people make short reel videos and share large amounts of information. There are influencers in every field, whether lifestyle, fitness, education, entertainment, or vlogging, and now even legal advice.
Online content, reels, and viral videos in which social media influencers give legal advice can have far-reaching consequences. ‘Law’ is a vast subject in which even a single punctuation mark can carry significant meaning; if a provision is misinterpreted or only partially explained in a short video, the results can be serious. Laws apply based on the facts and circumstances of each case, and they can differ depending on the nature of the case or offence. The trend of ‘swipe for legal advice’ or ‘law in 30 seconds’, along with the rising number of legal influencers, poses a serious problem in the online information landscape. It raises questions about the credibility and accuracy of such advice, as misinformation can mislead the masses, fuel legal confusion, and create real risks.
Bar Council of India’s stance against legal misinformation on social media platforms
The Bar Council of India (BCI) on Monday (March 17, 2025) expressed concern over the rise of self-styled legal influencers on social media, stating that many without proper credentials spread misinformation on critical legal issues. Additionally, “Incorrect or misleading interpretations of landmark judgments like the Citizenship Amendment Act (CAA), the Right to Privacy ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India, and GST regulations have resulted in widespread confusion, misguided legal decisions, and undue judicial burden,” the body said. The BCI also ordered the mandatory cessation of misleading and unauthorised legal advice dissemination by non-enrolled individuals and called for the establishment of stringent vetting mechanisms for legal content on digital platforms. The BCI emphasised the need for swift removal of misleading legal information.
Conclusion
Legal misinformation on social media is a growing issue that not only disrupts public perception but also influences real-life decisions. The internet is turning complex legal discourse into a chaotic game of whispers, with influencers sometimes misquoting laws and self-proclaimed "legal experts" offering advice that wouldn't survive in a courtroom. The solution is not censorship, but counterbalance. Verified legal voices need to step up, fact-checking must be relentless, and digital literacy must evolve to keep up with the fast-moving world of misinformation. Otherwise, "legal truth" could be determined by whoever has the best engagement rate, rather than by legislation or precedent.

What are Deepfakes?
A deepfake is essentially a video in which a person's face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term ‘deepfake’ was first coined in 2017 by an anonymous Reddit user who called himself deepfake.
Deepfakes work on a combination of AI and ML, which makes them hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is fake or has been created using deepfake technology. In recent times, we have seen a wave of AI-driven tools that have impacted industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet, causing mass confusion and serving as propaganda among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can be easily used to create illicit content, and oftentimes, it is seen that it is being circulated on online gaming platforms where children engage the most.
- Threat to Digital Privacy: Deepfakes are created using existing videos. Hence, bad actors often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address the aspects of deepfake. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for the victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it keeps Indian netizens from understanding the technology, which results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes can look identical to the original video at first glance. As we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind when differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also has variations, as it is imposed on an existing video, so check whether the sound is in congruence with the actions or gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most cases it is created using virtual effects, so deepfakes tend to show irregularities and an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber safety and digital hygiene protocol, always fact-check every piece of information you come across on social media, and make sure to verify any information or post before sharing it with your known ones.
- AI Tools: When in doubt, check it out: never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not expressly recognised by Indian law, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a fine of ₹2 lakh or three years in prison. With the DPDP Act, 2023 now applicable, the creation of deepfakes directly affects an individual's right to digital privacy and also implicates the Intermediary Guidelines under the IT Rules, as platforms are required to exercise caution regarding the dissemination and publication of misinformation through deepfakes. The only other legal remedies available are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an outgrowth of the advancements of Web 3.0 and hence just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they can stay safe in the future. At the same time, developing and developed nations alike need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart, is an image or distorted text that users have to identify or interpret to prove they are human. reCAPTCHA, which dates from 2007 and is now run as a free Google service, became one of the most commonly used technologies for telling computers and humans apart. CAPTCHA protects websites from spam and abuse by using tests considered easy for humans but supposedly difficult for bots to solve.
But now this has changed. With AI becoming more and more sophisticated, it is capable of solving CAPTCHA tests at a rate more accurate than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA is still effective as a detection tool given the advancements in AI.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
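The v3 flow described above can be made concrete with a short sketch. The `success`, `action`, and `score` fields below follow the documented shape of a reCAPTCHA v3 verification response, but the JSON strings here are simulated stand-ins (in production this JSON comes back from the verification endpoint), and the 0.5 threshold is an application-level choice, not part of the API:

```python
import json

# Application-chosen cutoff: scores closer to 1.0 mean "very likely human".
SCORE_THRESHOLD = 0.5

def is_likely_human(siteverify_json: str, expected_action: str) -> bool:
    """Decide whether to accept a request based on a v3-style risk score."""
    data = json.loads(siteverify_json)
    return (
        data.get("success") is True
        and data.get("action") == expected_action  # guard against token reuse
        and data.get("score", 0.0) >= SCORE_THRESHOLD
    )

# Simulated verification responses for illustration.
human = '{"success": true, "action": "login", "score": 0.9}'
bot   = '{"success": true, "action": "login", "score": 0.1}'
```

The point of the invisible v3 design is exactly this: the user never sees a challenge, and the site makes a risk decision from the score alone, which is also why bots that mimic human interaction patterns can push their scores above the cutoff.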
Smarter Bots and Their Rise
AI advancements such as machine learning, deep learning, and neural networks have developed at a very fast pace in the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA types such as text and images with almost human-like accuracy. One example is Optical Character Recognition (OCR): earlier versions of CAPTCHA relied on distorted text, which OCR models can now recognise and decipher, rendering those tests useless. AI trained on huge datasets can also perform image recognition, identifying the objects specific to the question asked. And through behavioural analysis, bots can mimic human habits and patterns, thereby fooling CAPTCHA.
To defeat CAPTCHA, attackers have been known to use Adversarial Machine Learning, which refers to AI models trained specifically to defeat CAPTCHA. They collect CAPTCHA datasets and answers and create an AI that can predict correct answers. The implications that CAPTCHA failures have on platforms can range from fraud to spam to even cybersecurity breaches or cyberattacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn’t just about tools; it’s about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/

About Global Commission on Internet Governance
The Global Commission on Internet Governance was established in January 2014 with the goal of formulating and advancing a strategic vision for the future of Internet governance. Over the two-year initiative, it carries out and supports independent research on Internet-related issues of international public policy. The initiative will culminate in an official commission report with specific policy recommendations for the future of Internet governance.
There are two goals for the Global Commission on Internet Governance. First, it will encourage a broad and inclusive public discussion on how Internet governance will develop globally. Second, through its comprehensive policy-oriented report and the subsequent promotion of that final report, the Global Commission on Internet Governance will present its findings to key stakeholders at major Internet governance events.
The Internet: exploring the world wide web and the deep web
The Internet can be thought of as a vast networking infrastructure, or network of networks. By linking millions of computers worldwide, it creates a network that allows any two computers, provided they are both online, to speak with one another.
The Web uses the Hypertext Transfer Protocol (HTTP) to transfer data, but HTTP is not the only language spoken over the Internet. Email, which relies on the Simple Mail Transfer Protocol, as well as the File Transfer Protocol, Usenet newsgroups, and instant messaging, are also used on the Internet, not the Web. Thus, even though it is a sizeable chunk, the Web is only a part of the Internet [1]. In summary, the deep Web is the portion of the Internet that is not visible to the naked eye: World Wide Web content that is not available on the surface Web and cannot be reached by standard search engines. This enormous subset of the Internet is estimated to be more than 500 times larger than the visible Web [1-2].
The Global Commission on Internet Governance will concentrate on four principal themes:
• Improving the legitimacy of government, including standards and methods for regulation;
• Promoting economic innovation and expansion, including the development of infrastructure, competition laws, and vital Internet resources;
• Safeguarding online human rights, including establishing the idea of technological neutrality for rights to privacy, human rights, and freedom of expression;
• Preventing systemic risk includes setting standards for state behaviour, cooperating with law enforcement to combat cybercrime, preventing its spread, fostering confidence, and addressing disarmament-related issues.
Dark Web
The part of the deep Web that has been purposefully concealed and is unreachable using conventional Web browsers is known as the "dark Web." Dark Web sites are a platform for Internet users who value their anonymity, since they shield users from prying eyes and typically utilize encryption to thwart monitoring. The Tor network is a well-known source of content found on the dark Web; all that is required to access the anonymous Tor network is a special Web browser known as the Tor browser (Tor 2014). The Onion Routing (Tor) project, a technique for anonymous online communication, was first introduced by the US Naval Research Laboratory in 2002. Much of the functionality offered by Tor is also available on I2P, another network. I2P, however, was designed to function as a network within the Internet, with traffic contained within its boundaries. Tor offers better anonymous access to the open Internet, while I2P provides a more dependable and stable "network within the network" [3].
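The "onion" in onion routing refers to nested encryption layers: the sender wraps a message once per relay, and each relay peels exactly one layer, so no single relay sees both the sender and the plaintext. The layering idea can be sketched as a toy, using a deliberately insecure hash-based XOR keystream in place of real ciphers, purely for illustration:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from repeated hashing. Illustrative only, NOT secure.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Applying the same layer twice removes it (XOR is its own inverse).
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def onion_wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    # Wrap innermost-first: the exit relay's layer goes on first,
    # the entry relay's layer goes on last (outermost).
    for key in reversed(hop_keys):
        message = xor_layer(message, key)
    return message

def onion_peel(cell: bytes, hop_keys: list[bytes]) -> bytes:
    # Each relay along the path peels exactly one layer, in path order.
    for key in hop_keys:
        cell = xor_layer(cell, key)
    return cell

keys = [b"entry", b"middle", b"exit"]  # hypothetical per-hop keys
wrapped = onion_wrap(b"hello", keys)
```

Real Tor negotiates per-hop symmetric keys via authenticated key exchange and uses proper stream ciphers; the sketch only shows why each relay learns nothing more than the next hop.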
Cybersecurity in the dark web
Cyber crime is not any different than crime in the real world — it is just executed in a new medium: “Virtual criminality’ is basically the same as the terrestrial crime with which we are familiar. To be sure, some of the manifestations are new. But a great deal of crime committed with or against computers differs only in terms of the medium. While the technology of implementation, and particularly its efficiency, may be without precedent, the crime is fundamentally familiar. It is less a question of something completely different than a recognizable crime committed in a completely different way [4].”
Dark web monitoring
The dark Web, in general, and the Tor network, in particular, offer a secure platform for cybercriminals to support a vast amount of illegal activities — from anonymous marketplaces to secure means of communication, to an untraceable and difficult to shut down infrastructure for deploying malware and botnets.
As such, it has become increasingly important for security agencies to track and monitor dark Web activities, focusing today on Tor networks but possibly extending to other technologies in the near future. Owing to its intricate design, the dark Web will continue to pose significant monitoring challenges. Efforts to address it should be focused on the areas discussed below [5].
Hidden service directory of dark web
Both Tor and I2P use a domain database built on a distributed system called a "distributed hash table," or DHT. In a DHT, nodes cooperate to store and manage a portion of the database, which takes the form of a key-value store. Owing to the distributed character of the domain resolution process for hidden services, nodes inside the DHT can be positioned to monitor requests for a certain domain [6].
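The key-value placement described above can be sketched with a toy hash ring. This is a simplification of real Tor/I2P directory DHTs (which add replication rules, churn handling, and descriptor authentication; all node and key names here are hypothetical), but it shows how responsibility for a key falls deterministically on a small set of cooperating nodes; that determinism is also what lets a monitoring party position nodes near a target key:

```python
import hashlib
from bisect import bisect_left

def ring_id(name: str) -> int:
    # Map a node name or lookup key onto a 32-bit ID ring.
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")

class ToyDHT:
    """Minimal hash-ring sketch: each key is stored on the nodes whose IDs
    follow it clockwise around the ring."""

    def __init__(self, node_names, replicas=2):
        self.replicas = replicas
        self.ring = sorted((ring_id(n), n) for n in node_names)

    def responsible_nodes(self, key: str):
        ids = [i for i, _ in self.ring]
        start = bisect_left(ids, ring_id(key)) % len(self.ring)
        return [self.ring[(start + k) % len(self.ring)][1]
                for k in range(self.replicas)]

dht = ToyDHT(["node-a", "node-b", "node-c", "node-d"])
holders = dht.responsible_nodes("example-hidden-service")
```

Because any party can compute which nodes are responsible for a given key, anyone running nodes near that ring position can observe the lookups for it, which is the monitoring avenue the passage describes.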
Conclusion
The deep Web, and especially dark Web networks like Tor (2004), offer bad actors a practical means of transacting in goods anonymously and outside the law.
The absence of discernible activity in non-traditional dark Web networks is not evidence of their nonexistence. In keeping with the guiding philosophy of the dark Web, such activities are simply harder to identify and monitor. Critical mass is one of the market's driving forces: operators on the dark Web seem unlikely to adopt a greater degree of stealth until the repercussions of being caught become severe enough. It is possible that certain websites might go down, trade for a short window, and then reappear, making them harder to investigate.
References
- Ciancaglini, Vincenzo, Marco Balduzzi, Max Goncharov and Robert McArdle. 2013. “Deepweb and Cybercrime: It’s Not All About TOR.” Trend Micro Research Paper. October.
- Coughlin, Con. 2014. “How Social Media Is Helping Islamic State to Spread Its Poison.” The Telegraph, November 5.
- Dahl, Julia. 2014. “Identity Theft Ensnares Millions while the Law Plays Catch Up.” CBS News, July 14.
- Dean, Matt. 2014. “Digital Currencies Fueling Crime on the Dark Side of the Internet.” Fox Business, December 18.
- Falconer, Joel. 2012. “A Journey into the Dark Corners of the Deep Web.” The Next Web, October 8.
- Gehl, Robert W. 2014. “Power/Freedom on the Dark Web: A Digital Ethnography of the Dark Web Social Network.” New Media & Society, October 15. http://nms.sagepub.com/content/early/2014/10/16/1461444814554900.full#ref-38.