#FactCheck - Viral Claim About Nitish Kumar’s Resignation Over UGC Protests Is Misleading
Executive Summary
A news video is being widely circulated on social media with the claim that Bihar Chief Minister Nitish Kumar has resigned from his post in protest against the ongoing UGC-related controversy. Several users are sharing the clip while alleging that Kumar stepped down after opposing the issue. However, CyberPeace research has found the claim to be false: the video being shared is from 2022 and has no connection with the UGC or any recent protests related to it. An old video has been misleadingly linked to a current issue to spread misinformation on social media.
Claim:
An Instagram user shared a video on January 26 claiming that Bihar Chief Minister Nitish Kumar had resigned. The post further alleged that the news was first aired on Republic channel and that Kumar had submitted his resignation to then-Governor Phagu Chauhan. The link to the post, its archived version, and screenshots can be seen below. (Links as provided)

Fact Check:
To verify the claim, CyberPeace first conducted a keyword-based search on Google. No credible or established media organisation reported any such resignation, clearly indicating that the viral claim lacked authenticity.

Further, the voiceover in the viral video states that Nitish Kumar handed over his resignation to Governor Phagu Chauhan. However, Phagu Chauhan ceased to be the Governor of Bihar in February 2023. The current Governor of Bihar is Arif Mohammad Khan, making the claim in the video factually incorrect and misleading.

In the next step, keyframes from the viral video were extracted and reverse-searched using Google Lens. This led to the official YouTube channel of Republic Bharat, where the full version of the same video was found. The video was uploaded on August 9, 2022. This clearly establishes that the clip circulating on social media is not recent and is being shared out of context.
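The keyframe step above can be sketched in code. The scene-change heuristic below is a simplified, hypothetical illustration (the article does not specify which tool was used): it flags frames whose difference from the previous frame exceeds a threshold, the same idea behind the scene-detection filters in common video tools.

```python
def pick_keyframes(frame_diffs, threshold=30.0):
    """Return indices of frames that likely start a new scene.

    frame_diffs[i] is a per-frame change score (e.g. mean absolute
    pixel difference between frame i+1 and frame i). Frame 0 is
    always kept as a keyframe; any frame whose score exceeds
    `threshold` is treated as a scene cut.
    """
    keyframes = [0]
    for i, diff in enumerate(frame_diffs, start=1):
        if diff > threshold:
            keyframes.append(i)
    return keyframes

# Synthetic per-frame difference scores: the spikes mark scene cuts.
diffs = [2.1, 1.8, 45.0, 3.2, 2.7, 51.3, 1.9]
print(pick_keyframes(diffs))  # → [0, 3, 6]
```

In a real workflow the difference scores would come from decoded video frames (for example via OpenCV or ffmpeg), and each saved keyframe image would then be uploaded to a reverse-image search tool such as Google Lens, as described above.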

Conclusion
CyberPeace’s research confirms that the viral video claiming Nitish Kumar resigned over the UGC issue is false. The video dates back to 2022 and has no link to the current UGC controversy. An old political video has been deliberately circulated with a misleading narrative to create confusion on social media.

Introduction
India’s new Policy for Data Sharing from the National Transport Repository (NTR), released by the Ministry of Road Transport and Highways (MoRTH) in August 2025, can be seen as both a constitutional turning point and a milestone in administrative efficiency. The state has established an unprecedentedly large unified infrastructure by combining the records of 390 million vehicles, 220 million driver’s licenses, and the data streams from the e-challan, e-DAR, and FASTag systems. Its supporters hail its promise of private-sector innovation, data-driven research, and smooth governance. However, a troubling paradox lies beneath this facade of advancement: the very structures intended to improve citizen mobility may simultaneously strengthen widespread surveillance. Without strict protections, the NTR risks failing the constitutional trifecta of legality, necessity, and proportionality laid down in Puttaswamy v. UOI, bringing to light important issues at the nexus of liberty, law, and data.
A further question becomes more pressing as India unifies one of its most comprehensive datasets on citizen mobility: while motorised citizens are now in the spotlight for accountability, what about the millions of other datasets that remain dispersed, unregulated, and shared inconsistently across health, education, telecom, and welfare?
The Legal Backdrop
MoRTH grounds its new policy in Sections 25A and 62B of the Motor Vehicles Act, 1988, while Section 136A requires states to monitor road safety electronically, providing the basis for consolidating data into a single repository. According to the policy, it complies with the Digital Personal Data Protection Act, 2023.
The DPDP Act itself, however, is rife with state exclusions, particularly Sections 7 and 17, which give government organisations access to personal information for “any function under any law” or for law enforcement purposes. This is where the constitutional issue lies: no prior judicial supervision, warrants, or independent checks are required. With legislative approval, MoRTH is essentially creating a national vehicle database without constitutional protections.
Data, Domination and the New Privacy Paradigm
As an efficiency and governance reform, VAHAN, SARATHI, e-challan, e-DAR, and FASTag are being consolidated into a single National Transport Repository (NTR). However, centralising extensive mobility and identity-linked records at this scale is more than a technical advancement; it changes how the state and private life interact. The NTR must therefore be interpreted through a broader privacy paradigm, one that recognises that while data aggregation enhances administrative capacity, it can also harden into a long-lasting tool of surveillance and social control unless technological and constitutional restraints are imposed at the same time.
Two recent doctrinal developments sharpen this concern. First, the Supreme Court’s foundational ruling that privacy is a fundamental right remains the constitutional lodestar: any state interference must satisfy legality, necessity, and proportionality (K.S. Puttaswamy & Anr. v. UOI). Second, the Court’s recent refusals to normalise ongoing, warrantless location monitoring, such as the ruling striking down a bail condition that required the accused to share a Google Maps pin, show that movement tracking invites closer scrutiny (Frank Vitus v. Narcotics Control Bureau & Ors.). Taken together, these authorities establish that unrestricted, ongoing access to mobility and toll-transaction records is a constitutional issue and cannot be treated as a mere administrative convenience.
Structural Fault Lines in the NTR Framework
Fundamentally, the NTR policy creates structural vulnerabilities by providing nearly unrestricted access, through APIs and even bulk transfers on physical media, to a broad range of parties, including insurance companies, law enforcement, and intelligence services. This design undermines constitutional protections in three ways. First, by exposing rich mobility trails such as FASTag logs and vehicle-linked identities, it makes it possible to infer patterns of private life, which the Supreme Court has identified as among the most sensitive data categories. Second, it allows bulk datasets to circulate outside the ministry’s custodial boundary, creating the possibility of function creep, secondary use, and monetisation risks reminiscent of the bulk-sharing regime the government itself once abandoned. Third, it introduces coercive exclusion by tying private-sector access to Aadhaar-based OTP consent.

Introduction
Fundamentally, artificial intelligence (AI) is the greatest extension of human intelligence. It is the culmination of centuries of logic, reasoning, mathematics, and creativity: machines trained to reflect cognition. However, such intelligence no longer resembles intelligence at all when it is put in the hands of the irresponsible, the malicious, or the perverse and unleashed into the wild with minimal safeguards. It becomes a tool of debasement rather than enlightenment.
Recent incidents involving sexually explicit photographs created by AI on social media sites reveal an extremely unsettling reality. When intelligence is detached from accountability, morality, and governance, it corrodes society rather than elevates it. We are seeing a failure of stewardship rather than just a failure of technology.
The Cost of Unchecked Intelligence
The AI chatbot Grok, which operates under Elon Musk’s X (formerly Twitter), is the subject of a debate that goes beyond a single platform or product. The romanticisation of “unfiltered” knowledge and the perilous notion that innovation should come before accountability are signs of a bigger lapse in the digital ecosystem. We have allowed mechanisms that can be used as weapons against human dignity, especially the dignity of women and children, in the name of freedom.
We are no longer discussing artistic expression or experimental AI when a machine can digitally undress women, morph photos, or produce sexualised portrayals of kids with a few keystrokes. We stand in the face of algorithmic violence. Even if the physical touch is absent, the harm caused by it is genuine, long-lasting, and extremely personal.
The Regulatory Red Line
A major inflexion point was reached when the Indian government responded by ordering a thorough technical, procedural, and governance-level audit. The move acknowledges that AI systems are not isolated entities, and that the platforms deploying them are not neutral pipes but intermediaries with responsibilities. The Bharatiya Nyaya Sanhita, the IT Act, the IT Rules 2021, and the possible removal of Section 79 safe-harbour protections all make it quite evident that innovation does not confer automatic immunity.
However, the fundamental dilemma cannot be resolved by legislation alone. AI is hailed as a force multiplier for innovation, productivity, and advancement, but when incentives are biased towards engagement, virality, and shock value, its misuse shows how easily intelligence can turn into ugliness. The more provocative the output, the more attention it receives; the more attention, the more profit. In this ecosystem, restraint becomes a business disadvantage.
The Aftermath
Grok’s own acknowledgement that “safeguard lapses” enabled the creation of pictures showing children in skimpy attire underscores a troubling reality: safety was absent not because it was impossible, but because it was insufficient. Sophisticated filtering, more robust monitoring, and stricter oversight were always possible; they were simply not prioritised. When a system asserts that “no system is 100% foolproof,” it must also acknowledge that there is no acceptable margin of error when it comes to child protection.
The casual normalisation of such lapses is what is most troubling. Characterising these instances as “isolated cases” risks trivialising systemic design decisions. AI systems trained on enormous amounts of human data inherit not only intelligence but also bias, misogyny, and power imbalances.
Conclusion
What is required today is recalibration. Platforms need to shift from reactive compliance to proactive accountability. Safeguards must be incorporated at the architectural level; they cannot be cosmetic or post-facto. Governance must encompass enforced ethical boundaries in addition to terms of service. The idea that “edgy” AI is a sign of advancement must also be rejected by society.
Artificial intelligence never promised freedom in the guise of vulgarity; its promise was improvement, support, and augmentation. The fundamental core of intelligence is lost when it is used as a tool for degradation. What remains is a choice between principled innovation and unbridled novelty, between responsibility and spectacle, between intelligence as purpose and intellect as power.
References
https://www.rediff.com/news/report/govt-orders-x-review-of-grok-over-explicit-content/20260103.htm

Introduction
The Ministry of Electronics and Information Technology recently released the 2023 amendment to the IT Intermediary Guidelines for social media and online gaming. The notification is significant as the drafting of the Digital India Bill is underway. There is no denying that this amendment, part of a series of measures revising and adding provisions, will significantly improve the dynamics of cyberspace in India in terms of reporting, grievance redressal, accountability, and the protection of digital rights and duties.
What is the Amendment?
The amendment introduces fact-checking, a crucial aspect of verifying information circulating on the various platforms in cyberspace. Misinformation and disinformation rose significantly during the Covid-19 pandemic, making fact-checking more important than ever. Policymakers have taken this into consideration and incorporated it into the Intermediary Guidelines. The key features of the guidelines are as follows –
- The phrase “online game,” which is now defined as “a game that is offered on the Internet and is accessible by a user through a computer resource or an intermediary,” has been added.
- A clause has been added that emphasises that if an online game poses a risk of harm to the user, intermediaries and complaint-handling systems must advise the user not to host, display, upload, modify, publish, transmit, store, update, or share any data related to that risky online game.
- A proviso to Rule 3(1)(f) has been added, which states that if an online gaming intermediary has provided users access to any legal online real money game, it must promptly notify its users of the change within 24 hours.
- Sub-rules have been added to Rule 4 that focus on any legal online real money game and require large social media intermediaries to exercise further due diligence. In certain situations, online gaming intermediaries:
- Are required to display a demonstrable and obvious mark of verification of such online game by an online gaming self-regulatory organisation on such permitted online real money game
- Must not themselves offer financing, or enable financing by a third party, for playing such games.
- Verification of real money online gaming has been added to Rule 4-A.
- The Ministry may designate as many self-regulatory organisations for online gaming as it deems necessary for verifying online real money games.
- Each online gaming self-regulatory body will prominently publish on its website/mobile application the procedure for filing complaints and the appropriate contact information.
- After reviewing an application, the self-regulatory authority may declare a real money online game to be a legal game if it is satisfied that:
- There is no wagering on the outcome of the game.
- The game complies with the regulations governing the legal age at which a person can enter into a contract.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have a new rule 4-B (Applicability of certain obligations after an initial period) that states that the obligations of the rule under rules 3 and 4 will only apply to online games after a three-month period has passed.
- According to Rule 4-C (Obligations in Relation to Online Games Other Than Online Real Money Games), the Central Government may direct the intermediary to make necessary modifications without affecting the main idea if it deems it necessary in the interest of India’s sovereignty and integrity, the security of the State, or friendship with foreign States.
- Intermediaries, such as social media companies or internet service providers, will have to take action against content identified by the government’s fact-checking unit or risk losing their “safe harbour” protection under Section 79 of the IT Act, which lets intermediaries escape liability for what third parties post on their platforms. This is problematic and unacceptable. Additionally, these notified revisions can circumvent the takedown-order process described in Section 69A of the IT Act, 2000, and run contrary to the ruling in Shreya Singhal v. Union of India (2015), which established precise standards for content blocking.
- The government should not be able to decide whether material is “fake” or “false” without a right of appeal or judicial oversight, since such power could be abused to suppress scrutiny or investigation by media organisations. Government takedown orders have been issued for critical remarks or opinions posted on social media; most platforms comply, and only a few, such as Twitter, have challenged them in court.
Conclusion
The new rules briefly cover fact-checking, content takedown by the government, and the relevance and scope of Sections 69A and 79 of the Information Technology Act, 2000. It is therefore pertinent that intermediaries maintain compliance with the rules to ensure that the regulations remain sustainable and efficient for the future. Despite these rules, the responsibility of netizens cannot be neglected; active civic participation, coupled with such efficient regulations, will go a long way in safeguarding the Indian cyber ecosystem.