#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment scheme, and it has been widely shared. However, our research indicates that the video has been altered using AI and is being used to spread misinformation.

Claim:
The video suggests that Finance Minister Nirmala Sitharaman is endorsing an automated investment system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To verify the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such government investment scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip had been manipulated through voice cloning.
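For readers who want to script this kind of check, below is a minimal sketch of how a clip could be submitted to a synthetic-media detection service over HTTP. The endpoint, token, and response format are placeholders for illustration, not Hive Moderation’s documented API; our own analysis used Hive’s web interface.

```python
import requests

# Placeholder endpoint and token: assumptions for illustration only,
# not Hive Moderation's documented API.
API_URL = "https://api.example-detector.com/v1/analyze"
API_TOKEN = "YOUR_API_TOKEN"


def check_clip(video_path: str) -> dict:
    """Upload a video clip to a synthetic-media detection service
    and return its raw verdict (e.g. AI-generated vs. authentic)."""
    with open(video_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"media": f},
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Hypothetical file name for the clip being checked.
    verdict = check_clip("viral_clip.mp4")
    print(verdict)
```

Such a script only automates the submission step; the verdict still needs to be read alongside manual checks such as lip-sync analysis and source tracing.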

Conclusion:
The viral video circulating on social media, which shows Union Finance Minister Nirmala Sitharaman endorsing a new government investment scheme, is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

AI-generated content is taking up a growing share of today’s ever-changing tech landscape. Generative AI has emerged as a powerful tool that enables the creation of hyper-realistic audio, video, and images. While advantageous, this capability also has downsides, particularly around content authenticity and manipulation.
The impact of this content has been varied, with ethical, psychological, and social harms seen over the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of privacy; it can have enormous consequences for the victim’s personal and professional life. This blog examines the existing laws and whether they are equipped to deal with the challenges that this content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been altered deceptively using deep neural networks (DNNs). The technology is used to alter a person’s identity, most commonly in the form of a “face swap”, where the identity of a source subject is transferred onto a destination subject. The destination subject’s facial expressions and head movements remain the same, but the face that appears in the video is that of the source. In videos, identities can be substituted by way of replacement or reenactment.
This superimposition produces realistic fabricated content, such as fake nudes. Creating a deepfake is no longer a costly endeavour: it requires a Graphics Processing Unit (GPU), free and open-source software that is easy to download, and basic graphics-editing and audio-dubbing skills. Common tools for creating deepfakes include DeepFaceLab and FaceSwap, both public, open-source projects supported by thousands of users who actively contribute to the evolution of the software and its underlying models.
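To make the “face swap” idea concrete, the sketch below performs a naive single-frame face-region swap using classical OpenCV face detection and blending. This is purely illustrative: real deepfake tools such as DeepFaceLab learn the identity mapping with deep neural networks rather than pasting a resized face patch. The image file names are placeholders.

```python
import cv2
import numpy as np

# Haar-cascade face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def first_face(img):
    """Return the (x, y, w, h) box of the first detected face."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return faces[0]


source = cv2.imread("source_face.jpg")        # identity to transfer (placeholder file)
destination = cv2.imread("destination.jpg")   # frame being altered (placeholder file)

sx, sy, sw, sh = first_face(source)
dx, dy, dw, dh = first_face(destination)

# Resize the source face to the destination face box and blend it in.
src_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
mask = np.full(src_patch.shape, 255, dtype=np.uint8)
center = (int(dx + dw // 2), int(dy + dh // 2))
swapped = cv2.seamlessClone(src_patch, destination, mask, center,
                            cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```

Even this crude patch-and-blend approach hints at why DNN-based pipelines, which model expression and lighting frame by frame, can produce far more convincing results.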
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Definitions governing AI-generated explicit content are often inadequate, which leads to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still at an early stage.
- Existing consent-based and harassment laws leave gaps when applied to AI-generated nudes.
- Proving intent and identifying perpetrators in digital crimes remains an unresolved evidentiary challenge.
Policy Responses and Global Trends
Presently, the global response to deepfakes is still developing. The UK has enacted the Online Safety Act, the EU has the AI Act, the US has federal measures such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation to deal with AI and its related issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. By emphasising intermediary liability and safe-harbour protections, the IT Rules play a crucial role in tackling harmful content such as AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms that establish clear and precise definitions and consent frameworks, and institute strong penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate for global cooperation by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Strengthen platform accountability through stricter responsibility for detecting and removing harmful AI-generated content, backed by robust screening mechanisms to counter the large influx of such content.
- Run public awareness campaigns that educate users about their rights and the resources available to them if they are targeted.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981 
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/

Introduction
The Indian Ministry of Information and Broadcasting has proposed new legislation. On 10 November 2023, a draft bill emerged, a parchment of governance seeking to sculpt the contours of the nation's broadcasting landscape. The Broadcasting Services (Regulation) Bill, 2023, is not merely a legislative document; it is a harbinger of change, a testament to the surge of technology and the diversification of media in the age of the internet.
The bill, slated to replace the Cable Television Networks (Regulation) Act of 1995, acknowledges the paradigm shifts that have occurred in the media ecosystem. The emergence of Internet Protocol Television (IPTV), over-the-top (OTT) platforms and other digital broadcasting services has rendered the previous legislation a relic, ill-suited to the dynamism of the current milieu. The draft bill, therefore, stands at the precipice of the future, inviting stakeholders and the vox populi to weigh in on its provisions, to shape the edifice of regulation that will govern the airwaves and the digital streams.
Defining Certain Clauses of the Bill
Clause 1 (dd) - The Programme
In the intricate tapestry of the bill's clauses, certain threads stand out, demanding scrutiny and careful consideration. Clause 1(dd), for instance, grapples with the definition of 'Programme,' a term that, in its current breadth, could ensnare the vast expanse of audio, visual, and written content transmitted through broadcasting networks. The implications are profound: content disseminated via YouTube or any website could fall within the ambit of this regulation, a prospect that raises questions about the scope of governmental oversight in the digital realm.
Clause 2(v) - The news and current affairs
Clause 2(v) delves into the murky waters of ‘news and current affairs programmes,’ a definition that, as it stands, is a maelstrom of ambiguity. The phrases ‘newly-received or noteworthy audio, visual or audio-visual programmes’ and ‘about recent events primarily of socio-political, economic or cultural nature’ are a siren's call, luring the unwary into a vortex of subjective interpretation. The potential for abuse looms large, threatening the right to freedom of expression enshrined in Article 19 of the Indian Constitution. It is a clarion call for stakeholders to forge a definition that is objective and clear, one in accordance with the Supreme Court's decision in Shreya Singhal v. Union of India, which upheld the sanctity of digital expression while advocating for responsible content creation.
Clause 2(y) - Over-the-Top (OTT) Broadcasting Services
Clause 2(y) casts its gaze upon OTT broadcasting services, entities that operate in a realm distinct from traditional broadcasting. The one-to-many paradigm of broadcast media justifies a degree of governmental control, but OTT streaming is a more intimate affair, a one-on-one engagement with content on personal devices. The draft bill's attempt to umbrella OTT services under the broadcasting moniker is a conflation that could stifle the diversity and personalised nature of these platforms. It is a conundrum that other nations, such as Australia and Singapore, have approached with nuanced regulatory frameworks that recognise the unique characteristics of OTT services.
Clause 4(4) - Requirements for Broadcasters and Network Operators
The bill's journey through the labyrinth of regulation is fraught with other challenges. The definition of 'Person' in Clause 2(z), the registration exemptions in Clause 4(4), the prohibition on state governments and political parties from engaging in broadcasting in Clause 6, and the powers of inspection and seizure in Clauses 30(2) and 31, all present a complex puzzle. Each clause, each sub-section, is a cog in the machinery of governance that must be calibrated with precision to balance the imperatives of regulation with the freedoms of expression and innovation.
Clause 27 - Advisory Council
The Broadcast Advisory Council, envisioned in Clause 27, is yet another crucible where the principles of impartiality and independence must be tempered. The composition of this council, the public consultations that inform its establishment, and the alignment with constitutional principles are all vital to its legitimacy and efficacy.
A Way Forward
It is up to us, as citizens and participants in the democratic process, to engage with the bill's provisions as it makes its way through the halls of public discourse and legislative examination. We must employ the instruments of study and discussion to ensure that the final version of the Broadcasting Services (Regulation) Bill, 2023, is a symbol of progress and a charter that upholds our most valued liberties while embracing the opportunities presented by the digital era.
The draft bill is more than just a document in this turbulent time of transition; it is a story of India's dreams, a testament to its dedication to democracy, and a roadmap for its digital future. Therefore, let us take this duty with the seriousness it merits, as the choices we make today will have a lasting impact on the history of our country and the media environment for future generations.
References
- https://scroll.in/article/1059881/why-indias-new-draft-broadcast-bill-has-raised-fears-of-censorship-and-press-suppression
- https://pib.gov.in/PressReleasePage.aspx?PRID=1976200
- https://www.hindustantimes.com/india-news/new-broadcast-bill-may-also-cover-those-who-put-up-news-content-online-101701023054502.html

EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed whether the footage is genuine or digitally manipulated.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, revealing the natural beauty of the hallowed mountain. The video circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques give the video its realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. The tool found it to be AI-generated.
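One simple way to support such a verification is to compare a frame grabbed from the viral clip against a reference image using perceptual hashing, which complements a reverse image search and a detector such as Hive Moderation. The sketch below assumes the third-party Pillow and imagehash libraries; the file names and similarity threshold are illustrative assumptions.

```python
from PIL import Image
import imagehash

# Compare a frame from the viral clip with a reference frame (placeholder
# file names). A small Hamming distance between perceptual hashes suggests
# the same imagery; a large distance suggests different or heavily altered
# footage.
viral_frame = imagehash.phash(Image.open("viral_frame.jpg"))
reference = imagehash.phash(Image.open("reference_frame.jpg"))

distance = viral_frame - reference  # Hamming distance between the hashes
print(f"perceptual-hash distance: {distance}")

if distance <= 8:  # threshold is an assumption; tune for your use case
    print("Frames look visually similar.")
else:
    print("Frames differ substantially; further checks needed.")
```

A hash comparison alone cannot prove a video is AI-generated; it simply helps narrow down whether the viral footage matches any candidate source material.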

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).