#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims Union Finance Minister Nirmala Sitharaman endorsed the government’s new investment project. The video has been widely shared. However, our research indicates that the video has been AI-altered and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated investment system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment scheme. We also observed that the lip movements in the video appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip was manipulated using voice cloning.
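The reverse-search step above starts from still frames pulled out of the viral clip. A minimal sketch of that frame-extraction stage is below, assuming OpenCV (`opencv-python`) is installed; the sampling interval, output directory, and file names are illustrative, not part of any specific fact-checking tool.

```python
def sample_indices(total_frames: int, fps: float, every_sec: float = 2.0) -> list[int]:
    """Return frame indices spaced roughly `every_sec` seconds apart."""
    step = max(1, round(fps * every_sec))
    return list(range(0, total_frames, step))


def extract_keyframes(video_path: str, out_dir: str = "frames") -> int:
    """Save sampled frames as JPEGs for manual reverse-image search; returns count saved."""
    import os
    import cv2  # third-party: pip install opencv-python

    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = 0
    for idx in sample_indices(total, fps):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the sampled frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
    cap.release()
    return saved
```

The saved JPEGs can then be uploaded to a reverse-image search engine to locate the original footage, as was done here with the DD News live-stream.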

Conclusion:
The viral social media video showing Union Finance Minister Nirmala Sitharaman endorsing a government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Significantly, in March 2023, notifications under the Prevention of Money Laundering Act, 2002 (PMLA) brought Virtual Digital Asset Service Providers (VDA SPs) under the purview of the Anti-Money Laundering/Counter-Financing of Terrorism (AML-CFT) framework. This was an important step toward regulating VDA SP operations and guaranteeing adherence to AML-CFT regulations.
The significance of AML-CFT procedures
The AML-CFT framework's incorporation of Virtual Digital Asset Service Providers (VDA SPs) is essential for protecting the financial sector from illegal activities such as money laundering and the financing of terrorism. These regulations become more crucial as the market for digital assets develops and gains wider adoption.
Money laundering is the practice of concealing the source of illegally obtained funds, so it is critical to have strict policies in place to track down and stop such operations. Furthermore, terrorism financing poses a serious danger to international security, making the disruption of money flows to terrorist organisations a top priority for authorities worldwide.
The goal of policymakers' move to include VDA SPs in the AML-CFT architecture is to set up control and surveillance procedures that will guarantee these organisations' open and honest operations. This involves tracking transactions, flagging questionable activity, and conducting extensive customer investigations. Incorporating such procedures not only reduces the potential for financial crimes but also builds confidence and trust in the electronic asset market.
It is important to see the significance of AML-CFT procedures and the changes in the legal framework to reflect the evolving characteristics of digital currencies. These procedures are essential to preserving the reliability and safety of the wider banking system.
Notifications of Compliance Show Cause
Under Section 13 of the PMLA, 2002, FIU IND issued compliance Show Cause Notices to nine offshore Virtual Digital Asset Service Providers (VDA SPs) as part of its commitment to enforcing the regulations. This decisive step subjects these entities to scrutiny and seeks to bring them under regulatory oversight.
Governmental Response
The Director of FIU IND has written to the Secretary of the Ministry of Electronics and Information Technology to take further measures in view of the offshore firms' non-compliance. According to the notification, URLs of these entities, which operate in India in violation of the PML Act's requirements, are to be blocked.
Mandatory Registration for VDA SPs
Virtual Digital Asset Service Providers (both onshore and offshore) who perform a range of operations, including the exchange of virtual digital assets for fiat currencies, the transfer of virtual digital assets, and the safekeeping or administration of virtual digital assets, are now obliged to register with FIU IND.
Range of Statutory Responsibilities
In accordance with the PML Act, VDA SPs are subject to several requirements, including documentation, disclosure, and other duties. One of their responsibilities is to register with the FIU IND. The primary focus is on guaranteeing that VDA SPs comply with AML-CFT protocols, hence enhancing the general reliability of the banking industry.
Difficulties with Offshore Compliance
There are many obstacles in guaranteeing that offshore organisations comply with Anti Money Laundering/Counter Financing of Terrorism (AML-CFT), chief amongst them being their unwillingness to undergo registration. Some overseas Virtual Digital Asset Service Providers (VDA SPs) have been reluctant to comply with the existing rules and regulations, even though they cater to a significant number of Indian users. There are several reasons for this hesitation, such as worries about heightened monitoring, the expense of compliance, and the apparent complexity of governmental processes. Regulatory organisations have taken steps to close the discrepancy between offshore businesses' real activities and the regulations they must follow. In addition to maintaining the trustworthiness of the economic system, resolving the issues with offshore adherence is essential for promoting confidence and openness in the market for electronic assets.
Conclusion
FIU IND has demonstrated its dedication to creating an effective regulatory framework for Virtual Digital Asset Service Providers through its recent measures. India hopes to fortify its countermeasures against money laundering and safeguard the financial well-being of its users by expanding the AML-CFT legislation to offshore firms. The continuous efforts to restrict the URLs of non-compliant companies show a proactive approach to stopping illicit activity and fostering a safe and law-abiding virtual asset ecosystem. The safety and soundness of the banking sector will be crucially maintained by laws and regulations as the digital world develops.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=1991372
- https://www.thehindubusinessline.com/books/reviews/business-economy/fiu-ind-issues-compliance-showcause-notices-to-nine-offshore-vda-sps/article67684613.ece
- https://business.outlookindia.com/news/fiu-issues-notice-to-9-offshore-crypto-platforms-writes-to-meity-for-blocking-of-urls
Introduction
Pagers were commonly utilized in the late 1990s and early 2000s, especially in fields that needed fast, reliable communication and swift alerts and information sharing. Pagers typically offer a broader coverage range, particularly in remote areas with limited cellular signals, which enhances their dependability. They are simple electronic devices with minimal features, making them easy to use and less prone to technical issues. The decline in their use has been caused by the rise of mobile phones and their extensive features, offering more advanced communication options like voice calls, text messages, and internet access. Despite this, pagers are still used in some specific industries.
A shocking incident occurred on 17th September 2024, when thousands of pager devices exploded within seconds across Lebanon in a synchronized attack targeting the US-designated terror group Hezbollah. The explosions killed at least 9 and injured over 2,800 individuals in a country that has been caught up in the Israel-Palestine tensions in its backyard.
The Pager Bombs Incident
On Tuesday, 17th September 2024, hundreds of pagers carried by Hezbollah members in Lebanon exploded in an unprecedented attack, eclipsing a series of covert assassinations and cyber-attacks in the region over recent years. The Iran-backed militant group said the wireless devices began to explode around 3:30 p.m. local time in a targeted attack on Hezbollah operatives. The pagers that exploded were new and had been purchased by Hezbollah in recent months. Experts say the explosions underscore Hezbollah's vulnerability, as its communication network was compromised to deadly effect. Several areas of the country were affected, particularly Beirut's southern suburbs, a populous area that is a known Hezbollah stronghold. At least 9 people were killed, including a child, and about 2,800 people were wounded, overwhelming Lebanese hospitals.
Second Wave of Attack
As per the most recent reports, the next day, following the pager bombing incident, a second wave of blasts hit Beirut and multiple parts of Lebanon. Certain wireless devices such as walkie-talkies, solar equipment, and car batteries exploded, resulting in at least 9 people killed and 300 injured, according to the Lebanese Health Ministry. The attack is said to have embarrassed Hezbollah, incapacitated many of its members, and raised fears about a greater escalation of hostilities between the Iran-backed Lebanese armed group and Israel.
A New Kind of Threat - ‘Cyber-Physical’ Attacks
The incident raises serious concerns about physical tampering with daily-use electronic devices and the possibility of a new age of warfare. It highlights a serious physical threat: even devices such as smartwatches, earbuds, and pacemakers could be vulnerable to tampering if an attacker gains physical access to them. We are potentially looking at a new age of ‘cyber-physical’ threats in which the boundaries between the digital and the physical are blurring rapidly. The attack raises questions about unauthorised access to, and manipulation of, the physical security of such electronic devices. If even seemingly innocuous devices can be weaponised to such devastating effect, there is cause for concern about global supply chains across sectors. Such attacks are capable of causing significant disruption and casualties, as demonstrated by the pager bombings in Lebanon, which resulted in numerous deaths and injuries. The incident also raises questions about regulatory mechanisms and oversight checks at every stage of the electronic device lifecycle, from component manufacturing to final assembly and shipment. This is a grave issue because adversaries embedding explosives or making malicious modifications can turn such electronic devices into weapons.
CyberPeace Outlook
The pager bombing attack demonstrates a new era of threats in warfare tactics, revealing the advanced coordination and technical capabilities of adversaries who have weaponised daily-use electronic devices. By targeting the hardware of these devices, they have presented a serious new threat to hardware security. The threat is grave and has understandably raised widespread apprehension globally. Such gross weaponisation of daily-use devices, especially in a conflict context, also triggers concerns about the violation of International Humanitarian Law principles. It further raises serious questions about the liability of the companies, suppliers, and manufacturers of such devices, who are responsible for regulatory compliance and for ensuring the authenticity of their products.
The incident highlights the need for a more robust regulatory landscape, with stricter supply chain regulations, as we adjust to the realities of a possible new era of weaponisation and conflict expression. CyberPeace recommends the incorporation of stringent tracking and vetting processes in product supply chains, along with the strengthening of international cooperation mechanisms to ensure compliance with protocols on the responsible use of technology. These measures will go a long way towards establishing peace in global cyberspace and restoring trust and safety in everyday technologies.
References:
- https://indianexpress.com/article/what-is/what-is-a-pager-9573113/
- https://www.theguardian.com/world/2024/sep/18/hezbollah-pager-explosion-lebanon-israel-gold-apollo

Introduction
Artificial Intelligence (AI) is fast transforming our digital future, reshaping healthcare, finance, education, and cybersecurity. But alongside this progress, bad actors are weaponising the technology. Increasingly, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and accelerate social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of the past, these AI-generated messages are tailored to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: OpenAI and Microsoft have recently reported that Russian and North Korean APTs employed LLMs to create customised phishing lures and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Large Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While most AI tools contain safety mechanisms to guard against abuse, threat actors often resort to "jailbreaking" to evade these protections. Once such constraints are lifted, a model can be used to develop polymorphic malware that alters its code structure to avoid detection, or to obfuscate PowerShell or Python scripts so that conventional antivirus software struggles to identify them. LLMs have also been employed to suggest techniques for installing backdoors, further facilitating stealthy access to compromised systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially around elections, protests, and geopolitical disputes. With the assistance of LLMs, these actors can create massive volumes of fake news stories, deepfake interview transcripts, imitation social media posts, and bogus public comments on online forums and petitions. The localisation of content makes this strategy especially perilous: messages are written with cultural and linguistic specificity, making them credible and harder to detect. The ultimate aim is to sow societal unrest, manipulate public sentiment, and erode faith in democratic institutions.
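The malware-obfuscation bullet above notes that polymorphic or obfuscated scripts are hard for signature-based antivirus tools to identify. One common defensive heuristic is to measure the Shannon entropy of a script: base64-encoded or packed payloads tend to score higher than ordinary source code. The sketch below is a deliberately naive illustration; the 5.0-bits-per-character threshold is an assumption chosen for demonstration, not a value drawn from any real product.

```python
import math
from collections import Counter


def shannon_entropy(text: str) -> float:
    """Bits per character of the string; higher values suggest packed/encoded data."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def looks_obfuscated(script: str, threshold: float = 5.0) -> bool:
    """Crude flag: entropy above the (illustrative) threshold hints at obfuscation."""
    return shannon_entropy(script) > threshold
```

Real detection engines combine many signals (API call patterns, behavioural sandboxing, ML classifiers); an entropy check alone is easy to evade, which is precisely why AI-assisted polymorphic malware is so concerning.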
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report, "Disrupting Malicious Uses of AI" (June 2025), which, together with Microsoft's earlier "Staying Ahead of Threat Actors in the Age of AI", outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named several advanced persistent threat (APT) groups, each attributed to a particular nation-state. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and scaling operations. Significantly, the report noted that the tools were not used to produce malware, but rather to support the preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the biggest worries is how malicious users can "jailbreak" AI models, tricking them into generating prohibited content through adversarial inputs. Common methods include:
- Roleplay: Asking the AI to act as a fictional persona, such as a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Posing sensitive queries in less heavily moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, turning otherwise benign tools into instruments of cybercrime.
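A first line of defence against the jailbreak methods listed above is simple pattern screening of incoming prompts. The sketch below is purely illustrative: the pattern list is an assumption invented for this example, and, as the list above shows, obfuscation and language switching are specifically designed to slip past keyword filters, which is why production moderation relies on trained classifiers rather than regexes.

```python
import re

# Illustrative patterns only; real moderation systems use ML classifiers,
# not hand-written keyword lists.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend (you are|to be)",
    r"act as .{0,40}(no|without) (restrictions|filters)",
    r"developer mode",
]


def flag_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt matches (empty list = no naive red flags)."""
    lowered = prompt.lower()
    return [p for p in JAILBREAK_PATTERNS if re.search(p, lowered)]
```

Because such filters are trivially evaded (e.g. by paraphrasing or switching languages), they serve only as one cheap signal among many in a layered defence.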
Conclusion
As AI systems evolve and become more accessible, their use by state-sponsored cyber actors poses an unprecedented threat to global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a force multiplier for adversarial campaigns. AI tools such as ChatGPT, created for benevolent purposes, can be repurposed to amplify phishing, propaganda, and social engineering attacks. Cross-border governance, ethical development practices, and sound cyber hygiene must be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf