# FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that went viral on Facebook claims Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. The video has been widely shared. However, our research indicates that the video has been AI-altered and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated trading system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such government investment scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after presenting the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, showing that the viral video was digitally altered. Technical analysis using the Hive moderation tool further found that the viral clip was manipulated through voice cloning.

Conclusion:
The viral video circulating on social media, which shows Union Finance Minister Nirmala Sitharaman endorsing a new government investment project, is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Attackers have recently exploited the CVE-2017-0199 vulnerability in Microsoft Office to deliver a fileless variant of the Remcos RAT. Remcos RAT gives the attacker full control of infected systems. This article provides a detailed technical description of the vulnerability, the attack vector, and the attackers’ tactics, together with practical steps to counter the identified risks.
The Targeted Malware: Remcos RAT
Remcos RAT (Remote Control & Surveillance) is a commercially available remote access tool designed for legitimate administrative use. However, it has been widely adopted by cybercriminals for its stealth and extensive control capabilities, enabling:
- System control and monitoring
- Keylogging
- Data exfiltration
- Execution of arbitrary commands
The fileless variant utilised in this campaign makes detection even more challenging by running entirely in system memory, leaving minimal forensic traces.
Attack Vector: Phishing with Malicious Excel Attachments
The attack begins with a phishing email that appears to be legitimate business communication, such as a purchase order or invoice. The email carries an Excel attachment weaponized to exploit the CVE-2017-0199 vulnerability.
Technical Analysis: CVE-2017-0199 Exploitation
Vulnerability Assessment
- CVE-2017-0199 is a Remote Code Execution (RCE) vulnerability in Microsoft Office’s handling of Object Linking and Embedding (OLE) objects.
- Affected Components:
- Microsoft Word
- Microsoft Excel
- WordPad
- CVSS Score: 7.8 (High Severity)
Mechanism of Exploitation
The vulnerability enables attackers to craft a malicious document that, when opened, fetches and executes an external payload via an HTML Application (HTA) file. Execution requires no user interaction beyond opening the document.
Detailed Exploitation Steps
- Phishing Email and Malicious Document
- The email contains an Excel file designed to exploit CVE-2017-0199.
- When the attachment is opened, the document automatically connects to a remote server (e.g., 192.3.220[.]22) to download an HTA file (cookienetbookinetcache.hta).
- Execution via mshta.exe
- The downloaded HTA file is executed using mshta.exe, a legitimate Windows process for running HTML Applications.
- This execution is seamless and does not prompt the user, making the attack stealthy.
- Multi-Layer Obfuscation
- The HTA file is wrapped in several layers of scripting, including:
- JavaScript
- VBScript
- PowerShell
- This obfuscation helps evade static analysis by traditional antivirus solutions.
- Fileless Payload Deployment
- The downloaded executable leverages process hollowing to inject malicious code into legitimate system processes.
- The Remcos RAT payload is loaded directly into memory, avoiding the creation of files on disk.
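The chain described above (Office document → remote HTA → mshta.exe) leaves a distinctive parent–child process signature even when no file touches disk. The following is a minimal, hypothetical detection heuristic, not a real EDR rule; the process-name sets are assumptions drawn from the attack chain above.

```python
# Hypothetical heuristic: an Office application spawning a script host
# such as mshta.exe is rarely legitimate and matches the
# CVE-2017-0199 -> HTA execution chain described above.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SCRIPT_HOSTS = {"mshta.exe", "wscript.exe", "cscript.exe", "powershell.exe"}

def is_suspicious_spawn(parent: str, child: str) -> bool:
    """Flag process-creation events where an Office app launches a script host."""
    return parent.lower() in OFFICE_PARENTS and child.lower() in SCRIPT_HOSTS

# Example process-creation events: (parent process, child process)
events = [("EXCEL.EXE", "mshta.exe"), ("explorer.exe", "notepad.exe")]
alerts = [e for e in events if is_suspicious_spawn(*e)]
print(alerts)  # [('EXCEL.EXE', 'mshta.exe')]
```

Real EDR products apply the same idea to live process telemetry rather than a static event list.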
Fileless Malware Techniques
1. Process Hollowing
The attack replaces the memory of a legitimate process (e.g., explorer.exe) with the malicious Remcos RAT payload. This allows the malware to:
- Evade detection by blending into normal system activity.
- Run with the privileges of the hijacked process.
2. Anti-Analysis Techniques
- Anti-Debugging: Detects the presence of debugging tools and terminates execution if one is found.
- Anti-VM and Sandbox Evasion: Ensures execution only on real systems to avoid detection during security analysis.
3. In-Memory Execution
- By running entirely in system memory, the malware avoids leaving artifacts on the disk, making forensic analysis and detection more challenging.
Capabilities of Remcos RAT
Once deployed, Remcos RAT provides attackers with a comprehensive suite of functionalities, including:
- Data Exfiltration:
- Stealing system information, files, and credentials.
- Remote Execution:
- Running arbitrary commands, scripts, and additional payloads.
- Surveillance:
- Enabling the camera and microphone.
- Capturing screen activity and clipboard contents.
- System Manipulation:
- Modifying Windows Registry entries.
- Controlling system services and processes.
- Disabling user input devices (keyboard and mouse).
Advanced Phishing Techniques in Parallel Campaigns
1. DocuSign Abuse
Attackers exploit legitimate DocuSign APIs to create authentic-looking phishing invoices. These invoices can trick users into authorising payments or signing malicious documents, bypassing traditional email security systems.
2. ZIP File Concatenation
By appending multiple ZIP archives into a single file, attackers exploit inconsistencies in how different tools handle these files. This allows them to embed malware that evades detection by certain archive managers.
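The concatenation trick can be illustrated with Python's standard library alone. The sketch below builds two small ZIP archives with invented file names, appends one to the other, and counts End of Central Directory (EOCD) records; a well-formed single ZIP contains exactly one EOCD record, so finding more than one is a useful heuristic for detecting concatenation.

```python
import io
import zipfile

def make_zip(name: str, data: bytes) -> bytes:
    """Build a single-entry ZIP archive in memory."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(name, data)
    return buf.getvalue()

EOCD_SIG = b"PK\x05\x06"  # End of Central Directory record signature

def count_eocd(blob: bytes) -> int:
    """A well-formed single ZIP has exactly one EOCD record."""
    return blob.count(EOCD_SIG)

decoy = make_zip("invoice.txt", b"legitimate-looking content")
hidden = make_zip("payload.bin", b"second archive appended at the end")
concatenated = decoy + hidden

print(count_eocd(concatenated))  # 2 -> concatenation detected
# Tools that locate the ZIP structure from the end of the file (as
# Python's zipfile does) see only the appended archive, while tools
# parsing from the start may see only the decoy.
with zipfile.ZipFile(io.BytesIO(concatenated)) as zf:
    print(zf.namelist())
```

This inconsistency between readers is exactly what the technique exploits: a scanner and an archive manager can disagree about which files the archive contains.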
Broader Implications of Fileless Malware
Fileless malware like Remcos RAT poses significant challenges:
- Detection Difficulties: Traditional signature-based antivirus systems struggle to detect fileless malware, as there are no static files to scan.
- Forensic Limitations: The lack of disk artifacts complicates post-incident analysis, making it harder to trace the attack's origin and scope.
- Increased Sophistication: These campaigns demonstrate the growing technical prowess of cybercriminals, leveraging legitimate tools and services for malicious purposes.
Mitigation Strategies
- Patch Management
- It is important to regularly update software to address known vulnerabilities like CVE-2017-0199. Microsoft released a patch for this vulnerability in April 2017.
- Advanced Email Security
- It is important to implement email filtering solutions that can detect phishing attempts, even those using legitimate services like DocuSign.
- Endpoint Detection and Response (EDR)
- Always use EDR solutions to monitor for suspicious behavior, such as unauthorized use of mshta.exe or process hollowing.
- User Awareness and Training
- Educate users about phishing techniques and the risks of opening unexpected attachments.
- Behavioral Analysis
- Deploy security solutions capable of detecting anomalous activity, even if no malicious files are present.
Conclusion
This campaign, which exploited CVE-2017-0199 to deploy a fileless variant of Remcos RAT, shows how sophisticated modern threats have become. Through layered obfuscation and in-memory execution, attackers bypass traditional antivirus protection and gain full control over infected computers. The threat is real: organisations must apply patches on time, build better detection technologies, and ensure their users remain wary of phishing threats.
References
- Fortinet FortiGuard Labs: Analysis by Xiaopeng Zhang
- Perception Point: Research on ZIP File Concatenation
- Wallarm: DocuSign Phishing Analysis
- Microsoft Security Advisory: CVE-2017-0199

Artificial intelligence is revolutionizing industries from healthcare to finance, influencing decisions that touch the lives of millions daily. However, this power carries a hidden danger: unfair outcomes from AI systems, reinforcement of social inequalities, and distrust of technology. One of the main causes is training data bias, which arises when the examples an AI model is trained on are unrepresentative or skewed. Dealing with it successfully requires a combination of statistical methods, fairness-minded algorithmic design, and robust governance across the AI lifecycle. This article discusses the origins of bias, ways to reduce it, and the distinctive role of fairness-conscious algorithms.
Why Bias in Training Data Matters
Bias in AI occurs when models mirror and reproduce patterns of inequality in the training data. When a dataset under-represents a demographic group or encodes historical biases, the model will learn to make decisions that harm that group. This has practical implications: biased AI can cause discrimination in hiring, lending, criminal-risk evaluation, and many other spheres of social life, compromising justice and equity. These problems are not only technical in nature but also demand moral principles and a system of governance (E&ICTA).
Bias is not uniform. It may stem from the data itself, from algorithm design, or from a lack of diversity among developers. Data bias occurs when the data does not represent the real world. Algorithmic bias may arise when design decisions inadvertently give one group an unfair advantage over another. Human bias can affect both data collection and the interpretation of the model. (MDPI)
Statistical Principles for Reducing Training Data Bias
Statistical principles are at the core of bias mitigation, reshaping how data and models interact. These approaches focus on data preparation, training-process adjustment, and model-output correction so that fairness becomes a quantifiable goal.
Balancing Data Through Re-Sampling and Re-Weighting
One approach is to ensure fair representation of all relevant groups in the dataset. This can be achieved by oversampling underrepresented groups and undersampling overrepresented groups. Re-sampling increases the number of minority examples, whereas re-weighting assigns greater weight to under-represented data points during training. These methods reduce the tendency of models to fit only the most salient patterns and improve coverage of vulnerable groups. (GeeksforGeeks)
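As a rough illustration, inverse-frequency re-weighting can be computed directly from label counts. This is a generic sketch with an invented toy dataset, not tied to any particular library:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Assign each class a weight inversely proportional to its frequency,
    so every class contributes equal total weight to the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

labels = ["majority"] * 80 + ["minority"] * 20  # imbalanced toy dataset
weights = inverse_frequency_weights(labels)
print(weights)  # {'majority': 0.625, 'minority': 2.5}
# Check: both classes now carry equal total weight (80*0.625 == 20*2.5 == 50).
```

The resulting per-example weights can then be passed to any training loss that supports sample weighting.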
Feature Engineering and Data Transformation
Another statistical technique is to transform data features so that sensitive attributes have less influence on the results. For example, fair representation learning adjusts the data representation to discourage bias during the training of the model. The disparate impact remover technique adjusts feature values so that the influence of sensitive attributes is reduced during learning. (GeeksforGeeks)
Measuring Fairness With Metrics
Statistical fairness metrics quantify how a model's behaviour differs across groups; common examples include demographic parity (equal positive-prediction rates across groups) and equalized odds (equal error rates across groups).
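One widely used metric is the demographic parity difference: the gap in positive-prediction rates between groups, where 0.0 means the rates are identical. A minimal computation on invented toy data:

```python
def demographic_parity_difference(preds, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means demographic parity holds exactly."""
    rates = []
    for g in set(groups):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates.append(sum(group_preds) / len(group_preds))
    return max(rates) - min(rates)

# Toy data: group "a" receives a positive prediction 3/4 of the time,
# group "b" only 1/4 of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as Fairlearn provide production-grade versions of this and related metrics.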
Fairness-Aware Algorithms Explained
Fairness-aware algorithms do not simply detect bias. They incorporate fairness goals into model construction and operate in three phases: pre-processing, in-processing, and post-processing.
Pre-Processing Techniques
Fairness-aware pre-processing addresses bias before the model consumes the data. Common approaches include:
- Rebalancing training data through sampling and re-weighting training data to address sample imbalances.
- Data augmentation to generate examples of underrepresented groups.
- Feature transformation removes or downplays the impact of sensitive attributes prior to the commencement of training. (IJMRSET)
These methods help ensure that the model is trained on more balanced data and reduce the chance of bias transferring from historical data.
In-Processing Techniques
The in-processing techniques alter the learning algorithm. These include:
- Fairness constraints that penalize the model for making biased predictions during training.
- Adversarial debiasing, where a second model is used to ensure that sensitive attributes are not predicted by the learned representations.
- Fair representation learning that modifies internal model representations so they carry less information about sensitive attributes.
Post-Processing Techniques
Fairness may be enhanced after training by changing the model outputs. These strategies comprise:
- Threshold adjustments to various groups to meet conditions of fairness, like equalized odds.
- Calibration techniques such that the estimated probabilities are fair indicators of the actual probabilities in groups. (GeeksforGeeks)
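The threshold-adjustment idea can be sketched in a few lines. The scores, groups, and threshold values below are invented for illustration; in practice the per-group thresholds are solved for on validation data to satisfy the chosen fairness condition.

```python
def apply_group_thresholds(scores, groups, thresholds):
    """Post-processing: convert model scores into decisions using a
    separate threshold per group to adjust selection rates."""
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

# Toy scores: group "a" tends to score higher than group "b".
scores = [0.70, 0.60, 0.45, 0.30]
groups = ["a", "a", "b", "b"]

# A single threshold of 0.5 selects everyone in "a" and no one in "b".
uniform = apply_group_thresholds(scores, groups, {"a": 0.5, "b": 0.5})
print(uniform)   # [1, 1, 0, 0]

# Lowering group "b"'s threshold narrows the selection-rate gap.
adjusted = apply_group_thresholds(scores, groups, {"a": 0.5, "b": 0.4})
print(adjusted)  # [1, 1, 1, 0]
```

Note the trade-off this makes explicit: the adjusted decisions differ from the score-optimal ones, which is the accuracy-fairness tension discussed below.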
Challenges
Mitigating bias is complex. Statistical bias minimization may come at the cost of model accuracy, creating a tension between predictive performance and fairness. Defining fairness itself can be difficult, because different applications require different criteria, and those criteria can conflict with one another. (MDPI)
Obtaining varied and representative data is also challenging because of privacy issues, incomplete records, and limited resources. Continuous auditing and reporting are needed to keep mitigation processes up to date as models are continually retrained. (E&ICTA)
Why Fairness-Aware Development Matters
The consequences of AI systems treating some groups unfairly are far-reaching. Discriminatory recruitment software may entrench inequality in the workplace. Biased credit rating may deprive deserving people of opportunities. Biased medical forecasts may result in the flawed allocation of medical resources. In each case, bias undermines credibility and clouds the broader promise of AI. (E&ICTA)
Fairness-aware algorithms and statistical mitigation plans provide a way to create AI that is not only powerful but also fair and trustworthy. They recognize that AI systems are social tools whose effects extend across society. Responsible development requires sustained fairness measurement, model adjustment, and ongoing human oversight.
Conclusion
AI bias is not a technical malfunction. It mirrors real-world disparities in data, which models can amplify. Reducing training data bias requires statistical rigor, careful algorithm design, and a readiness to address the trade-offs between fairness and performance. Fairness-conscious algorithms, which can be applied in pre-processing, in-processing, or post-processing, help deliver more equitable results. As AI takes part in ever more consequential decisions, fairness must be considered from the beginning so that systems serve the population responsibly and equitably.
References
- Understanding Bias in Artificial Intelligence: Challenges, Impacts, and Mitigation Strategies: E&ICTA, IITK
- Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies: JRPS Shodh Sagar
- Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies: MDPI
- Ensuring Fairness in Machine Learning Algorithms: GeeksforGeeks
- Bias and Fairness in Machine Learning Models: A Critical Examination of Ethical Implications: IJMRSET
- Bias in AI Models: Origins, Impact, and Mitigation Strategies: Preprints
- Bias in Artificial Intelligence and Mitigation Strategies: TCS
- Survey on Machine Learning Biases and Mitigation Techniques: MDPI

Executive Summary:
A video went viral on social media claiming to show a bridge collapsing in Bihar. The video prompted panic and discussion across various platforms. However, a thorough inquiry determined that this was not real footage but AI-generated content engineered to look like a real bridge collapse. This is a clear case of misinformation being deployed to create panic and confusion.

Claim:
The viral video shows a real bridge collapse in Bihar, indicating possible infrastructure failure or a recent incident in the state.
Fact Check:
Upon examination of the viral video, several visual anomalies were identified, such as unnatural movements, disappearing people, and unusual debris behaviour, suggesting the footage was artificially generated. We used the Hive AI Detector, which confirmed this by labelling the content as 99.9% AI. The environment also lacks realism, with abrupt, animation-like effects that would not occur in genuine footage.

No credible news outlet or government agency has reported a recent bridge collapse in Bihar. These factors confirm that the video is fabricated, created with artificial intelligence and designed to mislead viewers into believing it shows a real-life disaster.
Conclusion:
The viral video is a fake and confirmed to be AI-generated. It falsely claims to show a bridge collapsing in Bihar. This kind of video fosters misinformation and illustrates a growing concern about using AI-generated videos to mislead viewers.
Claim: A recent viral video captures a real-time bridge failure incident in Bihar.
Claimed On: Social Media
Fact Check: False and Misleading