#FactCheck - Misleading Viral Video Targets Rubika Liyaquat, Original Footage Tells Different Story
Executive Summary
A video circulating on social media claims that a Pakistani man misbehaved with TV anchor Rubika Liyaquat during a live television debate. Users sharing the clip alleged that the Pakistani participant silenced the anchor on live TV.
However, research by CyberPeace found the viral claim to be false and revealed that the video being shared on social media has been edited. In the original video, published on YouTube on November 26, 2025, the alleged Pakistani man was not present in the TV debate.
Claim
On February 13, 2026, a user shared the viral clip on X (formerly Twitter), claiming that the anchor was insulted during the debate and was left speechless. Another user on February 11, 2026, asked News18 India to verify the video and questioned who allowed such behaviour towards the journalist on air.

Fact Check:
To verify the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the full version of the debate uploaded on the official YouTube channel of News18 India on November 26, 2025. The nearly 40-minute original broadcast featured anchor Rubika Liyaquat along with panelists Zafar Islam, Varun Purohit, Prateek Kumar, Arvind Kumar Vajpayee, Tausif Ahmed Khan, and Aziz Khan. However, the person seen misbehaving with the anchor in the viral clip was not present in the original video.
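Reverse image search tools such as Google Lens work by matching compact visual fingerprints of images rather than raw pixels. As a rough illustration of that idea only (not Google's actual algorithm), the sketch below computes a simple "average hash" for toy grayscale frames in plain Python; the pixel grids are invented for the example, and a small Hamming distance between hashes suggests two frames come from the same footage:

```python
def average_hash(pixels):
    """Simple average hash: each bit is 1 if the pixel is brighter
    than the mean brightness of the whole frame."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count of differing bits; small distances suggest a match."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 4x4 grayscale "frames": frame_b is a slightly re-encoded copy
# of frame_a, while frame_c is a completely different image.
frame_a = [[10, 200, 30, 220], [15, 210, 25, 215],
           [12, 205, 28, 218], [11, 202, 27, 221]]
frame_b = [[12, 198, 32, 219], [14, 212, 24, 214],
           [13, 204, 29, 217], [10, 203, 26, 223]]
frame_c = [[250, 20, 240, 10], [245, 15, 235, 12],
           [248, 18, 242, 11], [246, 16, 238, 14]]

ha, hb, hc = map(average_hash, (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # near-duplicate: small distance
print(hamming_distance(ha, hc))  # different image: large distance
```

In practice fact-checkers extract several key frames from a viral clip and let a search engine compare such fingerprints against indexed video frames, which is how an edited clip can be traced back to its original broadcast.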

Upon carefully reviewing the footage, we located the actual segment around the 25-minute 40-second mark. In this portion, the anchor can be heard asking panelist Tausif Ahmed Khan to leave the show, using the same words heard in the viral clip. However, the original broadcast does not feature any Pakistani participant or any individual named “Nadeem Shahzad.”

Conclusion
Our research found that the viral claim is false. The circulating video has been edited, and the alleged Pakistani participant does not appear in the original debate uploaded on November 26, 2025.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration have their risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these AI models are not infallible. Some seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not grounded in the training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine learning algorithms is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
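The adversarial manipulation described above can be illustrated with a toy linear classifier, where a small, targeted nudge to each input feature flips the model's decision. This is a minimal sketch of the general idea (the weights, features, and labels here are invented for illustration, not taken from any real system):

```python
# Toy linear classifier: score = w . x; a positive score means "safe".
w = [0.5, -1.0, 0.8]

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "safe" if score > 0 else "unsafe"

x = [1.0, 0.2, 0.1]  # original input: score = 0.5 - 0.2 + 0.08 = 0.38
print(classify(x))   # classified as "safe"

# Adversarial tweak: nudge each feature slightly in the direction that
# lowers the score (the sign of -w), a crude gradient-sign-style step.
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]
print(classify(x_adv))  # a small perturbation flips the decision
```

Real attacks on deep models work on the same principle but use the model's gradients to find perturbations that are imperceptible to humans, which is why adversarial robustness is an active research area.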
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises creating ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches that nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI’s capabilities and limitations. Transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation. This fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must proceed responsibly. Overregulation of AI can stifle innovation; on the other hand, being lax could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector can help establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, when framing regulatory policies, policymakers need to prioritise user safety and trust without hindering creativity.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21
Introduction
The "drugs in parcel" scam has resurfaced with a new face. Cybercriminals impersonate FedEx, the police, and various other authorities while they are, in actuality, the perpetrators behind the renewed scam, which entails pressuring victims into sending money and divulging private information in order to escape fictitious legal repercussions.
Modus operandi
The modus operandi usually begins with a fraudster calling the victim while posing as a FedEx representative. The caller claims that a package in the victim's name contains illegal goods such as jewellery or narcotics. By this point, the victim feels afraid and apprehensive. A video call then follows with another person posing as a police officer, and this "fake officer" asks the victim to keep the matter confidential while it is being investigated.
After the call, the victim receives falsified paperwork, purportedly from the CBI and RBI, stating that an arrest warrant has been issued. Once the victim has fallen entirely under their sway, the fraudsters claim that the victim's Aadhaar has been used to carry out the unlawful conduct. They then ask the victim to submit bank account information and Aadhaar data for "investigation", and subsequently to transfer funds to a bank account for "RBI validation". Believing this to be genuine, victims send money to the fraudsters in the hope of clearing their names.
Recent incidents:
In the most recent instance of the "drug-in-parcel" scam, an IT professional in Pune was defrauded of Rs 27.9 lakh by online con artists posing as members of the Mumbai police's Cyber Crime Cell. The victim filed a First Information Report (FIR) at the police station, stating that on November 11, 2023, he received a call from a fraudster posing as a Mumbai police Cyber Crime Cell officer. The scammer falsely claimed to have discovered illegal narcotics in a package addressed to the complainant, sent from Mumbai to Taiwan, along with an expired passport and an SBI card. To avoid arrest in the fabricated drug case, the fraudster coerced the complainant into providing bank account information under the guise of "verification". Fearing legal consequences, the victim transferred Rs 27,98,776 in ten online transactions to two separate bank accounts as instructed. Upon realising the deception, the complainant reported the incident to the police, leading to an investigation.
In another such incident, in October 2023, the victim received a bogus identity card online from scammers who had phoned him. Claiming to have seized narcotics in a shipment sent from Mumbai to Thailand under his name, the fraudster persuaded the victim to wire money to a bank account in order to "clear the case" and obtain a "no-objection certificate (NOC)". The fraudsters threatened to arrest the victim for mailing the narcotics package if the money was not paid.
Furthermore, in August 2023, fraudsters posing as police officers and courier company executives defrauded a 25-year-old advertising student of Rs 53 lakh. They told her that narcotics had been discovered in a package she had sent to Taiwan and extorted money from her under the guise of avoiding legal action, including arrest. According to the police, callers posing as police officers threatened to arrest her and forced her to make up to 34 transactions totalling Rs 53.63 lakh from her and her mother's bank accounts to various bank accounts.
Measures to protect oneself from such scams
Call Verification:
- Be sure to always confirm the legitimacy of unexpected calls, particularly those purporting to be from law enforcement or delivery services. Make use of official contact information obtained from reliable sources to confirm the information presented.
Confidentiality:
- Use caution while disclosing personal information online or over the phone, particularly Aadhaar and bank account information. In general, legitimate authorities don't ask for private information in this way.
Official Documentation:
- Request official documents via the appropriate means. Make sure that any documents—such as arrest warrants or other government documents—are authentic by getting in touch with the relevant authorities.
No Haste in Transactions:
- Proceed with caution when responding hastily to requests for money or quick fixes. Creating a sense of urgency is a common tactic used by scammers to coerce victims into acting quickly.
Knowledge and Awareness:
- Stay informed about common fraud schemes. Keep up with the most recent tactics employed by online fraudsters to avoid falling for new scam variants.
Report Suspicious Activity:
- Notify the local police or other appropriate authorities of any suspicious calls or activities. Reports received in a timely manner can help investigations and shield others from falling for the same fraud.
2FA:
- Enable two-factor authentication (2FA) wherever you can to give online accounts and transactions an additional layer of protection. This may lessen the chance of unauthorised access.
Cybersecurity Software:
- To defend against malware, phishing attempts, and other online risks, install and update reputable antivirus and anti-malware software on a regular basis.
Educate Friends and Family:
- Inform friends and family about typical scams and how to avoid falling victim to fraud. A safer online environment can be achieved through increased collective knowledge.
Be Skeptical:
- If anything looks strange or too good to be true, it most often is. Trust your instincts and confirm the information before acting.
By taking these precautions and exercising caution, people may lessen their vulnerability to scams and safeguard their money and personal data from online fraudsters.
Conclusion:
Verifying calls, maintaining secrecy, checking official papers, transacting cautiously, and keeping up to date are all examples of protective measures for protecting ourselves from such scams. Using cybersecurity software, turning on two-factor authentication, and reporting suspicious activity are essential in stopping these types of frauds. Raising awareness and working together are essential to making the internet a safer place and resisting the activities of cybercriminals.
References:
- https://indianexpress.com/article/cities/pune/pune-cybercrime-drug-in-parcel-cyber-scam-it-duping-9058298/#:~:text=In%20August%20this%20year%2C%20a,avoiding%20legal%20action%20including%20arrest.
- https://www.the420.in/pune-it-professional-duped-of-rs-27-9-lakh-in-drug-in-parcel-scam/
- https://www.newindianexpress.com/states/tamil-nadu/2023/oct/16/the-return-of-drugs-in-parcel-scam-2624323.html
- https://timesofindia.indiatimes.com/city/hyderabad/2-techies-fall-prey-to-drug-parcel-scam/articleshow/102786234.cms

Introduction
Discussions at the conference focused on cybersecurity measures, specifically addressing cybercrime in the context of emerging technologies such as Non-Fungible Tokens (NFTs), Artificial Intelligence (AI), and the Metaverse. Session 5 of the conference focused on the interconnectedness between the darknet and cryptocurrency and the challenges it poses for law enforcement agencies and regulators. The panel noted that understanding AI is necessary for enterprises, that AI models still have shortcomings but trustworthy AI is the goal, and that AI technology must be transparent.
Darknet and Cryptocurrency
The darknet refers to the hidden part of the internet where illicit activities have proliferated in recent years. It was initially developed to provide anonymity, privacy, and protection to specific individuals such as journalists, activists, and whistleblowers. However, it has now become a playground for criminal activities. Cryptocurrency, particularly Bitcoin, has been widely adopted on the darknet due to its anonymous nature, enabling money laundering and other unlawful transactions.
Three major points emerge from this relationship: the integrated nature of the darknet and cryptocurrency, the need for regulations to prevent darknet-based crimes, and the importance of striking a balance between privacy and security.
Key Challenges:
- Integrated Relations: The darknet and cryptocurrency have evolved independently, with different motives and purposes. It is crucial to understand the integrated relationship between them and how criminals exploit this connection.
- Regulatory Frameworks: There is a need for effective regulations to prevent crimes facilitated through the darknet and cryptocurrency while striking a balance between privacy and security.
- Privacy and Security: Privacy is a fundamental right, and any measures taken to enhance security should not infringe upon individual privacy. A multistakeholder approach involving tech companies and regulators is necessary to find this delicate balance.
Challenges Associated with Cryptocurrency Use:
The use of cryptocurrency on the darknet poses several challenges. The risks associated with darknet-based cryptocurrency crimes are a significant concern. Additionally, regulatory challenges arise due to the decentralised and borderless nature of cryptocurrencies. Mitigating these challenges requires innovative approaches utilising emerging technologies.
Preventing Misuse of Technologies:
The discussion emphasised the need to stay a step ahead of those who would misuse these technologies, which were developed for entirely different purposes, and to prevent their use for crime.
Monitoring the Darknet:
The darknet, as explained, is an elusive part of the internet that necessitates the use of a special browser for access. Initially designed for secure communication by the US government, its purpose has drastically changed over time. The darknet’s evolution has given rise to significant challenges for law enforcement agencies striving to monitor its activities.
Around 95% of the activities carried out on the darknet are estimated to be associated with criminal acts, and estimates suggest that over 50% of global cybercrime revenue originates from the darknet. In other words, a substantial share of cybercrime is facilitated through it.
The exploitation of the darknet has raised concerns regarding the need for effective regulation. Monitoring the darknet is crucial for law enforcement, national agencies, and cybersecurity companies. The challenges associated with the darknet’s exploitation and the criminal activities facilitated by cryptocurrency emphasise the pressing need for regulations to ensure a secure digital landscape.
Use of Cryptocurrency on the Darknet
Cryptocurrency plays a central role in the activities taking place on the darknet. The discussion highlighted its involvement in various illicit practices, including ransomware attacks, terrorist financing, extortion, theft, and the operation of darknet marketplaces. These applications leverage cryptocurrency’s anonymous features to enable illegal transactions and maintain anonymity.
AI's Role in De-Anonymizing the Darknet and Monitoring Challenges:
1. AI’s Potential in De-Anonymizing the Darknet
During the discussion, it was highlighted how AI could be utilised to help in de-anonymizing the darknet. AI’s pattern recognition capabilities can aid in identifying and analysing patterns of behaviour within the darknet, enabling law enforcement agencies and cybersecurity experts to gain insights into its operations. However, there are limitations to what AI can accomplish in this context. AI cannot break encryption or directly associate patterns with specific users, but it can assist in identifying illegal marketplaces and facilitating their takedown. The dynamic nature of the darknet, with new marketplaces quickly emerging, adds further complexity to monitoring efforts.
2. Challenges in Darknet Monitoring
Monitoring the darknet poses various challenges due to its vast amount of data, anonymous and encrypted nature, dynamically evolving landscape, and the need for specialised access. These challenges make it difficult for law enforcement agencies and cybersecurity professionals to effectively track and prevent illicit activities.
3. Possible Ways Forward
To address the challenges, several potential avenues were discussed. Ethical considerations, striking a balance between privacy and security, must be taken into account. Cross-border collaboration, involving the development of relevant laws and policies, can enhance efforts to combat darknet-related crimes. Additionally, education and awareness initiatives, driven by collaboration among law enforcement, government entities, and academia, can play a crucial role in combating darknet activities.
The panel also addressed questions from the audience:
- How can law enforcement agencies and regulators use AI to detect and prevent crimes on the darknet and in cryptocurrency? The panel answered that law enforcement officers should be made AI- and technology-ready, and that upskilling programmes should be put in place for this purpose.
- How should lawyers and the judiciary understand the problem and regulate it? The panel answered that AI should be judged by its outcomes, and the law has to be clear about what is acceptable and what is not.
- Is it possible to align AI with human intention? Can we create ethical AI, rather than merely talking about using AI ethically? The panel answered that we must first understand how to behave ethically ourselves: step one is to focus on our own ethical behaviour, and step two is to bring that ethical dimension into software and technologies. Given that AI can outperform humans at many tasks, we have to learn it. Aligning AI with human intention and creating ethical AI remains a challenge; the focus should be on ethical behaviour both in humans and in the development of AI technologies.
Conclusion
The G20 Conference on Crime and Security shed light on the intertwined relationship between the darknet and cryptocurrency and the challenges it presents to cybersecurity. The discussions emphasised the need for effective regulations, privacy-security balance, AI integration, and cross-border collaboration to tackle the rising cybercrime activities associated with the darknet and cryptocurrency. Addressing these challenges will require the combined efforts of governments, law enforcement agencies, technology companies, and individuals committed to building a safer digital landscape.