# FactCheck: Viral Humanoid Robot Video Actually Filmed at the Museum of the Future
Executive Summary
A video circulating widely on social media shows a man interacting with a humanoid robot and using abusive language, after which the robot asks him to maintain politeness. Several users shared the clip claiming that the incident took place during a recent AI summit in New Delhi. The video triggered strong reactions online, with some users demanding legal action against the individual. However, research by CyberPeace found the claim to be misleading.
Claim
Social media users claimed that the viral video showing a man abusing a robot was recorded during an AI summit in New Delhi, India.

Fact Check
To verify the claim, we conducted a reverse image search of the individual seen in the video. The search led us to an Instagram post uploaded by a Pakistani account identifying the individual as Kashif Zameer.

Further keyword searches helped us locate his Instagram profile, where the same video had been uploaded on February 17, 2026. The post included hashtags such as “Dubai,” indicating the actual location of the incident. The profile also lists Lahore, Pakistan, as the user’s location and describes him as a businessman and social media personality.

To confirm the location shown in the video, we conducted additional searches using keywords such as “Dubai” and “humanoid robot.” The research revealed that the robot featured in the clip is “Ameca,” located at the Museum of the Future in Dubai.

Conclusion
The viral claim is false. The video is not related to any AI summit held in New Delhi. The incident occurred in Dubai, and the person seen in the video is not an Indian citizen.
Related Blogs

Executive Summary:
A video of India’s Defence Minister Rajnath Singh is going viral on social media. The post claims that Rajnath Singh is openly supporting Israeli-American attacks against Iran. In the video, he can allegedly be heard saying that Prime Minister Narendra Modi had visited Israel before the war began and warned Tehran that disturbing peace would have serious consequences.
Research by CyberPeace found that the viral video is a deepfake created using Artificial Intelligence (AI). Rajnath Singh has not made any such statement about Iran or the Israel-US conflict.
Claim
A Facebook user “Sheikh Sadeque Ali” shared the video on March 2, 2026. The caption of the post reads, “Indian Defence Minister Rajnath Singh is supporting Israel’s attack on Iran. This clearly shows that India supports the killing of Muslims.”
In the viral video, Rajnath Singh appears to say in English: “Prime Minister Modi’s visit to Israel before the attack on Iran reflects India’s solidarity with its strategic partner… He warned Tehran that hostile actions would have serious consequences for regional peace.”

Fact Check:
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. During the research, we found the original video on Rajnath Singh’s official YouTube channel, uploaded on November 23, 2025. In the original video, Rajnath Singh was addressing a Sindhi community conference in Delhi. During his speech, he spoke about Sindhi culture and the history of Partition; he did not mention Israel, Iran or any Middle East conflict during the entire program.
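A reverse image search of this kind boils down to matching near-duplicate frames. As a purely illustrative sketch (not the actual tooling used in this fact check, and far simpler than the features real search engines use), the pure-Python difference hash below fingerprints a grayscale frame so that visually similar frames yield nearly identical 64-bit hashes:

```python
# Illustrative sketch of perceptual hashing, the kind of near-duplicate
# fingerprinting that underlies reverse image search. All names here are
# our own; real search engines use far richer visual features.

def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour downscale of a 2D grayscale pixel grid."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

def dhash(pixels, hash_size=8):
    """Difference hash: one bit per adjacent-pixel brightness comparison."""
    small = resize_nearest(pixels, hash_size + 1, hash_size)
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means likely the same image."""
    return bin(a ^ b).count("1")
```

A keyframe from the viral clip and its match in the original upload would sit only a few bits apart in Hamming distance, while unrelated frames typically differ in a large fraction of the 64 bits.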

Upon closely examining the viral video, we observed technical inconsistencies between the lip movements and the audio (lip-sync discrepancies), which strongly suggest that the video was generated using AI. To verify this, we analysed the clip using AI-detection tools; Hive Moderation indicated a 99% probability that the video is AI-generated.

Conclusion:
Our research found that the viral video of Rajnath Singh is a deepfake. He has not made any statement supporting Israel or opposing Iran. The original video is from a Sindhi community event in Delhi, which has been digitally altered using AI to spread a misleading claim.

Introduction
Airline hoax threats are fabricated alarms targeting airlines and airports, intended to disrupt normal day-to-day operations and create public panic. Because the security of public settings is of utmost importance, they make a vulnerable target. The consequences of such threats include financial losses for the parties concerned, heightened security protocols both immediately afterwards and in preparation, flight delays and diversions, emergency landings, and passenger inconvenience and emotional distress. The motivation behind such threats is malicious intent of varying degrees, breaching national security, integrity and safety. However, apart from the government, airline and social media authorities, which already have certain measures in place to tackle such issues, the public, through responsible consumption and verified sharing of information, has an equal role in preventing the spread of misinformation and panic.
Hoax Airline Threats
The recent spate of bomb hoax threats to Indian airlines has seen false reports targeting more than 500 flights since October 14, 2024, the majority traced to posts from anonymous or unverified social media handles. Recent incidents include a hoax threat posted on X on October 30, 2024 against Air India's flights from Delhi to Mumbai via Indore, and a threat against a flight from Kathmandu, Nepal to Delhi on November 2, 2024.
As per reports by the Indian Express, steps are being taken to address such incidents by tweaking the bomb-threat assessment criteria, with authorities such as the Bomb Threat Assessment Committees (BTAC) categorising threats as either specific or non-specific. Other factors considered include whether a VIP is on board and whether the threat was posted from an anonymous account with a history of similar posts.
CyberPeace Recommendations
- For Public
- Question sensational information: The public should scrutinise the information they consume, not only to keep themselves safe but also to act responsibly towards other citizens. Exercise caution before sharing alarming messages, posts and pieces of information.
- Recognising credible sources: Rely only on trustworthy, verified sources when sharing information, especially when it comes to topics as serious as airline safety.
- Avoiding Reactionary Sharing: Sharing in a state of panic can contribute to the chaos created upon receiving unverified news, hence, it is suggested to refrain from reactionary sharing.
- For the Authorities & Agencies
- After a series of hoax bomb threats, the Government of India issued an advisory to social media platforms calling on them to remove such malicious content. Obligations such as the prompt removal of harmful content or disabling access to such unlawful information are specified under the IT Rules, 2021. Platforms are also obligated under the Bharatiya Nagarik Suraksha Sanhita, 2023 to report certain offences occurring on their services. The Ministry of Civil Aviation’s action plan includes making hoax bomb threats a cognisable offence and placing offenders on a no-fly list as a penalty, among other measures.
These plans also include steps such as:
- Introduction of other corrective measures to be taken against bad actors (similar to the no-fly list).
- Introduction of a reporting mechanism which is specific to such threats.
- Focus on promoting awareness, digital literacy, critical thinking and fact-checking resources, as well as encouraging the public to report such hoaxes.
Conclusion
Preventing the spread of airline threat hoaxes is a collective responsibility, involving public engagement and ownership to strengthen safety measures and build trust in the overall safety ecosystem (here: airline agencies, government authorities and the public). As the government and agencies take measures to prevent such incidents, the public should continue to share information only from and on verified and trusted portals, and is encouraged to remain vigilant and responsible while consuming and sharing information.
References
- https://indianexpress.com/article/business/flight-bomb-threats-assessment-criteria-serious-9646397/
- https://www.wionews.com/world/indian-airline-flight-bound-for-new-delhi-from-nepal-receives-hoax-bomb-threat-amid-rise-in-similar-incidents-772795
- https://www.newindianexpress.com/nation/2024/Oct/26/centre-cautions-social-media-platforms-to-tackle-misinformation-after-hoax-bomb-threat-to-multiple-airlines
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/amid-rising-hoax-bomb-threats-to-indian-airlines-centre-issues-advisory-to-social-media-companies/articleshow/114624187.cms

Introduction
US President Biden has signed a key executive order to manage the risks posed by Artificial Intelligence (AI). The presidential order, signed on 30 October 2023, sets rules for a rapidly growing technology that holds great potential but also poses risks, and is the strongest action the US government has taken to date on AI safety and security. The order requires developers of the most powerful AI models to share their safety test results with the government before releasing their products to the public. It also calls for developing standards for the ethical use of AI and for detecting AI-generated content and labelling it as such. As the technology rapidly advances, AI poses risks such as displacing human workers, spreading misinformation and stealing people’s data. The White House has made clear that this is not just America’s problem: the US needs to work with the world to set standards and ensure the responsible use of AI. It is also urging Congress to pass comprehensive privacy legislation. The order includes new safety guidelines for AI developers, standards for disclosing AI-generated content, and requirements for federal agencies that use AI. In related recent events, India has reported its biggest-ever data breach, in which the data of 815 million Indians was leaked from the Indian Council of Medical Research (ICMR), the country’s premier medical research institution.
Key highlights of the presidential order
The presidential order requires developers to share safety test results. It focuses on developing standards, tools and tests to ensure safe AI. It aims to protect against AI-enabled fraud, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, guard against the risks of using AI to engineer dangerous materials, provide guidelines for detecting AI-generated content, and establish overall standards for AI safety and security.
Online content authentication and labelling
The Biden administration has asked the Department of Commerce to set guidelines to help authenticate content coming from the government, so that the American people can trust official documents as genuinely issued by it. Alongside content authentication, the administration has also proposed labelling AI-generated content, allowing people to differentiate between an authentic piece of content and something that has been manipulated or generated using AI.
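Cryptographically, content authentication of this sort usually rests on digital signatures or message authentication codes. The snippet below is a minimal, hedged sketch of the idea using an HMAC tag; the actual Department of Commerce guidelines are not detailed in the source and may instead rely on public-key signatures and provenance metadata.

```python
# Minimal sketch of content authentication with an HMAC tag.
# Illustrative only: the key name and functions here are our own invention,
# not part of any federal guideline.
import hashlib
import hmac

SECRET_KEY = b"issuer-signing-key"  # hypothetical secret held by the issuer

def sign(document: bytes) -> str:
    """Return a hex tag binding the document to the issuer's key."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(document), tag)
```

Any alteration to the document invalidates the tag. In practice, public-key signatures extend this idea so that anyone can verify authenticity without holding the issuer’s secret.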
ICMR Breach
On 31 October 2023, an American intelligence and cybersecurity agency flagged the biggest-ever data breach, putting the data of 81.5 crore Indians at risk of making its way to dark web markets. The agency reported that a ‘threat actor’ known as ‘pwn001’ shared a thread on Breach Forums, which describes itself as the ‘premier Databreach discussion and leaks forum’, claiming a breach affecting 81.5 crore Indians. As of today, ICMR has not issued any official statement, but it has informed the government, and the Central Bureau of Investigation (CBI) will be taking on the investigation and apprehending the cybercriminals behind the attack. In a post on X (formerly Twitter), ‘pwn001’ claimed that Aadhaar and passport information, along with personal data such as names, phone numbers and addresses, had been compromised, and that the data was extracted from the COVID-19 test details of citizens registered with ICMR. This poses a serious threat to Indian netizens, who could be targeted by cybercrime from anywhere in the world.
Conclusion:
The US presidential order on AI is a move towards making Artificial Intelligence safe and secure. It is a major step by the Biden administration towards protecting both Americans and the wider world from the considerable dangers of AI. The order requires developing standards, tools and tests to ensure AI safety, and the US administration will work with allies and global partners, including India, to develop a strong international framework to govern the development and responsible use of AI. With the passing of legislation such as the Digital Personal Data Protection Act, 2023, it is pertinent that the Indian government work towards precautionary and preventive measures to protect Indian data. As cyber laws continue to evolve, we need to keep an eye on emerging technologies and update our digital routines and hygiene to stay safe and secure.
References:
- https://m.dailyhunt.in/news/india/english/lokmattimes+english-epaper-lokmaten/biden+signs+landmark+executive+order+to+manage+ai+risks-newsid-n551950866?sm=Y
- https://www.hindustantimes.com/technology/in-indias-biggest-data-breach-personal-information-of-81-5-crore-people-leaked-101698719306335-amp.html