#FactCheck - Edited Video Falsely Claims to Show an Attack on PM Netanyahu in the Israeli Senate
Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate. However, the CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that splice two unrelated videos together to misrepresent the original footage. The original footage has no connection to any attack on Mr. Netanyahu. The claim is therefore false and misleading.

Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search on keyframes of the video. The search led us to several legitimate sources showing an attack on an ethnic Turkish leader in Bulgaria; none of them involved Prime Minister Benjamin Netanyahu or any attack on him.

We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis concluded with 68.0% confidence that the video had been edited. The tools identified "substantial evidence of manipulation," particularly the change in graphics quality between segments and the break in visual continuity where the overall background environment changes.



Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident taking place. No credible reports link the Israeli PM to such an attack, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is old footage that has been edited. Analysis with AI detection tools confirms the video was manipulated by splicing edited footage, and no official source reports any such incident. The CyberPeace Research Team therefore confirms that the video was manipulated using video editing technology, making the claim false and misleading.
- Claim: An attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
In the digital landscape, technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. These technologies offer users convenience in performing many tasks and can assist individuals and businesses. Regulatory mechanisms have also been established for their ethical and reasonable use. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and various cyber frauds. This misuse of advanced technologies has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of creating realistic-looking images and videos that are in fact generated by machine-learning algorithms. Because it is easily accessible, fraudsters misuse it to commit cyber crimes, deceiving and scamming people with fake audio, images, and videos that appear authentic.
Voice cloning
Audio can be deepfaked too: a voice clone closely resembles a real person's voice but is in fact generated by deepfake technology. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. The scammer, using AI deepfake technology, impersonated the face of the former colleague and asked for financial help of ₹40,000.
Uttarakhand Police issues warning on the rising trend of AI-based scams
Recently, the Uttarakhand Police’s Special Task Force (STF) issued a warning acknowledging the spread of AI-based scams, such as deepfake and voice-cloning scams, targeting innocent people. The police expressed concern that several such incidents have already been reported. Cybercriminals exploit these technologies to convince victims that they are talking to close relatives or friends, when in fact they are hearing voice clones or watching deepfake video calls. The criminals then ask for immediate financial help, which ultimately leads to financial losses for the victims.
Tamil Nadu Police Issues advisory on deepfake scams
To deceive people for financial gain, cyber criminals misuse deepfake technologies. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams: fraudsters are creating highly convincing images, videos, and voice clones to defraud innocent people. The advisory recommends limiting the personal data you share online and adjusting your privacy settings, and it urges people to promptly report any suspicious activity or cyber crime to the 1930 helpline or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to compromised video quality: deepfake videos often have poor quality or unusual blurring, which calls their genuineness into question. Deepfake videos also often loop or freeze unusually, indicating the content may be fabricated.
- Whenever you receive a request for immediate financial help, verify the situation by directly contacting the person on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, posing sudden emergencies and demanding financial support immediately so the victim has no time to think before making a decision.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Pay attention to anything suspicious, and avoid sharing voice notes with unknown users, because scammers might use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, report it on the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) or via the 1930 toll-free helpline, where you can report cyber fraud, including financial crimes.
Conclusion
Cybercriminals leverage AI-powered technologies to commit cyber crimes such as deepfake scams and voice-clone scams that lure innocent people. Hence, there is a need for awareness and caution. We should stay vigilant about the growing incidence of AI-based cyber scams and follow the best practices above to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/
As technological advancements continue to shape the future, the rise of artificial intelligence brings significant potential benefits but also raises concerns about the spread of misinformation. Recognising the need for accountability on both ends, on 5th May, during the three-day World News Media Congress 2025 in Kraków, Poland, the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) announced five core principles for their joint initiative, News Integrity in the Age of AI. The initiative aims to foster dialogue and cooperation between media organisations and technology platforms, and the principles are to serve as a code of practice for all participants. With thousands of public and private media outlets around the world joining the effort, the initiative highlights the shared responsibility of AI developers to ensure that AI systems are trustworthy, safe, and supportive of a reliable news ecosystem. It represents a global call to action to uphold the integrity of news amid this major influx of content and to curb the growing challenge of misinformation.
The five core principles released focus on:
1. Content originators must authorise the use of their content before it is ingested into generative AI tools and models
2. Third parties that benefit from high-quality and up-to-date news content must recognise its value
3. Accuracy and attribution must be a focus, making the original sources of news apparent to the public and promoting transparency
4. The plural nature of news perspectives should be harnessed, which will help AI-driven tools perform better, and
5. Tech companies are invited to an open dialogue with news outlets, facilitating collaboration to develop standards of transparency, accuracy, and safety.
As this initiative provides a unified platform to address and deliberate on issues affecting the integrity of news, there are also some other technical ways in which misinformation in news caused by AI can be curbed:
1. Encourage the use of smaller generative AI models: large language models (LLMs) are trained on a vast range of topics, but most businesses need only the subset that is relevant to them. A narrower context to source from allows better content navigation and reduces the chance of mix-ups.
2. Fighting AI hallucination: hallucination causes generative AI systems (such as chatbots and computer-vision tools) to present nonsensical or inaccurate outputs, as the system perceives objects or patterns that are imperceptible or non-existent to human observers. It occurs partly because the system prioritises language fluency while stitching together information from different sources. One way to address this is retrieval augmented generation (RAG), which connects the model to external data sources, such as academic journals or a company's organisational data, helping it produce more accurate, domain-specific content.
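To make the RAG idea concrete, here is a minimal sketch of the retrieval-then-grounding pattern. Everything in it is illustrative: the toy corpus, the word-overlap scoring (a stand-in for the vector embeddings real systems use), and the prompt format are assumptions, not any specific vendor's API.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase word set -- a crude stand-in for real embeddings."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by word overlap with the query.
    Production systems would use embedding similarity instead."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from
    the retrieved context, which is what reduces hallucination."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below; say 'unknown' otherwise.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical in-memory corpus standing in for an external data source.
corpus = [
    "The EBU and WAN-IFRA announced the News Integrity in the Age of AI initiative.",
    "Deepfake videos often show quality changes between spliced segments.",
    "RAG connects a language model to external, domain-specific data sources.",
]

prompt = build_grounded_prompt("What does RAG connect a model to?", corpus)
print(prompt)
```

The key design point is that the generative model never answers from its parametric memory alone: the retrieved passages travel inside the prompt, so the answer can be checked against named sources.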
Conclusion
This global call to action marks an important step toward fostering unified efforts to combat misinformation. The set of principles introduced is designed to be adaptable, providing a flexible framework that can evolve to address emerging challenges (through dialogue and discussion), including issues like copyright infringement. While AI offers powerful tools to support the news industry, it is essential to emphasise that human oversight remains crucial. These technological advancements are meant to enhance and augment the work of journalists, not replace it, ensuring that the core values of journalism, such as accuracy and integrity, are preserved in the age of AI.
References
● https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns
● https://trilateralresearch.com/responsible-ai/using-responsible-ai-to-combat-misinformation
● https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation
● https://2024.jou.ufl.edu/page/ai-and-misinformation
● https://techxplore.com/news/2025-05-ai-counter-misinformation-fact-based.html
● https://www.advanced-television.com/2025/05/06/media-outlets-call-for-ai-companies-news-integrity-protection/
● https://www.ibm.com/think/insights/ai-misinformation

Introduction
In recent times, the evolution of cyber law has picked up momentum, driven primarily by new and emerging technologies. As with any other area of law, it is strengthened and substantiated by judicial precedents and judgements. Recently, the Delhi High Court heard a matter between Tata Sky and LinkedIn, in which the court asked LinkedIn to present the details of its Chief Grievance Officer and its SoP as per the Intermediary Guidelines 2021.
Furthermore, in other news, officials from the RBI and MeitY have been summoned by the Parliamentary Standing Committee to address the rising issues of cybersecurity and cybercrime in India. This comes on the very first day of this year's monsoon session of Parliament. As we move towards a digital India, addressing these concerns is of utmost importance to safeguard Indian netizens.
The Issue
Tata Sky changed its name to Tata Play last year and has since entered the OTT sector as well. As the rebranding took place, the company was cautious about anyone using the name Tata Sky in a bad light. Tata Play found that many people on LinkedIn had listed multiple years of work experience at Tata Sky, claims that no recruiter can verify, amounting to misappropriation of the brand's name. Officials of Tata Play reported this to LinkedIn multiple times, but no significant action was taken. This led to a dispute between the two brands, and a matter was filed before the Hon’ble Delhi High Court. The court took due cognisance of the issue and, in accordance with the Intermediary Guidelines 2021, directed LinkedIn to publish the details of its Chief Grievance Officer in the public domain and to share its SoP for the redressal of issues and grievances. The guidelines make it mandatory for all intermediaries to set up a dedicated office in India and appoint a Chief Grievance Officer responsible for effective and efficient redressal of platform-related offences and grievances within the stipulated period.
The job platform has also been ordered to share its SoPs and the various requirements and safety checks for users creating profiles on LinkedIn. LinkedIn's policy is focused on both users and the companies on the platform, in order to create synergy between the two.
RBI and MeitY Officials at Parliament
As we go deeper into cyberspace, especially after the pandemic, we have seen an exponential rise in cybercrime. Based on the statistics cited, 4 out of 10 people were victims of cybercrime in 2022-23, an estimated 70% of the population has been subjected to direct or indirect cybercrime, and 85% of Indian children have been subjected to cyberbullying in some form or other.
The government has taken note of the rising number of such crimes and threats, and the Parliamentary Committee has summoned officials from the RBI and the Ministry of Electronics and Information Technology to Parliament on July 20, 2023, the first day of the monsoon session. This comes at a crucial time, as the Digital Personal Data Protection Bill is to be tabled in Parliament this session, marking a revamp of the legislation and regulation of Indian cyberspace. As emerging technologies increasingly surround us, it is pertinent to create legal safeguards and practices to protect Indian netizens at large.
Conclusion
The legal crossroads between Tata Sky and LinkedIn will go a long way towards establishing the mandates under the Intermediary Guidelines in the form of legal precedent. Compliance with the rule of law is the most crucial aspect of any democracy; hence the separation of powers between the Legislature, Judiciary, and Executive has been fundamental in safeguarding basic and fundamental rights. Similarly, the summoning of RBI and MeitY officials to Parliament shows the transparency of the system and reflects the true spirit of democracy, which will contribute towards creating a safe and secure Indian cyberspace.