# FactCheck - Debunking the AI-Generated Image of an Alleged Israeli Army Dog Attack
Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical inconsistencies, its watermark ("IN.VISUALART"), and other visual anomalies. Although several news channels have reported on the underlying incident, the viral image was not taken during the actual event. This underscores the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.

Fact Check:
Upon receiving the posts, we closely analyzed the image and found discrepancies commonly seen in AI-generated images: the watermark “IN.VISUALART” is clearly visible, and the elderly woman’s hand appears distorted.

We then checked the image with two AI-image detection tools, True Media and the Content at Scale AI detector. Both found potential AI manipulation in the image.

With both tools indicating AI manipulation, we then ran keyword searches for news related to the viral photo. Although we found reports on the incident, none provided a credible source for the image.

The photograph circulating online has no credible source. The viral image is therefore AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news channels, the incident itself did occur, but the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging-technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is proceeding at a rapid pace: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These figures are cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes) in arguments for regulation. Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the governance vacuum that surrounds emerging technologies such as AI. It should be noted that the GDPR, or General Data Protection Regulation, has been a success, influencing data privacy laws globally and setting off a domino effect of privacy regulations worldwide. This precedent underscores the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details the AI Act's rules for providers of general-purpose AI models, including those posing systemic risks. The European AI Office facilitated the drawing up of the code; the process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established by the EU AI Act, was published on 14 November 2024. As per Article 56 of the Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation grounded in product safety and relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies, such as the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society, and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates developers, deployers, and users of AI to meet mandates on transparency, risk management, and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular GPAI applications include ChatGPT and other foundation models such as Copilot from Microsoft, BERT from Google, and Llama from Meta AI, all of which are under constant development and upgrading. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical and governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to bring greater transparency to what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden it places on MSMEs and startups could hinder innovation, and its voluntary nature raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/

AI and other technologies are advancing rapidly, enabling the swift spread of information, and with it, misinformation. LLMs have their advantages, but they also come with drawbacks, such as confidently delivered yet inaccurate responses stemming from limitations in their training data. Evidence-driven retrieval systems aim to address this issue by incorporating factual information during response generation, preventing hallucination and producing accurate responses.
What is Retrieval-Augmented Response Generation?
Evidence-driven Retrieval-Augmented Generation (RAG) is an AI framework that improves the accuracy and reliability of large language models (LLMs) by grounding them in external knowledge bases. RAG systems combine the generative power of LLMs with a dynamic information-retrieval mechanism. Standard AI models rely solely on pre-trained knowledge and pattern recognition to generate text; RAG instead pulls in credible, up-to-date information from external sources during response generation. By integrating real-time evidence retrieval with AI-generated responses, RAG combines large-scale data with reliable sources to combat misinformation. It follows this pattern (a minimal code sketch follows the list):
- Query Identification: When misinformation is detected or a query is raised.
- Evidence Retrieval: The AI searches databases for relevant, credible evidence to support or refute the claim.
- Response Generation: Using the evidence, the system generates a fact-based response that addresses the claim.
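
To make these three steps concrete, here is a minimal, illustrative Python sketch of the loop. It is not a description of any particular production system: the keyword-overlap retriever stands in for a real vector-search index, `generate` is a stub for whatever LLM API a deployment would actually call, and the `Evidence` records and URLs are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    text: str
    source_url: str

def retrieve(query: str, evidence_db: list[Evidence], k: int = 3) -> list[Evidence]:
    """Rank stored passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        evidence_db,
        key=lambda e: len(terms & set(e.text.lower().split())),
        reverse=True,
    )[:k]

def generate(prompt: str) -> str:
    # Stub standing in for a real LLM completion call.
    return f"[model response grounded in]\n{prompt}"

def respond(claim: str, evidence_db: list[Evidence]) -> str:
    # 1. Query identification: treat the flagged claim itself as the query.
    # 2. Evidence retrieval: pull the most relevant stored passages.
    evidence = retrieve(claim, evidence_db)
    context = "\n".join(f"- {e.text} (source: {e.source_url})" for e in evidence)
    # 3. Response generation: ground the prompt in the retrieved evidence and
    #    ask the model to cite sources so users can verify the answer.
    prompt = (
        f"Claim: {claim}\nEvidence:\n{context}\n"
        "Using only the evidence above, state whether the claim is "
        "supported and cite the sources."
    )
    return generate(prompt)

# Hypothetical evidence store and claim, for illustration only.
db = [
    Evidence("Health agency records show vaccine X completed phase-3 "
             "trials in 2021.", "https://example.org/trial-report"),
    Evidence("Fact-checkers found the viral infographic about vaccine X "
             "was doctored.", "https://example.org/factcheck"),
]
print(respond("Vaccine X never completed clinical trials", db))
```

Swapping `retrieve` for an embedding-based search over a curated corpus and `generate` for a real model call would turn this sketch into a working pipeline; the structure of the three steps stays the same.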
How is Evidence-Driven RAG the Key to Fighting Misinformation?
- RAG systems can integrate the latest data, providing information on recent scientific discoveries.
- The retrieval mechanism allows RAG systems to pull specific, relevant information for each query, tailoring the response to a particular user’s needs.
- RAG systems can provide sources for their information, enhancing accountability and allowing users to verify claims.
- For queries requiring specific or specialised knowledge, RAG systems can excel where traditional models might struggle.
- By accessing a diverse range of up-to-date sources, RAG systems may offer more balanced viewpoints, unlike traditional LLMs.
Policy Implications and the Role of Regulation
Alongside its potential to enhance content accuracy, RAG intersects with important regulatory considerations. India has one of the largest internet user bases globally, and the challenges of managing misinformation there are particularly pronounced.
- Indian regulators, such as MeitY, play a key role in guiding technology regulation. Similar to the EU's Digital Services Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate platforms to publish compliance reports detailing actions against misinformation. Integrating RAG systems can help ensure accurate, legally accountable content moderation.
- Collaboration among companies, policymakers, and academia is crucial for RAG adaptation, addressing local languages and cultural nuances while safeguarding free expression.
- Ethical considerations are vital to prevent social unrest, requiring transparency in RAG operations, including evidence retrieval and content classification. This balance can create a safer online environment while curbing misinformation.
Challenges and Limitations of RAG
While RAG holds significant promise, it has its challenges and limitations.
- Ensuring that RAG systems retrieve evidence only from trusted and credible sources is a key challenge (a minimal filtering sketch follows this list).
- For RAG to be effective, users must trust the system; sceptics of content moderation may resist accepting its responses.
- Generating a response too quickly may compromise the quality of the evidence, while taking too long can allow misinformation to spread unchecked.
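
One common mitigation for the source-credibility challenge is to gate retrieval behind a domain allowlist. The sketch below is illustrative only: the domains listed are assumptions, and a real deployment would maintain a vetted, regularly audited registry of credible sources.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a production system would maintain a vetted,
# regularly audited registry of credible sources.
TRUSTED_DOMAINS = {"who.int", "reuters.com", "pib.gov.in"}

def is_trusted(source_url: str) -> bool:
    """Accept evidence only if its source domain is on the allowlist."""
    host = urlparse(source_url).netloc.lower()
    # Match the registered domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# Example: filter candidate sources before they reach the generator.
sources = ["https://www.reuters.com/article/x", "https://blog.example.com/y"]
print([u for u in sources if is_trusted(u)])  # keeps only the Reuters link
```

Such a filter would slot in between the retrieval and generation steps of the earlier sketch, so that only allowlisted evidence ever reaches the prompt.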
Conclusion
Evidence-driven retrieval systems, such as Retrieval-Augmented Generation, represent a pivotal advancement in the ongoing battle against misinformation. By integrating real-time data and credible sources into AI-generated responses, RAG enhances the reliability and transparency of online content moderation. It addresses the limitations of traditional AI models and aligns with regulatory frameworks aimed at maintaining digital accountability, as seen in India and globally. However, the successful deployment of RAG requires overcoming challenges related to source credibility, user trust, and response efficiency. Collaboration between technology providers, policymakers, and academic experts can help navigate these challenges and create a safer, more accurate online environment. As digital landscapes evolve, RAG systems offer a promising path forward, ensuring that technological progress is matched by a commitment to truth and informed discourse.
References
- https://experts.illinois.edu/en/publications/evidence-driven-retrieval-augmented-response-generation-for-onlin
- https://research.ibm.com/blog/retrieval-augmented-generation-RAG
- https://medium.com/@mpuig/rag-systems-vs-traditional-language-models-a-new-era-of-ai-powered-information-retrieval-887ec31c15a0
- https://www.researchgate.net/publication/383701402_Web_Retrieval_Agents_for_Evidence-Based_Misinformation_Detection

Executive Summary:
A viral clip showing Indian batsman Virat Kohli endorsing an online casino and guaranteeing a Rs 50,000 jackpot within three days has been proven fake. In the clip, which is accompanied by manipulated captions, Kohli appears to admit to involvement in the launch of an online casino during an interview with Graham Bensinger, but this is not true. Our investigation showed that the original interview, published on YouTube by Bensinger in late 2023, contains no such statements by Kohli. In addition, the AI deepfake analysis tool Deepware labelled the viral video a deepfake.

Claims:
The viral video claims that cricket star Virat Kohli is promoting an online casino and assuring users of the site that they can make a profit of Rs 50,000 within three days. However, the CyberPeace Research Team has found that the video is a deepfake, not the original, and there is no credible evidence of Kohli's participation in any such endorsement. Many users are sharing the video with misleading titles across different social media platforms.

Fact Check:
As soon as we learned of the claim, we ran keyword searches for any credible news report about Virat Kohli promoting a casino app and found nothing. We then performed a reverse image search on a frame of Kohli wearing a black T-shirt, as seen in the video, and landed on a YouTube video by Graham Bensinger, an American journalist. The viral clip was taken from this original video.

In the original video, Kohli discusses his childhood, his diet, his cricket training, his marriage, and similar topics, but says nothing about launching a casino app.
On close scrutiny of the viral video, we noticed inconsistencies in the lip-sync and voice. We then ran the clip through the Deepware deepfake detection tool, which flagged it as a deepfake.

We therefore conclude that the viral video is a deepfake and the statement attributed to Kohli is false.
Conclusion:
The viral video claiming that cricketer Virat Kohli endorses an online casino and guarantees winnings of Rs 50,000 within three days is fake. This incident demonstrates the need to verify facts and sources before believing any information, and to remain sceptical of deepfakes and other AI-driven techniques now being used to spread misinformation.