# Fact Check: False Claims of a Houthi Attack on Israel's Ashkelon Power Plant
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring footage inaccurately presented as showing Houthi rebels attacking a power plant in Ashkelon, Israel. The misleading content has circulated widely amid escalating geopolitical tensions, but investigation shows the footage actually originates from an earlier incident in Saudi Arabia. The episode underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the circulating video does not show an attack on the Ashkelon power plant in Israel; it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
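The keyframe-matching idea behind a reverse image search can be illustrated with a simple perceptual "average hash": visually similar frames produce nearly identical bit strings, while unrelated footage does not. This is an illustrative sketch only; the matching Google Lens actually performs is proprietary and far more sophisticated.

```python
import numpy as np

def average_hash(image: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Perceptual hash: downscale to hash_size x hash_size by block
    averaging, then threshold each cell against the overall mean."""
    h, w = image.shape
    blocks = image[: h - h % hash_size, : w - w % hash_size]
    means = blocks.reshape(hash_size, blocks.shape[0] // hash_size,
                           hash_size, blocks.shape[1] // hash_size).mean(axis=(1, 3))
    return (means > means.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

# Synthetic grayscale frames: a noisy re-encode of the same scene
# versus entirely unrelated footage.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
reencoded = np.clip(frame + rng.normal(0, 0.02, frame.shape), 0.0, 1.0)
unrelated = rng.random((64, 64))

d_same = hamming(average_hash(frame), average_hash(reencoded))
d_diff = hamming(average_hash(frame), average_hash(unrelated))
```

Because the hash survives noise and re-encoding, a fact-checker can match a viral clip's keyframes against archived footage even when the upload has been recompressed.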

This incident highlights the risks of misinformation during sensitive geopolitical events. Before sharing viral posts, take a moment to verify the facts; misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video has been misrepresented and actually shows a 2022 incident in Saudi Arabia, underscoring the importance of caution when sharing unverified media.
- Claim: The video shows a massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (formerly Twitter)
- Fact Check: False and Misleading
Related Blogs
Introduction
A Pew Research Center survey conducted in September 2023 among 1,453 U.S. teens aged 13-17 found that a majority of that age group uses TikTok (63%), Snapchat (60%), and Instagram (59%). In India, 13-19 year-olds make up 31% of social media users, according to a 2021 report by Statista. This widespread use has made it easier for young users to inadvertently or deliberately access adult content on social media platforms.
Brief Analysis of Meta’s Proposed AI Age Classifier
The classifier can be seen as a step towards safer, moderated content for teen users. Age restrictions matter because teens do not always have the cognitive maturity to judge what content is appropriate to share or consume on these platforms. They also need to understand platform policies, including the fact that nothing can ever be completely erased from the internet.
Unrestricted access to social media exposes teens to potentially harmful or inappropriate content, raising concerns about their safety and mental well-being. Meta's recent measures aim to address this, though striking a balance between engagement, protection, and privacy remains essential.
The AI-based Age Classifier proposed by Meta classifies users by age and places likely minors in the 'Teen Account' category, which has built-in limits on who can contact them and what content they see. According to Meta, teens under 16 will need parental permission to change these settings.
Meta's Proposed Solution: AI-Powered Age Classifier
This tool uses Artificial Intelligence (AI) to analyse users' online behaviour and other profile information to estimate their age. It considers factors such as who follows the user, what kind of content they interact with, and even comments like birthday posts from friends. If the classifier detects that a user is likely under 18, it automatically switches them to a "Teen Account", which carries more restricted privacy settings, such as limits on who can message the user and filters on the type of content they can see.
The adult classifier is anticipated to be deployed next year and will scan for users who may have lied about their age. All users found to be under 18 will be placed in teen accounts; 16-17 year-olds will be able to adjust these settings if they want more flexibility, while younger teens will need parental permission. The effort is part of a broader strategy to protect teens from potentially harmful content on social media. This matters all the more today, as privacy violations can be penalised under legal instruments such as the GDPR, the DPDP Act, and COPPA.
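The decision flow described above can be sketched roughly as follows. The feature names (follower median age, teen-content ratio, birthday-comment hints), the weights, and the thresholds are all illustrative assumptions for the sake of the sketch, not Meta's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    stated_age: int
    follower_median_age: float   # hypothetical signal: typical age of followers
    teen_content_ratio: float    # hypothetical signal: share of teen-oriented interactions
    birthday_age_hints: list[int] = field(default_factory=list)  # ages implied by birthday posts

def estimate_age(p: Profile) -> float:
    """Blend the hypothetical signals into a crude age estimate."""
    hints = p.birthday_age_hints
    hint_age = sum(hints) / len(hints) if hints else p.stated_age
    behavioural = p.follower_median_age - 10 * p.teen_content_ratio
    return 0.5 * hint_age + 0.5 * behavioural

def assign_account_type(p: Profile) -> dict:
    """Under-18s get a Teen Account; under-16s also need parental
    permission to relax its restrictions."""
    age = estimate_age(p)
    if age >= 18:
        return {"teen_account": False, "settings_locked": False}
    return {"teen_account": True, "settings_locked": age < 16}

# A profile whose stated age (21) conflicts with its behavioural signals.
likely_minor = Profile(stated_age=21, follower_median_age=15.0,
                       teen_content_ratio=0.8, birthday_age_hints=[14, 15])
```

The key design point the sketch captures is that the stated age is treated as just one signal among several, so a user who lies at sign-up can still be reclassified from their behaviour.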
Policy Implications and Compliances
Meta's AI Age Classifier addresses growing concerns over teen safety on social media by categorising users by age, restricting minors' access to adult content, and enforcing parental controls. However, its reliance on behavioural tracking could affect the online privacy of teen users, so Meta's approach must align with applicable jurisdictional laws. In India, the recently enacted DPDP Act, 2023 prohibits behavioural tracking of and targeted advertising to children. Accuracy and privacy are the two main concerns Meta should anticipate when rolling out the classifier.
Meta emphasises transparency to build user trust, and customisable parental controls empower families to manage teens' online experiences. While the initiative reflects Meta's commitment to creating a safer, regulated digital space for young users worldwide, it must also align properly with regional policy and legal standards. The proposed classifier aims to protect teens from adult content, reassure parents by letting them curate acceptable content, and strengthen platform integrity by ensuring a safer environment for teen users on Instagram.
Conclusion
Meta's AI Age Classifier, while promising to enhance teen safety by placing restrictions and parental controls on accounts categorised as 'teen accounts', must also align with global regulations such as the GDPR and, in India, the DPDP Act. The tool offers reassurance to parents and aims to foster a safer social media environment for teens. To support accurate age estimation and transparency, policy should focus on refining AI methods to minimise errors and on clear disclosures about data handling, and collaborative international standards will be essential as privacy laws evolve. Meta's initiative prioritises youth protection and public trust in AI-driven moderation, but it must balance these advanced measures against users' online privacy.
References
- https://familycenter.meta.com/in/our-products/instagram/
- https://www.indiatoday.in/technology/news/story/instagram-will-now-take-help-of-ai-to-check-if-kids-are-lying-about-their-age-on-app-2628464-2024-11-05
- https://www.bloomberg.com/news/articles/2024-11-04/instagram-plans-to-use-ai-to-catch-teens-lying-about-age
- https://tech.facebook.com/artificial-intelligence/2022/6/adult-classifier/
- https://indianexpress.com/article/technology/artificial-intelligence/too-young-to-use-instagram-metas-ai-classifier-could-help-catch-teens-lying-about-their-age-9658555/

Introduction
The Australian Parliament has passed the world's first legislation banning social media for children under 16, citing risks to children's mental and physical well-being and the need to contain misogynistic influences on them. Debate surrounding the legislation is raging, as it is the first proposal of its kind and would set a precedent for how other countries assess their own laws regarding children, social media platforms, and their priorities.
The Legislation
Currently trialling an age-verification system (such as biometrics or government identification), the legislation mandates a complete ban on under-16s using social media. The law provides no exemptions of any kind, whether for pre-existing accounts or parental consent. With federal elections approaching, it seeks to address parental concerns about protecting children from threats lurking on social media platforms, and every step in this regard is being watched with keen interest.
The Australian Prime Minister, Anthony Albanese, emphasised that the onus of taking responsible steps to prevent access falls on the social media platforms rather than on parents or their children. Platforms like TikTok, X, and Meta's Facebook and Instagram all come under the purview of this legislation.
CyberPeace Overview
The issue of a complete age-based ban raises a few concerns:
- It is challenging to enforce digitally, as children may find ways to circumvent such restrictions. An example is the Cinderella Law (formally the Shutdown Law), which the Government of South Korea implemented in 2011 to reduce online gaming and promote healthy sleeping habits among children by prohibiting those under 16 from accessing online games between 12 A.M. and 6 A.M. Several drawbacks rendered it less effective over time: children used the login IDs of adults, switched to VPNs, or simply moved to offline gaming. Parents also felt the government was infringing on the right to privacy, and the restrictions covered only online PC games, not mobile phones. The law lost relevance and was repealed in 2021.
- The concept of age verification inherently requires collecting more personal data and inadvertently opens up concerns regarding individual privacy.
- A ban is likely to reduce pressure on tech and social media companies to make their services safe, child-friendly environments.
Conclusion
Social media platforms can opt for an approach focused on creating a safe online environment for children even as restrictions continue to be debated. An impactful yet balanced step towards protecting children on social media while respecting privacy is the U.K.'s Age-Appropriate Design Code (UK AADC), prepared by the ICO (Information Commissioner's Office), the U.K. data protection regulator, as the U.K.'s implementation of the European Union's General Data Protection Regulation (GDPR). It follows a safety-by-design approach for children. As we move towards a predominantly online future, we must continue striving to create a safe space for children and to address issues in innovative ways.
References
- https://indianexpress.com/article/technology/social/australia-proposes-ban-on-social-media-for-children-under-16-9657544/
- https://www.thehindu.com/opinion/op-ed/should-children-be-barred-from-social-media/article68661342.ece
- https://forumias.com/blog/debates-on-whether-children-should-be-banned-from-social-media/
- https://timesofindia.indiatimes.com/education/news/why-banning-kids-from-social-media-wont-solve-the-youth-mental-health-crisis/articleshow/113328111.cms
- https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code
- https://www.techinasia.com/s-koreas-cinderella-law-finally-growing-up-teens-may-soon-be-able-to-play-online-after-midnight-again
- https://wp.towson.edu/iajournal/2021/12/13/video-gaming-addiction-a-case-study-of-china-and-south-korea/
- https://www.dailysabah.com/world/asia-pacific/australia-passes-worlds-1st-total-social-media-ban-for-children

AI and other technologies are advancing rapidly, enabling the rapid spread of information, and misinformation along with it. LLMs have their advantages, but they also come with drawbacks, such as confident yet inaccurate responses caused by limitations in their training data. Evidence-driven retrieval systems aim to address this by incorporating factual information during response generation, reducing hallucination and grounding answers in accurate sources.
What is Retrieval-Augmented Response Generation?
Evidence-driven Retrieval-Augmented Generation (RAG) is an AI framework that improves the accuracy and reliability of large language models (LLMs) by grounding them in external knowledge bases. RAG systems combine the generative power of LLMs with a dynamic information-retrieval mechanism: where standard models rely solely on pre-trained knowledge and pattern recognition, RAG pulls in credible, up-to-date information from external sources during response generation, combining large-scale generation with reliable evidence to combat misinformation. It follows the pattern of:
- Query Identification: When misinformation is detected or a query is raised.
- Evidence Retrieval: The AI searches databases for relevant, credible evidence to support or refute the claim.
- Response Generation: Using the evidence, the system generates a fact-based response that addresses the claim.
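The three steps above can be sketched as a toy pipeline. The corpus, the word-overlap scoring, and the templated "generation" are simplified stand-ins for a real document store, a trained retriever, and an LLM.

```python
# Toy evidence corpus: (source_id, text) pairs standing in for a knowledge base.
CORPUS = [
    ("saudi-report-2022", "A 2022 drone strike hit a Saudi Aramco facility in Abqaiq."),
    ("israel-grid-note", "The Ashkelon power plant in Israel reported no attacks."),
    ("yemen-brief", "Houthi operations are largely confined to Yemen and Saudi Arabia."),
]

def retrieve(query: str, k: int = 2):
    """Evidence Retrieval: rank documents by word overlap with the query
    (a crude stand-in for a dense or keyword retriever)."""
    q = set(query.lower().split())
    scored = sorted(CORPUS,
                    key=lambda doc: len(q & set(doc[1].lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query: str) -> str:
    """Response Generation: 'generate' a grounded answer by citing
    the retrieved evidence instead of free-form text."""
    evidence = retrieve(query)
    cited = " ".join(f"{text} [{source}]" for source, text in evidence)
    return f"Claim: {query}\nEvidence: {cited}"

# Query Identification: a claim flagged for checking.
answer = generate("Did Houthi rebels attack the Ashkelon power plant in Israel?")
```

A real deployment would replace the overlap score with a trained retriever over a live index and the template with an LLM prompt that cites the retrieved passages, but the grounding step stays the same.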
How is Evidence-Driven RAG the key to Fighting Misinformation?
- RAG systems can integrate the latest data, providing information on recent scientific discoveries.
- The retrieval mechanism allows RAG systems to pull specific, relevant information for each query, tailoring the response to a particular user’s needs.
- RAG systems can provide sources for their information, enhancing accountability and allowing users to verify claims.
- Especially for those requiring specific or specialised knowledge, RAG systems can excel where traditional models might struggle.
- By accessing a diverse range of up-to-date sources, RAG systems may offer more balanced viewpoints, unlike traditional LLMs.
Policy Implications and the Role of Regulation
Alongside its potential to enhance content accuracy, RAG intersects with important regulatory considerations. India has one of the largest internet user bases globally, and the challenges of managing misinformation there are particularly pronounced.
- Indian regulators, such as MeitY, play a key role in guiding technology regulation. Similar to the EU's Digital Services Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate platforms to publish compliance reports detailing actions against misinformation. Integrating RAG systems can help ensure accurate, legally accountable content moderation.
- Collaboration among companies, policymakers, and academia is crucial for RAG adaptation, addressing local languages and cultural nuances while safeguarding free expression.
- Ethical considerations are vital to prevent social unrest, requiring transparency in RAG operations, including evidence retrieval and content classification. This balance can create a safer online environment while curbing misinformation.
Challenges and Limitations of RAG
While RAG holds significant promise, it has its challenges and limitations.
- Ensuring that RAG systems retrieve evidence only from trusted and credible sources is a key challenge.
- For RAG to be effective, users must trust the system; sceptics of content moderation may resist accepting its responses.
- Generating a response too quickly may compromise the quality of the evidence, while taking too long can allow misinformation to spread unchecked.
Conclusion
Evidence-driven retrieval systems, such as Retrieval-Augmented Generation, represent a pivotal advancement in the ongoing battle against misinformation. By integrating real-time data and credible sources into AI-generated responses, RAG enhances the reliability and transparency of online content moderation. It addresses the limitations of traditional AI models and aligns with regulatory frameworks aimed at maintaining digital accountability, as seen in India and globally. However, successful deployment requires overcoming challenges related to source credibility, user trust, and response efficiency, and collaboration between technology providers, policymakers, and academic experts can help navigate them to create a safer and more accurate online environment. As digital landscapes evolve, RAG systems offer a promising path forward, ensuring that technological progress is matched by a commitment to truth and informed discourse.
References
- https://experts.illinois.edu/en/publications/evidence-driven-retrieval-augmented-response-generation-for-onlin
- https://research.ibm.com/blog/retrieval-augmented-generation-RAG
- https://medium.com/@mpuig/rag-systems-vs-traditional-language-models-a-new-era-of-ai-powered-information-retrieval-887ec31c15a0
- https://www.researchgate.net/publication/383701402_Web_Retrieval_Agents_for_Evidence-Based_Misinformation_Detection