# Factcheck: False Claims of Houthi Attack on Israel’s Ashkelon Power Plant
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring a video that inaccurately asserts that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from an earlier incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the video circulating online does not show an attack on the Ashkelon power plant in Israel; instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
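For readers who want to replicate this kind of check, the sketch below is an illustration only: it assumes OpenCV is installed and uses a hypothetical filename. It extracts still keyframes from a clip at regular intervals so they can be uploaded to a reverse image search tool such as Google Lens.

```python
# Illustrative sketch: sample one frame every few seconds from a viral clip
# so the stills can be run through a reverse image search (e.g. Google Lens).
# Assumes OpenCV (pip install opencv-python); "viral_clip.mp4" is hypothetical.
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the video
INTERVAL_SECONDS = 2            # sample one frame every 2 seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
step = int(fps * INTERVAL_SECONDS)

frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)  # write a still image
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} keyframes for manual reverse image search.")
```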

This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts; misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of being cautious with unverified media and of relying on trusted fact-checking sources before sharing viral posts.
- Claim: The video shows massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

On 22nd October 2024, Jyotiraditya Scindia, Union Minister for Communications, launched the Department of Telecommunications’ (DoT) International Incoming Spoofed Calls Prevention System. It was introduced as part of efforts to prevent international fraudulent calls that enable cyber crimes. A recent PIB report claims the system has been effective, contributing to a 90% reduction in the number of spoofed international calls, with instances falling from 1.35 crore to 6 lakh within two months of its launch.
International spoofed calls are calls that masquerade as numbers originating from within the country when displayed on the target's mobile screen. This is done by manipulating the calling line identity (CLI), commonly known as the phone number. Previously reported cases mention such spoofed calls being used to conduct financial scams, impersonate government officials to carry out "digital arrests", and induce panic. Instances of callers posing as TRAI officials threatening to disconnect numbers, or as narcotics officials claiming to have found drugs or contraband in couriers, are also rampant.
International Incoming Spoofed Calls Prevention System
As was addressed in the 2024 Budget, the system was previously called the Centralised International Out Roamer (CIOR), and the DoT was allocated Rs. 38.76 crore for it. The Digital Intelligence Unit (DIU) under the DoT is another project that aims to investigate and research the fraudulent use of telecom resources, including messages, scams, and spam; its budget has been increased from Rs. 50 crore to Rs. 85 crore.
The International Incoming Spoofed Calls Prevention System was implemented in two phases. The first phase operated at the level of the telephone companies (telcos): each Indian Telecom Service Provider (TSP) can verify its own subscribers and Indian SIMs through its international long-distance (ILD) network. When a user with an Indian number travels abroad, roaming is activated and all calls reach the TSP's ILD network, which allows the TSP to check whether numbers starting with +91 are genuinely calling from abroad or are being spoofed. However, a TSP can only verify numbers issued on its own network and not those of other TSPs. This gap was addressed in the second phase, in which the DIU of the DoT and the TSPs built an integrated system so that a centralised database could be used to check for genuine subscribers.
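The actual DoT/TSP implementation is not public, but the logic described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the registry, field names, and function are hypothetical and merely model the idea of checking a +91 CLI arriving on an international gateway against a centralised roaming database.

```python
# Illustrative sketch only: the real DoT/TSP systems are not public.
# Models the phase-2 idea of a centralised roaming database shared across TSPs.
# All names and data below are hypothetical.

# Centralised registry: which +91 subscribers are currently roaming abroad.
ROAMING_REGISTRY = {
    "+919812345678": True,   # genuinely abroad; calls may legitimately arrive via an ILD gateway
    "+919876543210": False,  # in India; a +91 call arriving from an ILD gateway is suspect
}

def is_spoofed(cli: str, arrived_via_ild_gateway: bool) -> bool:
    """Flag a call whose CLI claims to be an Indian number (+91) but which
    enters through an international long-distance (ILD) gateway while the
    centralised database shows the subscriber is not roaming abroad."""
    if not cli.startswith("+91") or not arrived_via_ild_gateway:
        return False  # only +91 CLIs arriving internationally are checked here
    return not ROAMING_REGISTRY.get(cli, False)

# Example: a call displays +919876543210 but arrives on an international trunk.
print(is_spoofed("+919876543210", arrived_via_ild_gateway=True))  # True -> likely spoofed
```

In practice such a check would run inside the TSPs' switching infrastructure against live subscriber and roaming data rather than a static dictionary, but the principle of cross-checking the claimed CLI against a shared, centralised record is the same.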
CyberPeace Outlook
A press release on 23rd December 2024 encouraged the TSPs to label incoming international calls as "International Call" on the receiver's mobile screen. Some of them have already started adding such labels and are sending awareness messages informing their subscribers of tips on staying safe from scams. There are also applications available online that help identify callers and their locations; however, these depend on the users' own efforts and carry only moderate trust value. At the level of the public, blocking unknown international numbers, not calling them back, and awareness of country codes are encouraged. Coordinated and updated efforts on the part of the Government and the TSPs are much appreciated at a time when scammers continue to find new ways to commit cyber crimes using telecommunication resources.
References
- https://www.hindustantimes.com/india-news/jyotiraditya-scindia-launches-dot-system-to-block-spam-international-calls-101729615441509.html
- https://www.business-standard.com/india-news/centre-launches-system-to-block-international-spoofed-calls-curb-fraud-124102300449_1.html
- https://www.opindia.com/2024/12/number-of-spoofed-international-calls-used-in-cyber-crimes-goes-down-by-90-in-2-months/
- https://www.cnbctv18.com/technology/telecom/telecom-department-anti-spoofed-international-calls-19529459.htm
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2067113
- https://pib.gov.in/PressReleasePage.aspx?PRID=2087644
- https://www.hindustantimes.com/india-news/display-international-call-for-calls-from-abroad-to-curb-scams-dot-to-telecos-101735050551449.html

Introduction
In an era where digitalisation is transforming every facet of life, ensuring that personal data is protected becomes crucial. The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) is a significant step taken by the Indian Parliament, setting forth a comprehensive framework for the protection of digital personal data. The Draft Digital Personal Data Protection Rules, 2025 have recently been released for public consultation to supplement the Act and ensure its smooth implementation once finalised. While the draft rules have certain positive aspects, several gaps and aspects still require attention. The DPDP Act, 2023 recognises the individual’s right to protect their personal data, providing control over the processing of personal data for lawful purposes. The Act applies to data available in digital form as well as data that is not in digital form but is digitised subsequently. While the Act is intended to give individuals (Data Principals) wide control over their personal information, its impact on vulnerable groups such as persons with disabilities requires closer scrutiny.
Person with Disabilities as data principal
The term ‘data principal’ has been defined under Section 2(j) of the DPDP Act as the person to whom the personal data relates, including a person with a disability; a lawful guardian acting on behalf of such a person is also brought within the ambit of this definition. As a result, a lawful guardian acting on behalf of a person with a disability has the same rights and responsibilities as a data principal under the Act.
- Section 9 of the DPDP Act, 2023 states that before processing the personal data of a person with a disability who has a lawful guardian, the data fiduciary must obtain verifiable consent from that guardian, ensuring proper protection of the person with disability's data privacy.
- The data principal has the right to access information about personal data under Section 11 which is being processed by the data fiduciary.
- Section 12 provides the right to correction and erasure of personal data by making a request in a manner prescribed by the data fiduciary.
- Under Section 13, the data principal has a right to grievance redressal in respect of any act or omission by the data fiduciary or the consent manager in the performance of their obligations.
- Under Section 14, the data principal has the right to nominate any other person to exercise the rights provided under the Act in case of death or incapacity.
Provision of consent and its implication
The three key components of Consent that can be identified under the DPDP Act, are:
- Explicit and Informed Consent: Consent given for the processing of data by the data principal, or by a lawful guardian in the case of persons with disabilities, must be clear, free and informed as per Section 6 of the Act. The data fiduciary must specify an itemised description of the personal data required, along with the specified purpose and a description of the goods or services that would be provided by such processing of data (Rule 3 of the Draft Digital Personal Data Protection Rules).
- Verifiable Consent: Section 9 of the DPDP Act provides that the data fiduciary needs to obtain the verifiable consent of the lawful guardian before processing any personal data of such a person with a disability. Rule 10 of the Draft Rules obligates the data fiduciary to adopt measures to ensure that the consent given by the lawful guardian is verifiable before the data is processed (a purely illustrative sketch of what such a consent record might capture follows this list).
- Withdrawal of Consent: The data principal or such lawful guardian may withdraw consent for the processing of data at any point by making a request to the data fiduciary.
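Neither the Act nor the draft rules prescribe a technical format for recording verifiable guardian consent. Purely as an illustration of the elements such a record might capture, the minimal sketch below uses entirely hypothetical field names.

```python
# Purely illustrative: neither the DPDP Act nor the draft rules prescribe a
# format for a verifiable guardian-consent record. All field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class GuardianConsentRecord:
    data_principal_id: str          # person with disability on whose behalf consent is given
    guardian_id: str                # lawful guardian providing consent
    guardian_verification: str      # e.g. "guardianship certificate verified on <date>"
    purposes: List[str]             # itemised purposes (cf. Section 6 / draft Rule 3)
    personal_data_items: List[str]  # itemised description of the personal data collected
    given_at: datetime = field(default_factory=datetime.utcnow)
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Record withdrawal of consent by the data principal or guardian."""
        self.withdrawn_at = datetime.utcnow()

    @property
    def is_active(self) -> bool:
        return self.withdrawn_at is None
```

A standardised record along these lines, kept with an audit trail, is one way a data fiduciary could demonstrate that guardian consent was verified and remains in force; the concerns below explain why the absence of any such prescribed format is problematic.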
Although the Act includes certain provisions aimed at the inclusion of persons with disabilities, the way these provisions are framed raises several concerns.
Concerns related to provisions for Persons with Disabilities under the DPDP Act:
- Lack of definition of ‘persons with disabilities’: Neither the DPDP Act nor the Draft Rules defines the term ‘persons with disabilities’. This creates confusion as to which categories of disability are included and up to what degree. The Rights of Persons with Disabilities Act, 2016 clearly defines ‘person with benchmark disability’, ‘person with disability’ and ‘person with disability having high support needs’. This categorisation is essential to determine the extent to which a person with a disability needs a lawful guardian, and it is missing under the DPDP Act.
- Lack of autonomy: Though the definition of data principal includes persons with disabilities, the decision-making authority has been given to their lawful guardians. Because of the lack of clarity in the definition of ‘persons with disabilities’, the provision creates ambiguity for people who have a lower degree of disability and are capable of making their own decisions, yet are left with no autonomy over the processing of their personal data.
- Lack of safeguards against abuse of power by lawful guardians: Once verified by the data fiduciary, the lawful guardian can make decisions for the person with a disability. This raises concerns regarding the potential abuse of power by lawful guardians in relation to the handling of personal data. The DPDP Act does not provide any specific protection against such abuse.
- Difficulty in verification of consent: The consent obtained by the data fiduciary must be verified, but the process adopted for verification is left to the discretion of the data fiduciary under Rule 10 of the Draft Rules. The authenticity of consent is difficult to determine, as verification is a complex process that lacks a standard format. Moreover, with ongoing technological advancements, it will be challenging to establish whether the information provided to verify consent is actually true.
CyberPeace Recommendations
The DPDP Act, 2023 is a major step towards a more comprehensive data protection framework; however, the provisions relating to persons with disabilities and the powers given to lawful guardians acting on their behalf still need clarity and refinement within the DPDP Act framework.
- Consonance of the DPDP Act with the Rights of Persons with Disabilities (RPWD) Act, 2016: The RPWD and DPDP Acts should supplement each other and can be used to clear the existing ambiguities. For instance, the definition of ‘persons with disabilities’ under the RPWD Act can be adopted in the context of the DPDP Act, 2023.
- There must also be mechanisms and safeguards within the Act to prevent abuse of power by the lawful guardian. In cases of suspected abuse, the affected individual should have the option to file a complaint with the Data Protection Board, which can then take the necessary action to determine whether abuse of power has occurred.
- Regulatory oversight and additional safeguards are required to ensure that consent is obtained in a manner that respects the rights of all individuals, including those with disabilities.
References:
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.meity.gov.in/writereaddata/files/259889.pdf
- https://www.indiacode.nic.in/bitstream/123456789/15939/1/the_rights_of_persons_with_disabilities_act%2C_2016.pdf
- https://www.deccanherald.com/opinion/consent-disability-rights-and-data-protection-3143441
- https://www.pacta.in/digital-data-protection-consent-protocols-for-disability.pdf
- https://www.snrlaw.in/indias-new-data-protection-regime-tracking-updates-and-preparing-for-compliance/

Introduction
The G7 nations, a group of the world’s most powerful economies, have recently turned their attention to the critical issues of cybercrime and Artificial Intelligence (AI). The G7 Summit has provided an essential platform for discussing the threats and crimes arising from AI and from gaps in cybersecurity. These nations have united to share their expertise, resources, diplomatic efforts and strategies in the fight against cybercrime. In this blog, we shall look into the recent developments and initiatives undertaken by the G7 nations, exploring their joint efforts to combat cybercrime and navigate the evolving landscape of artificial intelligence. We shall also explore new and emerging trends in cybersecurity, providing insights into ongoing challenges and the innovative approaches adopted by the G7 nations and the wider international community.
G7 Nations and AI
These nations have launched cooperative efforts and measures to combat cybercrime successfully. They intend to increase their collective capacity to detect, prevent, and respond to cyber attacks by exchanging intelligence, best practices, and expertise. Through information-sharing platforms, collaborative training programmes, and joint exercises, the G7 nations are attempting to develop a strong cybersecurity architecture capable of countering increasingly complex cyber attacks.
The G7 Summit provided an important forum for in-depth debate on the role of artificial intelligence (AI) in cybersecurity. Recognising AI’s transformational potential, the G7 nations have participated in extensive discussions to investigate its advantages and address the related concerns, so as to guarantee responsible research and use. The nations also recognise the ethical, legal, and security considerations of deploying AI in cybersecurity.
Worldwide Rise of Ransomware
High-profile ransomware attacks have drawn global attention, emphasising the need to combat this expanding threat. These attacks have harmed organisations of all sizes and industries, leading to data breaches, operational outages, and, in some circumstances, the loss of sensitive information. The implications of such attacks go beyond financial loss, frequently resulting in reputational harm, legal penalties, and service disruptions that affect consumers, clients, and the public. Cybercriminals have adopted a multi-faceted approach, combining techniques such as spear-phishing, exploit kits, and supply-chain compromises to obtain unauthorised access to networks and spread the ransomware. This degree of expertise and flexibility presents a substantial challenge to organisations attempting to defend against such attacks.

Focusing On AI and Upcoming Threats
During the G7 Summit, one of the key topics for discussion was the role of AI (Artificial Intelligence) in shaping the future. Leaders and policymakers discuss the benefits and dangers of AI adoption in cybersecurity. Recognising AI’s revolutionary capacity, they investigate its potential to improve defence capabilities, predict future threats, and secure vital infrastructure. Furthermore, the G7 countries emphasise the necessity of international collaboration in reaping the advantages of AI while reducing the hazards. They recognise that cyber dangers transcend national borders and must be combated together. Collaboration in areas such as exchanging threat intelligence, developing shared standards, and promoting best practices is emphasised to boost global cybersecurity defences. By emphasising the role of AI in cybersecurity, the G7 Summit hopes to set a global agenda that encourages responsible AI research and deployment. The summit’s sessions chart a path for maximising AI’s promise while tackling the problems and dangers connected with its implementation.
As the G7 countries traverse the complicated convergence of AI and cybersecurity, their emphasis on collaboration, responsible practices, and innovation lays the groundwork for international collaboration in confronting growing cyber threats. The G7 countries aspire to establish robust and secure digital environments that defend essential infrastructure, protect individuals’ privacy, and encourage trust in the digital sphere by collaboratively leveraging the potential of AI.
Promoting Responsible AI Development and Usage
The G7 conference focuses on developing frameworks that encourage ethical AI development. This includes fostering transparency, accountability, and fairness in AI systems. The emphasis is on eliminating biases in data and algorithms and ensuring that AI technologies are inclusive and do not perpetuate or magnify existing societal imbalances.
Furthermore, the G7 nations recognise the necessity of privacy protection in the context of AI. Because AI systems frequently rely on massive volumes of personal data, summit speakers emphasise the importance of stringent data privacy legislation and protections. Discussions centre around finding the correct balance between using data for AI innovation, respecting individuals’ privacy rights, and protecting data security. In addition to responsible development, the G7 meeting emphasises the importance of responsible AI use. Leaders emphasise the importance of transparent and responsible AI governance frameworks, which may include regulatory measures and standards to ensure AI technology’s ethical and legal application. The goal is to defend individuals’ rights, limit the potential exploitation of AI, and retain public trust in AI-driven solutions.
The G7 nations support collaboration among governments, businesses, academia, and civil society to foster responsible AI development and use. They stress the significance of sharing best practices, exchanging information, and developing international standards to promote ethical AI concepts and responsible practices across boundaries. By fostering responsible AI development and usage, the G7 nations hope to shape the global AI environment in a way that prioritises human values, protects individual rights, and develops trust in AI technology. They work together to guarantee that AI is a force for good while reducing risks and resolving social issues related to its implementation.
Challenges on the way
While the G7 countries are committed to combating cybercrime and promoting responsible AI development, they confront several hurdles in their efforts. Some of them are:
A Rapidly Changing Cyber Threat Environment: Cybercriminals’ strategies and methods are always developing, as is the nature of cyber threats. The G7 countries must keep up with new threats and ensure their cybersecurity safeguards remain effective and adaptable.
Cross-Border Coordination: Cybercrime knows no borders, and successful cybersecurity necessitates international collaboration. On the other hand, coordinating activities among nations with various legal structures, regulatory environments, and agendas can be difficult. Harmonising rules, exchanging information, and developing confidence across states are crucial for effective collaboration.
Talent Shortage and Skills Gap: Cybersecurity and AI expertise requires highly qualified personnel, but skilled professionals in these fields are in short supply. The G7 nations must attract and nurture talent, provide training programmes, and support research and innovation to narrow the skills gap.
Keeping Up with Technological Advancements: Technology changes at a rapid rate, and cyber-attacks become more complex. The G7 nations must ensure that their laws, legislation, and cybersecurity plans stay relevant and adaptive to keep up with future technologies such as AI, quantum computing, and IoT, which may both empower and challenge cybersecurity efforts.
Conclusion
To combat cyber threats effectively, support responsible AI development, and establish a robust cybersecurity ecosystem, the G7 nations must constantly analyse and adjust their strategy. By aggressively tackling these concerns, the G7 nations can improve their collective cybersecurity capabilities and defend their citizens’ and global stakeholders’ digital infrastructure and interests.