#FactCheck: Debunking the Edited Image Claim of PM Modi with Hafiz Saeed
Executive Summary:
A photoshopped image circulating online suggests Prime Minister Narendra Modi met with militant leader Hafiz Saeed. The actual photograph features PM Modi greeting former Pakistani Prime Minister Nawaz Sharif during a surprise diplomatic stopover in Lahore on December 25, 2015.
The Claim:
A widely shared image on social media purportedly shows PM Modi meeting Hafiz Saeed, a declared terrorist. The claim implies that Modi is acting against India's interests or is aligned with terrorists.

Fact Check:
Our research and a reverse image search revealed that the Press Information Bureau (PIB) had tweeted about the visit on 25 December 2015, noting that PM Narendra Modi was warmly welcomed by then-Pakistani PM Nawaz Sharif in Lahore. The tweet included several images of the original meeting between Modi and Sharif, taken from various angles. On the same day, PM Modi also posted a tweet stating that he had spoken with Nawaz Sharif and extended birthday wishes. Additionally, there are no credible reports of any meeting between Modi and Hafiz Saeed, further confirming that the viral image is digitally altered.


In further research, we found an identical photo showing former Pakistani Prime Minister Nawaz Sharif in place of Hafiz Saeed. This post was shared by Hindustan Times on X on 26 December 2015, confirming that the viral image was manipulated.
Conclusion:
The viral image claiming to show PM Modi with Hafiz Saeed is digitally manipulated. A reverse image search and official posts from the PIB and PM Modi confirm the original photo was taken during Modi’s visit to Lahore in December 2015, where he met Nawaz Sharif. No credible source supports any meeting between Modi and Hafiz Saeed, clearly proving the image is fake.
- Claim: A viral image shows PM Modi meeting Hafiz Saeed
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The Central Electricity Authority (CEA) has released the Draft Central Electricity Authority (Cyber Security in Power Sector) Regulations, 2024, inviting comments from stakeholders, including the general public, to be submitted by 10 September 2024. The new regulation is intended to make India's power sector more cyber-resilient, enabling it to counter emerging cyber threats and safeguard the nation's power infrastructure.
Key Highlights of the CEA’s New (Cyber Security in Power Sector) Regulations, 2024
- The Central Electricity Authority has framed the ‘Cyber Security in Power Sector Regulations, 2024’ in exercise of the powers conferred by sub-section (1) of Section 177 of the Electricity Act, 2003, in order to make regulations for measures relating to cyber security in the power sector.
- The scope of the regulation entails that it will apply to all Responsible Entities, Regional Power Committees, Appropriate Commissions, Appropriate Governments, Associated Power Sector Government Organizations, Training Institutes recognized by the Authority, the Authority itself, and Vendors.
- One key aspect of the proposed regulation is the establishment of a dedicated Computer Security Incident Response Team (CSIRT) for the power sector. This team will coordinate a unified cyber defense strategy throughout the sector, establishing security frameworks, and serving as the main agency for handling incident response and recovery. The CSIRT will also be responsible for creating/developing Standard Operating Procedures (SOPs), security policies, and best practices for incident response activities in consultation with CERT-In and NCIIPC. The detailed roles and responsibilities of CSIRT are outlined under Chapter 2 of the said regulations.
- All Responsible Entities in the power sector, as mentioned under the scope of the regulation, are mandated to appoint a Chief Information Security Officer (CISO) and an alternate CISO, who must be Indian nationals and senior management employees. The regulations specify that these officers must report directly to the CEO/Head of the Responsible Entity, emphasizing the critical nature of the CISO's role in safeguarding the nation's power grid assets.
- All Responsible Entities shall establish an Information Security Division (ISD) dedicated to ensuring cyber security, headed by the CISO and operational around the clock. The schedule under the regulation specifies that the minimum workforce for setting up an ISD is four officers, including the CISO, plus four officers/officials for shift operations; sufficient workforce and infrastructure support shall be ensured for the ISD. The detailed functions and responsibilities of the ISD are outlined under Chapter 5, Regulation 10. Furthermore, the ISD shall be staffed by a sufficient number of officers holding valid certificates of successful completion of domain-specific cyber security courses.
- The regulation obliges entities to have a defined, documented, and maintained Cyber Security Policy approved by the Board or the Head of the entity, as well as a Cyber Crisis Management Plan (CCMP) approved by higher management.
- As regards upskilling and empowerment, the regulation advocates organising periodic cyber security awareness programs and cyber security exercises, including mock drills and tabletop exercises.
CyberPeace Policy Outlook
The CyberPeace Policy & Advocacy Vertical has submitted detailed recommendations on the proposed ‘Cyber Security in Power Sector Regulations, 2024’ to the Central Electricity Authority, Government of India. We have advised on various aspects of the regulation, including its harmonisation with existing rules issued by CERT-In and NCIIPC, since it needs to be clarified which set of guidelines will prevail in case of any discrepancy. Additionally, we advised incorporating or modifying specific provisions under the regulation for a more robust framework. We have also emphasized legal mandates and penalties for non-compliance, so that these regulations not only act as guiding principles but also provide stringent measures in case of non-compliance.
Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurrect, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough actions against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
Monetisation Concerns
The financial dynamics of Meta's ecosystem expose concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company

Executive Summary:
A video is circulating on social media claiming to be footage of the aftermath of Iran's missile strikes on Israel. The video shows destruction, damaged infrastructure, and panicked civilians. After digital verification, visual inspection, and frame-by-frame analysis, we have determined that the video is fake. It consists of AI-generated clips and is not related to any real incident.

Claim:
The viral video claims that a recent military strike by Iran resulted in the destruction of parts of Israel, following an initial missile attack launched by Iran. The footage appears current and depicts significant destruction of buildings and widespread chaos in the streets.

FACT CHECK:
We conducted research on the viral video to determine whether it was AI-generated. We broke the video into individual still frames, and upon close examination, several frames showed odd-shaped visual features, abnormal body proportions, and flickering movements that do not occur in real footage. We then ran several still frames through reverse image search sites to see whether they had appeared before. The search results revealed that several clips in the video had appeared previously, in separate and unrelated contexts, indicating that they are neither recent nor original.
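As an aside, reverse image search engines generally match pictures by perceptual fingerprints rather than exact bytes, which is why re-used clips surface even after recompression. The sketch below is a minimal illustration of one such fingerprint, the well-known difference hash (dHash); it is not the proprietary algorithm any particular search site uses, and the "frames" are synthetic 2D lists of grayscale values so the example stays self-contained.

```python
# Minimal dHash sketch. Real pipelines decode video frames with libraries
# such as OpenCV or Pillow; here a frame is just a 2D list of grayscale
# pixel values (an assumption made to keep the example dependency-free).

def dhash(pixels, hash_size=8):
    """Compute a 64-bit dHash from a grayscale image (2D list of ints).

    The image is shrunk to (hash_size+1) x hash_size columns by rows via
    nearest-neighbour sampling; each bit then records whether a pixel is
    brighter than its right-hand neighbour.
    """
    h, w = len(pixels), len(pixels[0])
    cols, rows = hash_size + 1, hash_size
    small = [
        [pixels[r * h // rows][c * w // cols] for c in range(cols)]
        for r in range(rows)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    # Number of differing bits; a small distance means near-duplicate frames.
    return bin(a ^ b).count("1")

# Two synthetic 16x16 "frames": a horizontal gradient and a uniformly
# brightened copy. dHash only compares neighbouring pixels, so a uniform
# brightness shift leaves the hash unchanged.
frame1 = [[(x * 16) % 256 for x in range(16)] for _ in range(16)]
frame2 = [[min(255, v + 3) for v in row] for row in frame1]
print(hamming(dhash(frame1), dhash(frame2)))  # → 0
```

A mirrored or otherwise different frame would hash far apart, which is what lets near-duplicate lookup distinguish re-used footage from genuinely new material.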

While examining the Instagram profile, we noticed that the account frequently shares visually dramatic AI content that appears digitally created. Many earlier posts from the same page include scenes that are unrealistic, such as wrecked aircraft in desolate areas or buildings collapsing in unnatural ways. In the current video, for instance, the fighter jets shown have multiple wings, which is not technically or aerodynamically possible in real life. The profile’s bio, which reads "Resistance of Artificial Intelligence," suggests that the page intentionally focuses on sharing AI-generated or fictional content.

We also ran the viral post through Tenorshare.AI for deepfake detection, and the result indicated a 94% likelihood of AI generation. All findings from our research establish that the video is synthetic and unrelated to any event in Israel, debunking a false narrative propagated on social media.

Conclusion:
Our research found that the video is fake, contains AI-generated imagery, and is not related to any real missile strike or destruction in Israel. The content appears designed to fuel panic and misinformation amid already heightened geopolitical tension. We urge viewers not to share unverified information and to rely on trusted sources. During sensitive international developments, the dissemination of fake imagery can spread fear, confusion, and misinformation on a global scale.
- Claim: Real Footage of Iran’s Missile Strikes on Israel
- Claimed On: Social Media
- Fact Check: False and Misleading