#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been identified as digitally manipulated. The images circulating on social media were produced with AI manipulation tools; the original photograph, found on several credible websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, Pennsylvania, near Pittsburgh. One attendee was killed and two others were critically injured before the Secret Service neutralised the shooter. The circulating photos with fabricated smiles stirred up suspicion, and the CyberPeace Research Team's verification debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in those reports were different.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using TrueMedia, an AI image detection tool.


We then ran the image through a second detector, Content at Scale's AI image detection tool, which also found it to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
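The side-by-side comparison above can also be done programmatically. One common, simple technique (not necessarily what the detection tools named above use) is perceptual "average hashing": each image is reduced to a bit pattern, and the Hamming distance between the patterns of the original and the viral copy flags how much has changed. The sketch below, with tiny synthetic 4x4 grayscale grids standing in for real photos, is purely illustrative:

```python
# Illustrative sketch of comparing an original photo with a viral variant via
# an "average hash": each pixel becomes 1 if brighter than the image mean,
# else 0, and the Hamming distance between two hashes measures alteration.
# The 4x4 grids below are synthetic stand-ins for real grayscale images.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200,  50,  50],
    [200, 200,  50,  50],
    [ 30,  30, 180, 180],
    [ 30,  30, 180, 180],
]
# A manipulated copy: one region of the image has been brightened.
altered = [row[:] for row in original]
altered[2] = [220, 220, 180, 180]

h_orig = average_hash(original)
h_alt = average_hash(altered)
print(hamming(h_orig, h_alt))  # a nonzero distance flags the edit
```

A distance of zero suggests a near-duplicate, while larger distances point to substantive edits; real fact-checking pipelines combine such hashes with reverse image search and dedicated AI-artifact detectors.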
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Related Blogs
Introduction
In the labyrinthine world of digital currencies, a new chapter unfolds as India intensifies its scrutiny over the ethereal realm of offshore cryptocurrency exchanges. With nuance and determination that virtually mirrors the Byzantine complexities of the very currencies they seek to regulate, Indian authorities embark on a course of stringent oversight, bringing to the fore an ever-evolving narrative of control and compliance in the fintech sector. The government's latest manoeuvre—a directive to Apple Inc. to excise the apps of certain platforms, including the colossus Binance, from its App Store in India—signals a crescendo in the nation's efforts to rein in the unbridled digital bazaar that had hitherto thrived in a semi-autonomous expanse of cyberspace.
The directive, with ramifications as significant and intricate as the cryptographic algorithms that underpin the blockchain, stems from the Ministry of Electronics and Information Technology, which has cast eight exchanges, including Bitfinex, HTX, and Kucoin, into the shadows, rendering their apps as elusive as the Higgs boson in the vast App Store universe. The movement of these exchanges from visibility to obscurity in the digital storefront is cloaked in secrecy, with sources privy to this development remaining cloaked in anonymity, their identities as guarded as the cryptographic keys that secure blockchain transactions.
The Contention
This escalation, however, did not manifest from the vacuum of the ether; it is the culmination of a series of precipitating actions that began unfolding on December 28th, when the Indian authorities unfurled a net over nine exchanges, ensnaring them with suspicions of malfeasance. The spectre of inaccessible funds, a byproduct of this entanglement, has since haunted Indian crypto traders, prompting a migration of deposits to local exchanges that operate within the nation's regulatory framework—a fortress against the uncertainties of the offshore crypto tempest.
The extent of the authorities' reach manifests further, beckoning Alphabet Inc.'s Google to follow in Apple's footsteps. Yet, in a display of the unpredictable nature of enforcement, the Google Play Store in India still played host to the very apps that Apple's digital Eden had forsaken as of a nondescript Wednesday afternoon, marked by the relentless march of time. The triad of power-brokers—Apple, Google, and India's technology ministry—has maintained a stance as enigmatic as the Sphinx, their communications as impenetrable as the vaults that secure the nation's precious monetary reserves.
Compounding the tightening of this digital noose, the Financial Intelligence Unit of India, a sentinel ever vigilant at the gates of financial propriety, unfurled a compliance show-cause notice to the nine offshore platforms, an ultimatum demanding they justify their elusive presence in Indian cyberspace. The FIU's decree echoed with clarity amidst the cacophony of regulatory overtures: these digital entities were tethered to operations sequestered in the shadows, skirting the reach of India's anti-money laundering edicts, their websites lingering in cyberspace like forbidden fruit, tantalisingly within reach yet potentially laced with the cyanide of non-compliance.
In this chaotic tableau of constraint and control, a glimmer of presence remains—only Bitstamp has managed to brave the regulatory storm, maintaining its presence on the Indian App Store, a lone beacon amid the turbulent sea of regimentation. Kraken, another leviathan of crypto depths, presented only its Pro version to the Indian connoisseurs of the digital marketplace. An aura of silence envelops industry giants such as Binance, Bitfinex, and KuCoin, their absence forming a void as profound as the dark side of the moon in the consciousness of Indian users. HTX, formerly known as Huobi, has announced a departure from Indian operations with the detached finality of a distant celestial body, cold and indifferent to the gravitational pull of India's regulatory orbit.
Compliances
In compliance with the provisions of the Prevention of Money Laundering Act (PMLA), 2002, and following the uproar over offshore crypto apps, Apple's App Store finally removed the apps of exchanges including Binance and KuCoin after the show-cause notices were issued. Their allegedly illegal operations and failure to comply with existing anti-money-laundering laws were the major reasons for the removal.
The Indian Narrative
The overarching narrative of India's embrace of rigid oversight aligns with a broader global paradigm shift, where digital financial assets are increasingly subjected to the same degree of scrutiny as their physical analogues. The persistence in imposing anti-money laundering provisions upon the crypto sector reflects this shift, with India positioning its regulatory lens in alignment with the stars of international accountability. The preceding year bore witness to seismic shifts as Indian authorities imposed a tax upon crypto transactions, a move that precipitated a downfall in trading volumes, reminiscent of Icarus's fateful flight—hubris personified as his waxen appendages succumbed to the unrelenting kiss of the sun.
On a local scale, trading powerhouses lament the imposition of a 1% levy, colloquially known as Tax Deducted at Source (TDS). This fiscal shackle drove an exodus of Indian crypto traders into the waiting, seemingly benevolent arms of offshore financial Edens free of such fiscal rites. As Sumit Gupta, CEO of CoinDCX, recounted, this migration led to a haemorrhaging of revenue; his estimate that a staggering 95% of trading volume abandoned local shores for the tranquil harbours of offshore havens punctuates the magnitude of the phenomenon.
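The mechanics of why a seemingly small 1% levy pushes high-frequency traders offshore become clear with simple arithmetic: the deduction applies per transaction, so repeated trades compound the drag on capital. The figures in this sketch are hypothetical:

```python
# Illustrative arithmetic for the 1% Tax Deducted at Source (TDS) on crypto
# trades described above. The starting capital and trade count are
# hypothetical; the point is that a per-transaction levy compounds, so
# frequent traders feel it far more than the headline 1% suggests.

TDS_RATE = 0.01  # 1% deducted at source on each sale

def capital_after_trades(capital, n_trades, rate=TDS_RATE):
    """Capital remaining after n sales, each losing `rate` to TDS
    (price movements ignored for clarity)."""
    for _ in range(n_trades):
        capital *= (1 - rate)
    return capital

start = 100_000.0  # e.g. INR 1 lakh
print(round(capital_after_trades(start, 10), 2))  # ~9.6% gone after 10 trades
```

After just ten trades, roughly 9.6% of the capital has been deducted, which helps explain the migration to venues where no such levy applied.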
Conclusion
Ultimately, the story of India's proactive clampdown on offshore crypto exchanges resembles a meticulously woven tapestry of regulatory ardour, financial prudence, and the inexorable progression towards a future where digital incarnations mirror the scrutinised tangibility of physical assets. It is a saga delineating a nation's valiant navigation through the tempestuous, cryptic waters of cryptocurrency, helming its ship with unwavering determination, with eyes keenly trained on the farthest reaches of the horizon. Here, amidst the fusion of digital and corporeal realms, India charts its destiny, setting its sails towards an inextricably linked future that promises to shape the contour of the global financial landscape.
References
- https://www.business-standard.com/markets/cryptocurrency/govt-escalates-clampdown-on-offshore-crypto-venues-like-binance-report-124011000586_1.html
- https://www.cnbctv18.com/technology/india-escalates-clampdown-on-offshore-crypto-exchanges-like-binance-18763111.htm
- https://economictimes.indiatimes.com/tech/technology/centre-blocks-web-platforms-of-offshore-crypto-apps-binance-kucoin-and-others/articleshow/106783697.cms?from=mdr

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question that we arrive at here is: While AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first of its kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is happening at a rapid pace: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These statistics are frequently cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, influencing data privacy laws globally and starting a domino effect of privacy regulations around the world. This precedent underscores the EU's proactive, citizen-centric approach to regulation.
Overview of the Draft EU AI Rules
This Draft General-Purpose AI Code of Practice details how the AI Act's rules apply to providers of general-purpose AI models, including models with systemic risks. The European AI Office facilitated the drawing up of the code; the process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation grounded in product safety and relies on harmonised standards to support compliance. These harmonised standards are sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society, and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates developers, deployers, and users of AI to meet mandates for transparency, risk management, and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular applications of GPAI include ChatGPT and other foundational models, such as Microsoft's Copilot, Google's BERT, Meta AI's Llama, and many others, all under constant development and upgradation. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs, and it lays down guidelines intended to enable greater transparency on what goes into developing them.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
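One way to picture what these risk-assessment obligations mean in practice is a simple risk register keyed to the categories the draft Code highlights. The sketch below is purely illustrative (the Code does not prescribe any data format, and the status values here are hypothetical):

```python
# Hypothetical sketch of a GPAI provider's internal risk register, keyed to
# the systemic-risk categories named in the draft Code of Practice (see the
# text above). Statuses are invented for illustration; the Code itself does
# not mandate any particular data structure.

RISK_REGISTER = {
    "cyber attacks": {"assessed": True, "mitigation": "red-teaming, incident response"},
    "large-scale discrimination": {"assessed": True, "mitigation": "bias evaluations"},
    "nuclear risks": {"assessed": False, "mitigation": None},
    "misinformation": {"assessed": True, "mitigation": "content provenance labelling"},
    "autonomous action without oversight": {"assessed": False, "mitigation": None},
}

def outstanding_risks(register):
    """Return categories still lacking an assessment or a documented mitigation."""
    return sorted(
        name for name, entry in register.items()
        if not entry["assessed"] or entry["mitigation"] is None
    )

print(outstanding_risks(RISK_REGISTER))
```

A register like this makes the compliance gap visible at a glance, which is the practical effect the Code's documentation requirements aim for.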
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burdens on MSMEs and startups could hinder innovation, and the voluntary nature of the Code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation, they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft#:~:text=Drafting%20of%20the%20Code%20of%20Practice%20is%20taking%20place%20under,the%20drafting%20of%20the%20code.
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/

Introduction
The 2023-24 annual report of the Union Home Ministry states that WhatsApp is among the primary platforms being targeted for cyber fraud in India, followed by Telegram and Instagram. Cybercriminals have been conducting frauds like lending and investment scams, digital arrests, romance scams, job scams, online phishing etc., through these platforms, creating trauma for victims and overburdening law enforcement, which is not always the best equipped to recover their money. WhatsApp’s scale, end-to-end encryption, and ease of mass messaging make it both a powerful medium of communication and a vulnerable target for bad actors. It has over 500 million users in India, which makes it a primary subject for scammers running illegal lending apps, phishing schemes, and identity fraud.
Action Taken by WhatsApp
In response to this worrying trend, and in keeping with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [updated as of 6.4.2023], WhatsApp has been banning millions of Indian accounts using automated tools, AI-based detection systems, and behaviour analysis that detect suspicious activity and misuse. In July 2021, it banned over 2 million accounts. By February 2025, this number had shot up to over 9.7 million, with 1.4 million accounts removed proactively, that is, before any user reported them. This may mean that the number of attacks has increased, or that WhatsApp’s detection systems have improved, or both; what it surely signals is the acknowledgement of a deeper, systemic challenge to India’s digital ecosystem and the growing scale and sophistication of cyber fraud, especially on encrypted platforms.
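To make the idea of "proactive" behaviour-based detection concrete, the sketch below scores an account from a few simple behavioural signals. This is a deliberately minimal, hypothetical heuristic; WhatsApp's actual systems are proprietary, operate at vastly larger scale, and work without reading message content (which end-to-end encryption prevents):

```python
# A minimal, hypothetical sketch of behaviour-based account scoring of the
# kind a platform might use to flag accounts before any user report arrives.
# The signals, thresholds, and weights are invented for illustration; they
# are NOT WhatsApp's actual detection logic.

def suspicion_score(account):
    """Combine simple behavioural signals into a score; higher = more suspect."""
    score = 0
    if account["messages_per_minute"] > 20:              # bulk/automated messaging
        score += 2
    if account["pct_recipients_not_in_contacts"] > 0.9:  # mostly messaging strangers
        score += 2
    if account["account_age_days"] < 7:                  # newly registered account
        score += 1
    if account["user_reports"] > 0:                      # reactive signal
        score += 3
    return score

new_bulk_sender = {
    "messages_per_minute": 45,
    "pct_recipients_not_in_contacts": 0.98,
    "account_age_days": 2,
    "user_reports": 0,   # the proactive case: no user has reported it yet
}
print(suspicion_score(new_bulk_sender))  # flagged on behaviour alone
```

The key point the example illustrates is that such metadata-level signals (volume, recipients, account age) can flag abuse without inspecting encrypted message content, which is how proactive enforcement and encryption can coexist.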
CyberPeace Insights
- Under Rule 4(1)(d) of the IT Rules, 2021, significant social media intermediaries (SSMIs) are required to implement automated tools to detect harmful content. But enforcement has been uneven. WhatsApp’s enforcement action demonstrates what effective compliance with proactive moderation can look like because of the scale and transparency of its actions.
- Platforms must treat fraud not just as a content violation but as a systemic abuse of the platform’s infrastructure.
- India is not alone in facing this challenge. The EU’s Digital Services Act (DSA), for instance, mandates large platforms to conduct regular risk assessments, maintain algorithmic transparency, and allow independent audits of their safety mechanisms. These steps go beyond just removing bad content by addressing the design of the platform itself. India can draw from this by codifying a baseline standard for fraud detection, requiring platforms to publish detailed transparency reports, and clarifying the legal expectations around proactive monitoring. Importantly, regulators must ensure this is done without compromising encryption or user privacy.
- WhatsApp’s efforts are part of a broader, emerging ecosystem of threat detection. The Indian Cyber Crime Coordination Centre (I4C) is now sharing threat intelligence with platforms like Google and Meta to help take down scam domains, malicious apps, and sponsored Facebook ads promoting illegal digital lending. This model of public-private intelligence collaboration should be institutionalized and scaled across sectors.
Conclusion: Turning Enforcement into Policy
WhatsApp’s mass account ban is not just about enforcement but an example of how platforms must evolve. As India becomes increasingly digital, it needs a forward-looking policy framework that supports proactive monitoring, ethical AI use, cross-platform coordination, and user safety. The digital safety of users in India and those around the world must be built into the architecture of the internet.
References
- https://scontent.xx.fbcdn.net/v/t39.8562-6/486805827_1197340372070566_282096906288453586_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=BRGwyxF87MgQ7kNvwHyyW8u&_nc_oc=AdnNG2wXIN5F-Pefw_FTt2T4K6POllUyKpO7nxwzCWxNgQEkVLllHmh81AHT2742dH8&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=iaQzNQ8nBZzxuIS4rXLOkQ&oh=00_AfEnbac47YDXvymJ5vTVB-gXteibjpbTjY5uhP_sMN9ouw&oe=67F95BF0
- https://scontent.xx.fbcdn.net/v/t39.8562-6/217535270_342765227288666_5007519467044742276_n.pdf?_nc_cat=110&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=aj6og9xy5WQQ7kNvwG9Vzkd&_nc_oc=AdnDtVbrQuo4lm3isKg5O4cw5PHkp1MoMGATVpuAdOUUz-xyJQgWztGV1PBovGACQ9c&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=gabMfhEICh_gJFiN7vwzcA&oh=00_AfE7lXd9JJlEZCpD4pxW4OOc03BYcp1e3KqHKN9-kaPGMQ&oe=67FD6FD3
- https://www.hindustantimes.com/india-news/whatsapp-is-most-used-platform-for-cyber-crimes-home-ministry-report-101735719475701.html
- https://www.indiatoday.in/technology/news/story/whatsapp-bans-over-97-lakhs-indian-accounts-to-protect-users-from-scam-2702781-2025-04-02