#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been identified as digitally manipulated. The images circulating on social media were produced with AI-based manipulation tools; the original photograph, published on several credible websites, shows no smiling agents. The incident took place at a rally in Butler, Pennsylvania on July 13, 2024, when Thomas Matthew Crooks opened fire at Trump; one attendee was killed and two were critically injured before the Secret Service stopped the shooter. The circulating photos, in which smiles were faked, have stirred suspicion online. The CyberPeace Research Team verified and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in those reports were different from the viral ones.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.


We then checked with another AI image detection tool, Content at Scale's AI Image Detection, which also found the photo to be AI-manipulated.

Comparison of both photos:
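For readers who want to reproduce this kind of side-by-side check, the sketch below shows one simple way to visualise where two versions of a photo differ, using Python's Pillow library. This is an illustrative sketch only, not the tooling used in this fact check; the filenames original_cnn.jpg and viral_post.jpg are hypothetical placeholders for local copies of the two images.

```python
from PIL import Image, ImageChops

# Hypothetical filenames for the original CNN photo and the viral copy.
original = Image.open("original_cnn.jpg").convert("RGB")
viral = Image.open("viral_post.jpg").convert("RGB")

# Resize the viral copy to the original's dimensions so the pixel grids line up.
viral = viral.resize(original.size)

# Pixel-wise absolute difference: manipulated regions (e.g. altered faces)
# show up as bright areas, while untouched regions stay near black.
diff = ImageChops.difference(original, viral)
diff.save("difference_map.png")

# A crude numeric summary: the bounding box enclosing all non-zero differences.
print("Region that differs:", diff.getbbox())
```

Bright areas in the saved difference map indicate where the viral copy departs from the original, such as the altered facial expressions.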

Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is hailed as a game-changer, cracks beneath the shiny exterior raise serious concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale models, particularly image generation systems, rely on data centers powered by electricity that often comes from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- During the training phase, generative AI has high power consumption: vast amounts of computational power, often drawn from extensive GPU clusters running for weeks or even months, consume substantial electricity. The subsequent inference phase, in which these models are deployed for real-time use, can also be energy-intensive, especially given the millions of users of generative AI.
- The energy used to train and deploy AI models often comes from non-renewable sources, which adds to the carbon footprint. The data centers where generative AI computations take place are a significant source of carbon emissions if they rely on fossil fuels. According to a study reported by MIT, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco (a rough back-of-the-envelope calculation of how such figures arise is sketched after this list). According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, compared with 3% in 2022, as their energy demand grows by 160%.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
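To make the scale of such figures concrete, here is a rough back-of-the-envelope sketch of how training energy translates into CO2 emissions. All numbers are illustrative assumptions, not figures from the MIT or Goldman Sachs reports cited above.

```python
# Back-of-the-envelope training-emissions estimate (all inputs are assumptions).
num_gpus = 1_000          # size of the training cluster
power_per_gpu_kw = 0.7    # average draw per GPU, in kilowatts
training_days = 30        # length of the training run
pue = 1.2                 # data-center power usage effectiveness (overhead factor)
grid_intensity = 0.4      # kg of CO2 per kWh for a fossil-heavy grid

energy_kwh = num_gpus * power_per_gpu_kw * 24 * training_days * pue
emissions_tonnes = energy_kwh * grid_intensity / 1_000

print(f"Energy consumed: {energy_kwh:,.0f} kWh")      # ~604,800 kWh
print(f"CO2 emitted:     {emissions_tonnes:,.0f} t")  # ~242 tonnes
# At roughly one tonne of CO2 per passenger on a long round-trip flight,
# this single (assumed) run is on the order of a few hundred flights.
```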
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are a few examples of how companies are making efforts to reduce their carbon footprint, reduce energy consumption and overall be more environmentally friendly in the long run. Some of the efforts are as under:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations.
- Researchers at Microsoft, for instance, have developed a so-called “1-bit” architecture that can make LLMs 10 times more energy efficient than the current leading systems. It simplifies the models’ calculations by reducing each weight value to a single bit, slashing power consumption without sacrificing performance (a simplified sketch of the idea follows this list).
- OpenAI has been working on optimizing the efficiency of its models and exploring ways to reduce the environmental impact of AI and using renewable energy as much as possible including the research into more efficient training methods and model architectures.
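The sketch below illustrates the general idea behind such low-bit architectures: collapsing each weight to a single bit (here ±1) plus a per-tensor scale, so that multiplications reduce to additions and subtractions. It is a deliberately simplified NumPy illustration of the concept, not Microsoft's actual architecture or training method.

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """Collapse a weight matrix to 1-bit values (+1/-1) with a per-tensor scale.

    Multiplying by {+1, -1} weights needs only additions and subtractions,
    which is where the energy savings come from. Simplified illustration only.
    """
    scale = np.abs(w).mean()             # single scaling factor for the whole tensor
    w_bin = np.where(w >= 0, 1.0, -1.0)  # every weight becomes +1 or -1
    return w_bin, scale

def binary_linear(x: np.ndarray, w_bin: np.ndarray, scale: float) -> np.ndarray:
    """Linear layer using binarized weights: y ~= scale * (x @ w_bin.T)."""
    return scale * (x @ w_bin.T)

# Compare full-precision output with the 1-bit approximation on random data.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))       # full-precision weight matrix
x = rng.normal(size=(2, 8))       # a batch of two input vectors
w_bin, scale = binarize_weights(w)

print(x @ w.T)                         # exact output
print(binary_linear(x, w_bin, scale))  # 1-bit approximation
```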
Policy Recommendations
We advocate for a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only benefit the environment but also contribute to the greater, sustainable development of generative AI. Some suggestions are as follows:
- AI needs to adopt a climate justice framework, informed by diverse contexts and perspectives, while working in tandem with the UN’s Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialized AI accelerators and next-generation GPUs, can further mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycle management matters too: the World Economic Forum (WEF) projects that by 2050 the total amount of e-waste generated will surpass 120 million metric tonnes.
- Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy (see the quantization sketch after this list).
- Implementing federated learning approaches, where models are trained across decentralized devices rather than in centralized data centers, can distribute the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
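As a concrete illustration of the model-compression recommendation above, the following sketch applies PyTorch's post-training dynamic quantization to a small, made-up model, storing linear-layer weights as 8-bit integers instead of 32-bit floats. The model and sizes are arbitrary assumptions chosen only to demonstrate the technique.

```python
import torch
import torch.nn as nn

# A small illustrative model; the architecture is an arbitrary assumption.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
)

# Post-training dynamic quantization: Linear weights are stored as int8
# instead of float32, shrinking memory and the energy spent moving weights,
# typically with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def fp32_param_bytes(m: nn.Module) -> int:
    """Total size of floating-point parameters, in bytes."""
    return sum(p.numel() * p.element_size() for p in m.parameters())

print(f"Original FP32 weights: {fp32_param_bytes(model) / 1e6:.2f} MB")
# The quantized model packs its int8 weights internally (they no longer
# appear in .parameters()), so inference runs on the compressed weights:
out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 256])
```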
Final Words
The UN Sustainable Development Goals (SDGs) are crucial for the AI industry, just as for other industries, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, however, AI consumes enormous amounts of power without using it efficiently. If AI and its derivatives continue to stress the environment in this manner, they will strain clean water resources and keep relying on non-renewable power generation, adding to the industry's already large carbon footprint.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice and produce realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). The technology relies on advanced algorithms and machine learning techniques to create hyper-realistic videos from a person’s face, voice or likeness. The utilisation and progression of deepfake technology holds vast potential, both benign and malicious.
An example is the NGO Malaria No More, which in 2019 used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita, 2023, which addresses such acts under Section 356, the provision governing defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism is a violation of the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, aptly titled the Digital India Act, aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Disclosure requirements for synthetic media should be mandated, informing viewers that the content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
There has been growing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the malicious distribution of deepfakes that affect elections, and Virginia has enacted a law targeting the non-consensual distribution of deepfake pornography.
China promulgated regulations requiring explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation by requiring social media to flag deepfakes, otherwise they risk facing hefty fines and proposed transparency mandates under the EU AI Act. These measures highlight a global recognition of the risks that deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes being a significant source of risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts highlight the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation during these trying times will be important to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation

Executive Summary:
A widely circulated social media post claims that the Government of India has opened an account, the "Army Welfare Fund Battle Casualty", at Canara Bank to support the modernization of the Indian Army and assist injured or martyred soldiers. According to the post, citizens can voluntarily contribute any amount starting from ₹1, with no upper limit, and the fund was launched on a suggestion by actor Akshay Kumar that the Prime Minister of India later acknowledged through Mann Ki Baat and social media. However, no such decision has been taken by the Cabinet, and no such initiative has been officially announced.

Claim:
A viral social media post claims that the Government of India has launched a new initiative aimed at modernizing the Indian Army and supporting battle casualties through public donations. According to the post, a special bank account has been created to enable citizens to contribute directly toward the procurement of arms and equipment for the armed forces.
It further states that this initiative was introduced following a Cabinet decision and was inspired by a suggestion from Bollywood actor Akshay Kumar, which was reportedly acknowledged by the Prime Minister during his Mann Ki Baat address.
The post encourages individuals to donate any amount starting from ₹1, with no upper limit, and estimates that widespread public participation could generate up to ₹36,000 crore annually to support the armed forces. It also lists two bank accounts—one at Canara Bank (Account No: 90552010165915) and another at State Bank of India (Account No: 40650628094)—allegedly designated for the "Armed Forces Battle Casualties Welfare Fund."
The statement said, “The government established a range of welfare schemes for soldiers killed or disabled while undertaking military operations in recent combat. In 2020, the government established the 'Armed Forces Battle Casualty Welfare Fund (AFBCWF)', which is used to provide immediate financial assistance to families of soldiers, sailors and airmen who lose their lives or sustain grievous injury as a result of active military service.”

We also found a similar post from the past, which can be seen here.
Fact Check:
The Press Information Bureau (PIB) has responded to the viral post, stating that it is misleading and that the Government has not launched any appeal inviting public donations for the modernisation of the Indian Army or for purchasing weapons. The only known official initiative of the Ministry of Defence is the "Armed Forces Battle Casualties Welfare Fund", which was set up to support the families of soldiers who are martyred or grievously disabled in the line of duty, not to buy military equipment.

In addition, the bank account details mentioned in the viral post are false, and donations submitted to those accounts have reportedly been dishonoured.
The post also falsely claims that actor Akshay Kumar is promoting or heading this initiative; there is no official record or announcement connecting him to it. That said, in 2017 Akshay Kumar did encourage public contributions of one rupee per month to support the armed forces through a web portal called “Bharat Ke Veer”, a platform developed in partnership with the Ministry of Home Affairs.


Citizens should rely only on official government sources and ignore such misleading messages on social media platforms.
Conclusion:
The viral social media post suggesting that the Government of India has initiated a donation drive for the modernisation of the Indian Army and the purchase of weapons is misleading and inaccurate. According to the Press Information Bureau (PIB), no such initiative has been launched by the government, and the bank account details provided in the post are false, with reported cases of dishonoured transactions. The only legitimate initiative is the Armed Forces Battle Casualties Welfare Fund (AFBCWF), which provides financial assistance to the families of soldiers who are martyred or seriously injured in the line of duty. While actor Akshay Kumar played a key role in launching the Bharat Ke Veer portal in 2017 to support paramilitary personnel, he has no official connection to the viral claims.
- Claim: The government has launched a public donation appeal to fund Army weapon purchases.
- Claimed On: Social Media
- Fact Check: False and Misleading