#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The pictures making the rounds on social media were produced using AI manipulation tools. The original image, carried by several credible news websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks opened fire at Trump's rally in Butler, PA; one attendee was killed and two were critically injured. The Secret Service neutralised the shooter, and the circulating photos with fabricated smiles have stirred up suspicion. The CyberPeace Research Team investigated and debunked the manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

The original image was published by CNN; in it, the US Secret Service agents shielding Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.


We then ran the image through another AI image detection tool, contentatscale's AI image detector, which also flagged it as AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources supporting the claim and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
In today's world, we can access any information in seconds, from the comfort of our homes or offices. The internet and its applications have made access to information remarkably easy, but the biggest question that still remains unanswered is: which information is legitimate and which is fake? As netizens, we must be critical of what information we access and how.
Influence of Bad actors
Bad actors are one of the biggest threats to our cyberspace, filling the online world with fear and activities that directly impact users' financial or emotional well-being by exploiting their vulnerabilities and attacking them through social engineering. One such issue is website spoofing. In website spoofing, bad actors create a website that closely resembles the original website of a reputed brand. The similarity is so uncanny that first-time or occasional users find it very difficult to tell the two websites apart. This is essentially an attempt to harvest sensitive information, such as personal and financial details, and in some cases to spread malware onto the user's system to facilitate other forms of cybercrime. Such websites often advertise lucrative offers or deals, making it easier for people to fall prey to them. In turn, the bad actors can obtain sensitive information directly from users without ever calling or messaging them.
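One simple, illustrative way to catch the lookalike domains used in website spoofing is to compare a visited domain against a list of trusted brand domains using edit distance: a near-miss is suspicious. The sketch below is a toy heuristic, not a production detector; the domain names and the edit-distance threshold are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(domain: str, trusted: list[str]) -> bool:
    """Flag domains that are a near-miss (1-2 edits) of a trusted domain."""
    return any(0 < edit_distance(domain, t) <= 2 for t in trusted)

print(looks_like_spoof("arnazon.com", ["amazon.com"]))  # True: 'rn' mimics 'm'
print(looks_like_spoof("amazon.com", ["amazon.com"]))   # False: exact match
```

Real-world detectors combine many more signals (homoglyphs, certificate age, page similarity), but the core idea of measuring "how close is this to a brand I trust" is the same.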
The Incident
A Noida-based senior citizen couple faced an issue with their dishwasher and, to get it fixed, looked up the customer care number on their web browser. The couple came across a customer care number, 1800258821, purportedly for IFB, an electronics company. They dialled the number and got in touch with a fake customer care representative who, upon hearing the couple's issue, directed them to a supposedly senior official of the company. This "senior official" spoke to the lady; despite the call dropping a few times, he was adamant about staying in touch with her. Once he had established trust, he asked the lady to download an app, which he portrayed as a tool to register complaints and carry out quick actions. The fake official asked the lady to share her location, grant the application several access permissions, and share a four-digit OTP that looked harmless. He further asked her to make a transaction of Rs 10 as a complaint processing fee. Up to this point, the couple was under the impression that their complaint had been registered and that the issue with their dishwasher would be rectified soon.
Later that night, the couple received a message from their bank informing them that Rs 2.25 lakh had been debited from their joint bank account. The following morning, they saw yet another text message informing them of a debit of Rs 5.99 lakh from the same account. The couple immediately realised that they had become victims of cyber fraud. They promptly lodged a complaint on the cyber fraud helpline 1930 and with their bank. An FIR has been registered with the Noida Cyber Cell.
How can senior citizens prevent such frauds?
Senior citizens can be particularly vulnerable to cyber frauds due to their lack of familiarity with technology and potential cognitive decline. Here are some safeguards that can help protect them from cyber frauds:
- Educate seniors on common cyber frauds: It’s important to educate seniors about the most common types of cyber frauds, such as phishing, smishing, vishing, and scams targeting seniors.
- Use strong passwords: Encourage seniors to use strong and unique passwords for their online accounts and to change them regularly.
- Beware of suspicious emails and messages: Teach seniors to be wary of suspicious emails and messages that ask for personal or financial information, even if they appear to be from legitimate sources.
- Verify before clicking: Encourage seniors to verify the legitimacy of links before clicking on them, especially in emails or messages.
- Keep software updated: Ensure seniors keep their software, including antivirus and operating system, up to date.
- Avoid public Wi-Fi: Discourage seniors from using public Wi-Fi for sensitive transactions, such as online banking or shopping.
- Check financial statements: Encourage seniors to regularly check their bank and credit card statements for any suspicious transactions.
- Secure devices: Help seniors secure their devices with antivirus and anti-malware software and ensure that their devices are password protected.
- Use trusted sources: Encourage seniors to use trusted sources when making online purchases or providing personal information online.
- Seek help: Advise seniors to seek help if they suspect they have fallen victim to a cyber fraud. They should contact their bank, credit card company or report the fraud to relevant authorities. Calling 1930 should be the first and primary step.
Conclusion
Cyberspace is a new space for people of all generations. The older population is a little more vulnerable in it, as they have not used gadgets or the internet for most of their lives; now they depend on devices and applications for their convenience, yet they still do not fully understand the technology and its dark side. As netizens, we are responsible for safeguarding both the youth and the older population to create a wholesome, safe, secure and sustainable cyber ecosystem. It is time to put the youth's understanding of tech and the life experience of the older population in synergy to create SOPs and best practices for eradicating such cyber frauds from our cyberspace. CyberPeace Foundation has created a CyberPeace Helpline for victims, where they will be given timely assistance in resolving their issues; victims can reach the helpline on +91 95700 00066 or mail their issues to helpline@cyberpeace.net.

Starting in mid-December 2024, a series of attacks targeted Chrome browser extensions. Cyberhaven, a California-based data protection company, fell victim to one of these attacks. Though identified in the U.S., the geographical extent and potential of the attack are yet to be determined. Assessing these cases can help us be better prepared for similar incidents in the future.
The Attack
Browser extensions are small software applications that add functionality or features to a web browser. They are written in HTML, CSS, or JavaScript and, like other software, can be coded to deliver malware. Also known as plug-ins, they have access to their own set of Application Programming Interfaces (APIs). They can also be used to remove unwanted elements, such as pop-up advertisements and auto-play videos, when one lands on a website. Examples of browser extensions include ad blockers (for blocking ads and content filtering) and StayFocusd (which limits the time users spend on a particular website).
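For context, a Chrome extension declares its privileges in a manifest file, and the permissions listed there define what it can touch. A minimal Manifest V3 sketch might look like the following; the extension name and host pattern are illustrative placeholders, not a real extension:

```json
{
  "manifest_version": 3,
  "name": "Example Minimal Extension",
  "version": "1.0.0",
  "description": "Requests only the permissions it actually needs.",
  "permissions": ["storage"],
  "host_permissions": ["https://example.com/*"]
}
```

Every entry added to `permissions` or `host_permissions` widens what a compromised extension can do, which is why the recommendations below stress keeping them minimal.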
In the aforementioned attack, the publisher of the browser extension at Cyberhaven received a phishing mail from an attacker posing as Google Chrome Web Store Developer Support. It claimed that the extension did not comply with Chrome Web Store policies and encouraged the user to click on a "Go to Policy" action item, which led to a page that granted permissions to a malicious OAuth application called "Privacy Policy Extension" (OAuth, or Open Authorisation, is a widely adopted standard used to authorise secure access via temporary tokens). Once the permission was granted, the attacker was able to inject malicious code into the target's Chrome browser extension and steal user access tokens and session cookies. Further investigation revealed that logins for certain AI and social media platforms were targeted.
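One defensive habit that helps against exactly this kind of phishing mail is verifying that the sender's domain exactly matches the claimed organisation's domain, rather than merely containing it. The sketch below is a minimal illustration; the addresses and domains are made-up examples, not data from the Cyberhaven incident:

```python
def sender_domain_matches(from_address: str, official_domain: str) -> bool:
    """Return True only if the sender's domain is exactly the official one.

    Substring checks are unsafe: "google.com.attacker.io" contains
    "google.com" but is controlled by whoever owns "attacker.io".
    """
    domain = from_address.rsplit("@", 1)[-1].lower().rstrip(">")
    return domain == official_domain.lower()

print(sender_domain_matches("support@google.com", "google.com"))              # True
print(sender_domain_matches("support@google.com.attacker.io", "google.com"))  # False
```

A real mail filter would also check SPF/DKIM/DMARC results, but the exact-match rule alone defeats the common "official-looking suffix" trick.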
CyberPeace Recommendations
As attacks of such range continue to occur, it is encouraged that companies and developers take active measures that would make their browser extensions less susceptible to such attacks. Google also has a few guidelines on how developers can safeguard their extensions from their end. These include:
- Minimal Permissions for Extensions: Extensions should request only the minimum permissions they need, limited to the APIs and websites they actually depend on, as limiting extension privileges limits the surface area an attacker can exploit.
- Prioritising Protection of Developer Accounts: A security breach here could compromise all users' data, as it would allow attackers to tamper with extensions via malicious code. Enabling 2FA (2-factor authentication), ideally by setting up a security key, is endorsed.
- HTTPS over HTTP: HTTPS should be preferred over HTTP, as it requires a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificate from an independent certificate authority (CA). This creates an encrypted connection between the server and the web browser.
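On the client side, modern TLS libraries enforce exactly the checks the HTTPS point above relies on: certificate validation against trusted CAs and hostname verification. A minimal sketch using Python's standard library shows the secure defaults (no network connection is made here):

```python
import ssl

# create_default_context() returns a context configured for server
# authentication: it loads the system's trusted CA certificates and
# enables both certificate and hostname verification by default.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must validate
print(context.check_hostname)                    # True: cert must match the hostname
```

Wrapping a socket with `context.wrap_socket(sock, server_hostname=...)` will then refuse any connection whose certificate fails these checks, which is what makes HTTPS meaningfully safer than plain HTTP.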
Lastly, as was done in the case of the attack at Cyberhaven, it is encouraged to promote the practice of transparency when such incidents take place to better deal with them.
References
- https://indianexpress.com/article/technology/tech-news-technology/hackers-hijack-companies-chrome-extensions-cyberhaven-9748454/
- https://indianexpress.com/article/technology/tech-news-technology/google-chrome-extensions-hack-safety-tips-9751656/
- https://www.techtarget.com/whatis/definition/browser-extension
- https://www.forbes.com/sites/daveywinder/2024/12/31/google-chrome-2fa-bypass-attack-confirmed-what-you-need-to-know/
- https://www.cloudflare.com/learning/ssl/why-use-https/

Introduction
The growth of online interaction and the popularity of social media platforms have made cyberspace a breeding ground for the generation and spread of misinformation. Misinformation propagates more easily and faster on online social media platforms than through traditional news media sources like newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse and indefinitely store massive volumes of data. Constant surveillance of digital platforms can help detect and promptly respond to false information and misinformation content.
During the recent Israel-Hamas conflict, there was a lot of misinformation spread on big platforms like X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top two spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms provide users with instant connectivity, allowing them to share information quickly with other users without the approval of a gatekeeper, such as an editor in traditional media channels.
Consider the misinformation generated around the elections that took place in 2024 (in more than 100 countries), the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip: the sheer volume of information, both true and false, has been immense, and identifying accurate information amid real-time misinformation is challenging. The dilemma is that traditional content moderation techniques may not be sufficient to curb it. Hence, a dedicated real-time misinformation surveillance system, backed by AI with a degree of human oversight and balanced against the privacy of users' data, can prove to be a good mechanism to counter misinformation on larger platforms. Concerns regarding data privacy need to be prioritised before such technologies are deployed on platforms with large user bases.
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternate perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only the necessary data and adopting a consent-based approach protects user privacy and enhances transparency and trust. It further protects users from the stifling of dissent and from profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities while ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The references from the EU's Digital Services Act and Singapore's POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible and ethical online ecosystem. Balancing ethics, privacy and policy-driven AI solutions for real-time misinformation monitoring is the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL