#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to show a poster with the words "I Told Modi" has gone viral, falsely connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is presented as a mockery of Operation Sindoor, the counterterrorism operation India launched in response to the attack. This case of misinformation, which spreads misleading propaganda and draws attention away from real events, underscores how crucial it is to verify information before sharing it online.
Claim:
A widely shared viral video shows a man changing a poster that says "Tell Modi" to one that says "I Told Modi". The video allegedly references Operation Sindoor, which India launched in response to the Pahalgam terrorist attack of April 22, 2025, in which militants linked to The Resistance Front (TRF) killed 26 civilians.


Fact check:
On further research, we found the original post on Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation, we determined that the video has been modified with AI-generated content, presenting false or misleading information that does not reflect real events.

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster is part of a real public display is untrue. The video is manipulated footage from a Marvel film, with the text digitally altered to deceive viewers. The content should be disregarded, as it has been identified as false information.
- Claim: A viral video shows a man changing a "Tell Modi" poster to one reading "I Told Modi", mocking Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A viral photo of a photographer breaking down in tears is not connected to the Ram Mandir opening. Social media users are sharing a collage pairing images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a purported shot of a photographer crying at the sight of the deity. A Facebook post sharing the collage says, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the moment actually occurred during the 2019 AFC Asian Cup football tournament: during a match between Iraq and Qatar, an Iraqi photographer broke down in tears because Iraq had lost and was out of the competition.
Claims:
The photographer in the widely shared images broke down in tears on seeing the idol of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms such as X, Reddit, and Facebook. One Facebook user shared it with a caption that reads,




Fact Check:
The CyberPeace Research team ran a reverse image search on the photographer, which led to several memes using the picture, and from there to a Pinterest post captioned, “An Iraqi photographer as his team is knocked out of the Asian Cup of Nations”.

Taking a cue from this, we ran keyword searches to find the actual news behind the image. We landed on the official Asian Cup X (formerly Twitter) handle, where the image had been shared five years earlier, on 24 Jan 2019. The post reads, “Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against ! #AsianCup2019”

This confirmed the news and the origin of the image. Notably, while investigating this fact check we also found several other misinformation posts using the same photographer image with different captions, all equally false.
Conclusion:
The recent viral image of the photographer claimed to be associated with the Ram Mandir opening is misleading. The image is five years old and shows an Iraqi photographer crying during the Asian Cup football competition, not the recent Ram Mandir opening. Netizens are advised not to believe or share such misinformation posts on social media.
- Claim: A person in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images appeared on the dark web. The UK’s National Crime Agency records 800 monthly arrests for online child threats and estimates 840,000 adults as potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will be a part of the Crime and Policing Bill when it comes to parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given the cross-border nature of AI-generated abuse, enforcement will necessitate global cooperation between platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report

Executive Summary:
New Linux malware, dubbed DISGOMOJI, has been discovered by the cybersecurity firm Volexity. A Pakistan-based threat actor, tracked as ‘UTA0137’, has been identified behind it, with espionage aims focused primarily on Indian government entities. Like other backdoors and botnets used in cyberattacks, DISGOMOJI allows the attacker to issue commands to capture screenshots, search for and steal files, deploy additional payloads, and transfer files. DISGOMOJI uses Discord, a messaging service, for Command & Control (C2) and, unusually, uses emojis for C2 communication. The malware targets Linux operating systems.
The DISGOMOJI Malware:
- The DISGOMOJI malware opens a dedicated channel in a Discord server for each new victim, allowing the attacker to communicate with each victim individually.
- The malware communicates with the attacker-controlled Discord server using an emoji-based relay protocol: the attacker sends specific emojis as instructions, and the malware replies with emojis to report the status of each command.
- For instance, the ‘camera with flash’ emoji captures a screenshot of the victim’s device, the ‘fox’ emoji archives all Firefox profiles for theft, and the ‘skull’ emoji terminates the malware process.
- Conducting C2 communication through emojis on a legitimate messaging service makes it difficult for Discord to shut the malware down: even if a malicious server is blocked, the malware can resume operation with new Discord account credentials.
- Beyond the emoji-based C2, the malware also has capabilities such as network probing, tunneling, and data theft that help the UTA0137 threat actor achieve its espionage goals.
Specific emojis used for different commands by UTA0137:
- Camera with Flash (📸): Captures a screenshot of the target device and sends it to the command channel.
- Backhand Index Pointing Down (👇): Exfiltrates files from the targeted device, sending them to the command channel as attachments.
- Backhand Index Pointing Right (👉): Uploads a file from the victim’s device to Oshi (oshi[.]at), a web-hosted file storage service.
- Backhand Index Pointing Left (👈): Uploads a file from the victim’s device to transfer[.]sh, an online file-sharing service.
- Fire (🔥): Finds and transmits all files on the victim’s device with certain extensions, such as *.txt, *.doc, *.xls, *.pdf, *.ppt, *.rtf, *.log, *.cfg, *.dat, *.db, *.mdb, *.odb, *.sql, *.json, *.xml, *.php, *.asp, *.pl, *.sh, *.py, *.ino, *.cpp, and *.java.
- Fox (🦊): Compresses all Firefox-related profiles on the affected device.
- Skull (💀): Terminates the malware process using ‘os.Exit()’.
- Man Running (🏃♂️): Executes a command on the victim’s device; the command to run is passed as an argument.
- Index Pointing Up (👆): Uploads a file to the victim’s device; the file to upload is attached along with this emoji.
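The command set above can be summarised, from a defender's perspective, as a simple lookup table. The following sketch is a hypothetical analyst-side helper (not the actual malware code) that decodes a captured sequence of emoji C2 messages into the commands they represent, using the mapping reported above:

```python
# Analyst-side decoder for the DISGOMOJI emoji command set.
# The emoji-to-command mapping follows the reported behaviour; the
# decoder itself is an illustrative helper for triaging C2 transcripts.

EMOJI_COMMANDS = {
    "\U0001F4F8": "screenshot",             # 📸 camera with flash
    "\U0001F447": "exfiltrate_file",        # 👇 backhand index pointing down
    "\U0001F449": "upload_to_oshi",         # 👉 backhand index pointing right
    "\U0001F448": "upload_to_transfer_sh",  # 👈 backhand index pointing left
    "\U0001F525": "harvest_documents",      # 🔥 fire
    "\U0001F98A": "zip_firefox_profiles",   # 🦊 fox
    "\U0001F480": "terminate",              # 💀 skull
    "\U0001F3C3": "run_command",            # 🏃 man running
    "\U0001F446": "drop_file",              # 👆 index pointing up
}

def decode_c2_transcript(messages):
    """Map each observed emoji message to the command it represents."""
    return [EMOJI_COMMANDS.get(m, "unknown") for m in messages]
```

For example, a transcript containing the camera and skull emojis decodes to a screenshot capture followed by process termination.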
Analysis:
The analysis was carried out on one of the indicators of compromise (IoC), the file with SHA-256 hash C981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002.
Most vendors on VirusTotal flag this file as a trojan, and the relation graph illustrates the malicious nature of the domains and IPs it contacts.
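A hash like this can be checked programmatically against VirusTotal. The sketch below uses VirusTotal's public v3 file-report endpoint; the API key is a placeholder, and the response-parsing helper assumes the standard `last_analysis_stats` structure of a v3 file report:

```python
# Minimal sketch: look up an IoC SHA-256 hash on VirusTotal (API v3)
# and summarise how many engines flag it as malicious.
# VT_API_KEY is a placeholder you must replace with your own key.
import json
import urllib.request

IOC_SHA256 = "c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002"
VT_API_KEY = "YOUR_API_KEY_HERE"  # placeholder

def fetch_vt_report(sha256, api_key):
    """Fetch the VirusTotal v3 file report for a given SHA-256 hash."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize_detections(report):
    """Return (malicious_count, total_engines) from a v3 file report."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0), sum(stats.values())
```

Calling `fetch_vt_report(IOC_SHA256, VT_API_KEY)` and passing the result to `summarize_detections` yields the vendor-detection ratio described above.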


Discord & C2 Communication for UTA0137:
- Stealthiness: Discord is a well-known messaging platform used for many legitimate purposes, so messages and files sent through it do not attract suspicion. This stealth allows UTA0137 to remain dormant for long periods before acting.
- Customization: UTA0137 creates a dedicated channel on the Discord server for each victim, allowing the attackers to communicate with each victim individually and making operations more precise and efficient.
- Emoji-based protocol: Using emojis for C2 communication complicates any attempt by Discord to disrupt the malware’s operations. Even if the malicious server is banned, the malware can recover by fetching fresh Discord credentials from its C2 infrastructure.
- Persistence: The malware persists on the compromised system and survives reboots, so it can continue operating without being detected by the owner of the hacked system.
- Advanced capabilities: DISGOMOJI also offers network mapping with the Nmap scanner, network tunneling through Chisel and Ligolo, and data exfiltration via file-sharing services. These capabilities further aid UTA0137’s espionage goals.
- Social engineering: The malware can display pop-up windows and prompts, such as a fake Firefox update, to trick the user into entering a password.
- Dynamic credential fetching: The malware does not hardcode the credentials used to connect to the Discord server but fetches them dynamically, making it harder for analysts to locate the C2 server.
- Bogus informational and error messages: The malware never displays genuine information or errors, so its malicious behaviour cannot easily be deciphered.
Recommendations to mitigate the risk of UTA0137:
- Regularly Update Software and Firmware: Regularly update all application software and device firmware, particularly on routers, to prevent attackers from exploiting known, disclosed flaws such as CVE-2024-3080 and CVE-2024-3912 on ASUS routers.
- Implement Multi-Factor Authentication: Given how often user accounts are attacked, incorporate multi-factor authentication to add a further layer of security.
- Deploy Advanced Malware Protection: Deploy robust endpoint protection that can recognise and block the execution of the DISGOMOJI malware and similar threats.
- Enhance Network Segmentation: Apply stringent network isolation to compartmentalise key systems and data from the rest of the network, minimising attack exposure.
- Monitor Network Activity: Continuously monitor the network to identify and handle security breaches, watching in particular for the use of tools such as Nmap, Chisel, and Ligolo.
- Utilize Threat Intelligence: Leverage advanced threat intelligence to stay informed about known threats and vulnerabilities and take informed action.
- Secure Communication Channels: Monitor and restrict the use of Discord and similar services where they are not required, and guard against credential leakage, to prevent their abuse as a C2 attack vector.
- Enforce Access Control: Regularly review and update user authentication processes, adopting stricter access controls so that only authorised personnel can reach sensitive systems and information.
- Conduct Regular Security Audits: Perform periodic security audits to uncover weaknesses in the network and systems.
- Implement Incident Response Plan: Based on a risk assessment, design and establish an efficient incident response plan that supports early identification, isolation, and management of security breaches.
- Educate Users: Educate users on cybersecurity hygiene and conduct regular training on threats such as phishing and social engineering.
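One concrete, low-effort way to act on published IoCs like the hash analysed above is a file-hash sweep. The following is a minimal sketch, assuming a defender-maintained set of known-bad SHA-256 hashes (here seeded with the DISGOMOJI IoC) and an arbitrary directory to scan:

```python
# Sketch: sweep a directory tree for files whose SHA-256 matches a
# known IoC. The IoC set and scan root are illustrative.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002",
}

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root):
    """Yield paths under `root` whose SHA-256 matches a known IoC."""
    for p in Path(root).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
            yield p
```

Chunked reading keeps memory use constant even for large files; the sweep can be scheduled alongside the regular security audits recommended above.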
Conclusion:
Volexity discovered a new Pakistan-based threat actor, UTA0137, using the DISGOMOJI malware to attack Indian government institutions, issuing commands as emojis through the Discord app. The malware is capable of exfiltration and aims to steal the data of government entities, and UTA0137 has continuously improved it over time to maintain persistent communication with victims. The case underlines the necessity of strong protection against malware, secure and unique credentials, frequent software updates, and capable anti-malware tools. Organisations can minimise advanced threats like DISGOMOJI and protect sensitive data through improved network segmentation, continuous monitoring of activity, and user awareness.
References:
- https://otx.alienvault.com/pulse/66712446e23b1d14e4f293eb
- https://thehackernews.com/2024/06/pakistani-hackers-use-disgomoji-malware.html?m=1
- https://cybernews.com/news/hackers-using-emojis-to-command-malware/
- https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/