#FactCheck: 'Israel Apologizes to Iran' Video Is AI-Generated
Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. Our research shows the footage is AI-generated, created using tools such as Google’s Veo, and is not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. This incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts and highlights the importance of media literacy and fact-checking.
Claim:
A verified X user with the handle "Iran, stop the war, we are sorry" posted a video featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, VEO 3. The video exhibits several noticeable inconsistencies such as robotic, unnatural speech, a lack of human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame, an Israeli flag suddenly appears in their hand, indicating possible AI-generated glitches.

We further analyzed the video using the AI detection tool HIVE Moderation, which indicated a 99% probability that the video was generated using artificial intelligence. To validate this finding, we separately examined a keyframe from the video, which likewise returned a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.
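For readers who want to reproduce the keyframe step, the sketch below shows one way to pull individual frames out of a downloaded clip so that each frame can be inspected manually or submitted to a detection tool. It is a minimal illustration, assuming OpenCV (cv2) is installed and the clip is saved locally under the hypothetical name viral_clip.mp4; it does not call HIVE Moderation itself.

```python
# Minimal keyframe-extraction sketch (assumes OpenCV is installed and the clip
# is saved locally as "viral_clip.mp4" -- a hypothetical filename).
import cv2

def extract_keyframes(video_path, every_n_seconds=1):
    """Save roughly one frame per interval so each can be checked individually."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))
```

Each saved frame can then be uploaded to a detection service or compared against reference imagery as part of the verification workflow.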

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: AI-generated video of Israelis saying "Stop the War, Iran We are Sorry".
- Claimed On: Social Media
- Fact Check: AI-Generated and Misleading
Related Blogs
Executive Summary
This report analyses a recently launched social engineering attack that abused Microsoft Teams and AnyDesk to deliver DarkGate, a Malware-as-a-Service (MaaS) tool. By approaching victims over Microsoft Teams and tricking them into installing AnyDesk, the attackers obtained unauthorized remote access and deployed DarkGate, which offers capabilities such as credential theft, keylogging, and fileless persistence. The malware was delivered through obfuscated AutoIt scripts, showing how threat actors continue to change their modus operandi. The case underscores the need for preventive security measures, such as endpoint protection, staff awareness, restricted use of remote-access tools, and network segmentation, to manage the increased risks that contemporary cyber threats present.
Introduction
Hackers routinely abuse new and reputable technologies and applications to spread their campaigns. The recent use of Microsoft Teams and AnyDesk to deliver DarkGate malware is a clear example of how attackers continue to combine social engineering with technical exploitation to penetrate organizational defenses. This report focuses on the technical details of the attack, its consequences, and preventive measures to counter the threat.
Technical Findings
1. Attack Initiation: Exploiting Microsoft Teams
The attackers leveraged Microsoft Teams as a trusted communication platform to deceive victims, exploiting its legitimacy and widespread adoption. Key technical details include:
- Spoofed Caller Identity: The attackers used impersonation techniques to masquerade as representatives of trusted external suppliers.
- Session Hijacking Risks: Exploiting Microsoft Teams session vulnerabilities, attackers aimed to escalate their privileges and deploy malicious payloads.
- Bypassing Email Filters: The initial email bombardment was designed to overwhelm spam filters and ensure that malicious communication reached the victim’s inbox.
2. Remote Access Exploitation: AnyDesk
After convincing victims to install AnyDesk, the attackers exploited the software’s functionality to achieve unauthorized remote access. Technical observations include:
- Command and Control (C2) Integration: Once installed, AnyDesk was configured to establish persistent communication with the attacker’s C2 servers, enabling remote control (a minimal monitoring sketch follows this list).
- Privilege Escalation: Attackers exploited misconfigurations in AnyDesk to gain administrative privileges, allowing them to disable antivirus software and deploy payloads.
- Data Exfiltration Potential: With full remote access, attackers could silently exfiltrate data or install additional malware without detection.
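The behaviour described above leaves visible traces on the endpoint. The sketch below is a minimal illustration, assuming the psutil package is installed; the tool names and "common ports" chosen here are reasonable defaults for illustration, not values taken from this campaign. It lists established outbound connections belonging to remote-access tools so that unexpected endpoints or non-standard ports can be reviewed.

```python
# Minimal monitoring sketch: list established outbound connections owned by
# remote-access tools. Tool names and "common ports" are illustrative only.
import psutil

REMOTE_ACCESS_TOOLS = {"anydesk.exe", "teamviewer.exe"}
COMMON_PORTS = {80, 443}

def remote_tool_connections():
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
            continue
        try:
            name = psutil.Process(conn.pid).name().lower()
        except psutil.NoSuchProcess:
            continue
        if name in REMOTE_ACCESS_TOOLS:
            note = "" if conn.raddr.port in COMMON_PORTS else " [non-standard port]"
            findings.append(f"{name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}{note}")
    return findings

if __name__ == "__main__":
    for line in remote_tool_connections():
        print(line)
```

On some systems psutil.net_connections() requires administrative privileges; a production deployment would rely on EDR or network telemetry rather than an ad-hoc script.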
3. Malware Deployment: DarkGate Delivery via AutoIt Script
The deployment of DarkGate malware utilized AutoIt scripting, a programming language commonly used for automating Windows-based tasks. Technical details include:
- Payload Obfuscation: The AutoIt script was heavily obfuscated to evade signature-based antivirus detection.
- Process Injection: The script employed process injection techniques to embed DarkGate into legitimate processes, such as explorer.exe or svchost.exe, to avoid detection (a related triage sketch follows this list).
- Dynamic Command Loading: The malware dynamically fetched additional commands from its C2 server, allowing real-time adaptation to the victim’s environment.
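In-memory injection into a genuine explorer.exe or svchost.exe cannot be spotted from process names alone and is best caught by the EDR tooling recommended later in this report. A related, simpler check that can be scripted is flagging look-alike copies of these commonly abused host processes that run from unexpected paths. The sketch below is a minimal illustration of that heuristic, assuming psutil is installed on a Windows host; the expected paths are the standard Windows locations.

```python
# Minimal triage heuristic: flag processes that borrow the names of commonly
# abused Windows host processes but do not run from their expected locations.
# This catches masquerading binaries, not true in-memory injection.
import psutil

EXPECTED_PATHS = {
    "explorer.exe": r"c:\windows\explorer.exe",
    "svchost.exe": r"c:\windows\system32\svchost.exe",
}

def suspicious_host_processes():
    findings = []
    for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
        name = (proc.info["name"] or "").lower()
        exe = (proc.info["exe"] or "").lower()
        expected = EXPECTED_PATHS.get(name)
        if expected and exe and exe != expected:
            findings.append((proc.info["pid"], name, exe))
    return findings

if __name__ == "__main__":
    for pid, name, exe in suspicious_host_processes():
        print(f"[!] {name} (pid {pid}) running from unexpected path: {exe}")
```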
4. DarkGate Malware Capabilities
DarkGate, now available as a Malware-as-a-Service (MaaS) offering, provides attackers with advanced features. Technical insights include:
- Credential Dumping: DarkGate used the Mimikatz module to extract credentials from memory and secure storage locations.
- Keylogging Mechanism: Keystrokes were logged and transmitted in real-time to the attacker’s server, enabling credential theft and activity monitoring.
- Fileless Persistence: Utilizing Windows Management Instrumentation (WMI) and registry modifications, the malware ensured persistence without leaving traditional file traces.
- Network Surveillance: The malware monitored network activity to identify high-value targets for lateral movement within the compromised environment.
5. Attack Indicators
Trend Micro researchers identified several indicators of compromise (IoCs) associated with the DarkGate campaign (a simple triage sketch follows the list):
- Suspicious Domains: example-remotesupport[.]com and similar domains used for C2 communication.
- Malicious File Hashes:
- AutoIt Script: 5a3f8d0bd6c91234a9cd8321a1b4892d
- DarkGate Payload: 6f72cde4b7f3e9c1ac81e56c3f9f1d7a
- Behavioral Anomalies:
- Unusual outbound traffic to non-standard ports.
- Unauthorized registry modifications under HKCU\Software\Microsoft\Windows\CurrentVersion\Run.
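The indicators above can be folded into a quick triage script. The sketch below is a minimal illustration that hashes files under a directory (C:\Users is an arbitrary example) against the MD5 values listed above and prints the current user's Run-key entries for manual review; it uses only Python's standard library and assumes a Windows host.

```python
# Minimal IoC triage sketch: match files against the campaign's published MD5
# hashes and list HKCU Run-key entries for review. Windows-only illustration.
import hashlib
import pathlib
import winreg

KNOWN_BAD_MD5 = {
    "5a3f8d0bd6c91234a9cd8321a1b4892d",  # AutoIt script (reported IoC)
    "6f72cde4b7f3e9c1ac81e56c3f9f1d7a",  # DarkGate payload (reported IoC)
}

def md5_of(path):
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root):
    for path in pathlib.Path(root).rglob("*"):
        try:
            if path.is_file() and md5_of(path) in KNOWN_BAD_MD5:
                print(f"[!] Known-bad hash: {path}")
        except (PermissionError, OSError):
            continue  # skip files we cannot read

def list_run_key_entries():
    key_path = r"Software\Microsoft\Windows\CurrentVersion\Run"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            print(f"Run entry: {name} -> {value}")  # review anything unexpected
            index += 1

if __name__ == "__main__":
    scan_directory(r"C:\Users")
    list_run_key_entries()
```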
Broader Cyber Threat Landscape
In parallel with this campaign, other phishing and malware delivery tactics have been observed, including:
- Cloud Exploitation: Abuse of platforms like Cloudflare Pages to host phishing sites mimicking Microsoft 365 login pages.
- Quishing Campaigns: Phishing emails with QR codes that redirect users to fake login pages (a minimal QR-decoding sketch follows this list).
- File Attachment Exploits: Malicious HTML attachments embedding JavaScript to steal credentials.
- Mobile Malware: Distribution of malicious Android apps capable of financial data theft.
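Quishing works because the URL is hidden inside an image, so standard link-scanning misses it. The sketch below is a minimal illustration of decoding the QR code from an email image so the target URL can be reviewed before anyone scans it with a phone; it assumes Pillow and pyzbar (which requires the ZBar shared library) are installed, and the filename is purely illustrative.

```python
# Minimal "quishing" triage sketch: decode QR codes embedded in an image and
# print their payloads (usually URLs) for review. Filename is illustrative.
from PIL import Image
from pyzbar.pyzbar import decode

def qr_targets(image_path):
    """Return the decoded payloads of all QR codes found in the image."""
    return [symbol.data.decode("utf-8", errors="replace")
            for symbol in decode(Image.open(image_path))]

if __name__ == "__main__":
    for url in qr_targets("suspicious_email_attachment.png"):
        print(f"QR code points to: {url}")
```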
Implications of the DarkGate Campaign
This attack highlights the sophistication of threat actors in leveraging legitimate tools for malicious purposes. Key risks include:
- Advanced Threat Evasion: The use of obfuscation and process injection complicates detection by traditional antivirus solutions.
- Cross-Platform Risk: DarkGate’s modular design enables its functionality across diverse environments, posing risks to Windows, macOS, and Linux systems.
- Organizational Exposure: The compromise of a single endpoint can serve as a gateway for further network exploitation, endangering sensitive organizational data.
Recommendations for Mitigation
- Enable Advanced Threat Detection: Deploy endpoint detection and response (EDR) solutions to identify anomalous behavior like process injection and dynamic command loading.
- Restrict Remote Access Tools: Limit the use of tools like AnyDesk to approved use cases and enforce strict monitoring.
- Use Email Filtering and Monitoring: Implement AI-driven email filtering systems to detect and block email bombardment campaigns.
- Enhance Endpoint Security: Regularly update and patch operating systems and applications to mitigate vulnerabilities.
- Educate Employees: Conduct training sessions to help employees recognize and avoid phishing and social engineering tactics.
- Implement Network Segmentation: Limit the spread of malware within an organization by segmenting high-value assets.
Conclusion
The use of Microsoft Teams and AnyDesk to spread DarkGate malware shows how steadily attackers’ capabilities continue to grow. The campaign highlights the need for organizations to maintain an adequate level of security preparedness, including threat identification, employee training, and strict control over access rights.
DarkGate is also a clear example of how such attacks have evolved into MaaS offerings: the barrier to launching highly complex attacks keeps falling, which again shows why a layered defense approach is crucial. Awareness and adaptability remain central to addressing the constantly evolving threats in cyberspace.
Introduction
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The mission aims to strengthen the Indian AI innovation ecosystem by democratizing computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects, and bolstering ethical AI. The mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components, such as IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI, over the next 5 years.
This financial outlay is intended to be fulfilled through a public-private partnership model to ensure a structured implementation of the IndiaAI Mission. The main objective is to create and nurture an ecosystem for India’s AI innovation. The mission is intended to act as a catalyst for shaping the future of AI for India and the world. AI has the potential to become an active enabler of the digital economy, and the Indian government aims to harness its full potential to benefit its citizens and drive the growth of its economy.
Key Objectives of India's AI Mission
● With the advancements in data collection, processing and computational power, intelligent systems can be deployed in varied tasks and decision-making to enable better connectivity and enhance productivity.
● India’s AI Mission will concentrate on benefiting India and addressing societal needs in primary areas of healthcare, education, agriculture, smart cities and infrastructure, including smart mobility and transportation.
● This mission will work with extensive academia-industry interactions to ensure the development of core research capability at the national level. This initiative will involve international collaborations and efforts to advance technological frontiers by generating new knowledge and developing and implementing innovative applications.
The strategies developed for implementing the IndiaAI Mission include public-private partnerships, skilling initiatives, and AI policy and regulation. An example of work towards the public-private partnership model is the pre-bid meeting hosted by the IT Ministry on 29th August 2024, which saw industry participation from Nvidia, Intel, AMD, Qualcomm, Microsoft Azure, AWS, Google Cloud and Palo Alto Networks.
Components of IndiaAI Mission
The IndiaAI Compute Capacity: The IndiaAI Compute pillar will build a high-end scalable AI computing ecosystem to cater to India's rapidly expanding AI start-ups and research ecosystem. The ecosystem will comprise AI compute infrastructure of 10,000 or more GPUs, built through public-private partnerships. An AI marketplace will offer AI as a service and pre-trained models to AI innovators.
The IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors. The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI innovation.
The IndiaAI Future Skills pillar will mitigate barriers to entry into AI programs and increase AI courses in undergraduate, master-level, and Ph.D. programs. Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.
The IndiaAI Startup Financing pillar will support and accelerate deep-tech AI startups, providing streamlined access to funding for futuristic AI projects.
The Safe & Trusted AI pillar will enable the implementation of responsible AI projects and the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks, recognising the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI.
CyberPeace Considerations for the IndiaAI Mission
● Data privacy and security are paramount as emerging privacy instruments aim to ensure ethical AI use. Addressing bias and fairness in AI remains a significant challenge, especially with poor-quality or tampered datasets that can lead to flawed decision-making, posing risks to fairness, privacy, and security.
● Geopolitical tensions and export control regulations restrict access to cutting-edge AI technologies and critical hardware, delaying progress and impacting data security. In India, where multilingualism and regional diversity are key characteristics, the unavailability of large, clean, and labeled datasets in Indic languages hampers the development of fair and robust AI models suited to the local context.
● Infrastructure and accessibility pose additional hurdles in India’s AI development. The country faces challenges in building computing capacity, with delays in procuring essential hardware, such as GPUs like Nvidia’s A100 chip, hindering businesses, particularly smaller firms. AI development relies heavily on robust cloud computing infrastructure, which remains in its infancy in India. While initiatives like AIRAWAT signal progress, significant gaps persist in scaling AI infrastructure. Furthermore, the scarcity of skilled AI professionals is a pressing concern, alongside the high costs of implementing AI in industries like manufacturing. Finally, the growing computational demands of AI lead to increased energy consumption and environmental impact, raising concerns about balancing AI growth with sustainable practices.
Conclusion
We advocate for the ethical and responsible development and adoption of AI to ensure ethical usage, safeguard privacy, and promote transparency. By setting clear guidelines and standards, the nation can harness AI's potential while mitigating risks and fostering trust. The IndiaAI Mission will propel innovation, build domestic capacities, create highly skilled employment opportunities, and demonstrate how transformative technology can be used for social good and to enhance global competitiveness.
References
● https://pib.gov.in/PressReleasePage.aspx?PRID=2012375

Executive Summary:
A video circulating online claims to show a man being assaulted by BSF personnel in India for selling Bangladesh flags at a football stadium. The footage has stirred strong reactions and cross-border concerns. However, our research confirms that the video does not show any incident in India. The content has been wrongly framed and shared with misleading claims, misrepresenting the actual incident.
Claim:
It is being claimed through a viral post on social media that a Border Security Force (BSF) soldier physically attacked a man in India for allegedly selling the national flag of Bangladesh in West Bengal. The viral video further implies that the incident reflects political hostility towards Bangladesh within Indian territory.

Fact Check:
After conducting thorough research, including visual verification, reverse image searches, and confirmation of elements in the video's background, we determined that the video was filmed outside the Bangabandhu National Stadium in Dhaka, Bangladesh, during the crowd buildup before an AFC Asian Cup match between Bangladesh and Singapore.
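One way to make this kind of visual verification repeatable is to compare perceptual hashes of keyframes from the viral clip with reference photos of the suspected real location or event: near-identical scenes produce hashes that differ in only a few bits. The sketch below is a minimal illustration, assuming Pillow and the imagehash package are installed; all file names and the distance threshold are hypothetical.

```python
# Minimal perceptual-hash comparison sketch for visual verification.
# A low Hamming distance between hashes suggests the images show the same scene.
from PIL import Image
import imagehash

def hash_distance(frame_path, reference_path):
    """Return the Hamming distance between the two images' perceptual hashes."""
    return imagehash.phash(Image.open(frame_path)) - imagehash.phash(Image.open(reference_path))

if __name__ == "__main__":
    distance = hash_distance("viral_keyframe.jpg", "stadium_reference.jpg")
    verdict = "likely the same scene" if distance <= 10 else "no strong match"
    print(f"Hamming distance {distance}: {verdict}")
```

A low distance is only a lead, not proof; the final call still rests on corroborating sources such as local news reports and eyewitness accounts.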

Further research confirmed that the man seen being assaulted is a local flag-seller named Hannan. Eyewitness accounts and local news reports indicate that Bangladesh Army personnel were present to manage the crowd that day, and that during the crowd-control effort a soldier used excessive force against the vendor. The incident caused public outrage, to which the Army responded by identifying the soldier responsible and taking disciplinary measures; the victim was reportedly offered compensation for the misconduct.

Conclusion:
Our research confirms that the viral video does not depict any incident in India. The claim that a BSF officer assaulted a man for selling Bangladesh flags is false and misleading. The real incident occurred in Bangladesh and involved a Bangladesh Army soldier during crowd control at a football event. This case highlights the importance of verifying viral content before sharing, as misinformation can lead to unnecessary panic, tension, and international misunderstanding.
- Claim: Viral video claims BSF personnel thrashing a person selling Bangladesh National Flag in West Bengal
- Claimed On: Social Media
- Fact Check: False and Misleading