# FactCheck: AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The image is being shared with the false claim that it shows President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has identified the photo as a product of generative AI rather than a genuine photograph; multiple visual discrepancies in the picture mark it as AI-generated.
Claims:
A viral image claims to show US President Joe Biden wearing a military uniform during a meeting with military officials. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.

Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest that this is an AI-generated image.

First, President Biden's eyes are rendered fully black; second, the military official's face is blended; third, the phone stands upright without any support.
We then ran the image through an AI image detection tool.

The tool rated the image as 4% human and 96% AI, indicating that it is AI-generated content.
To verify the result, we ran the image through a second tool, Hive Detector.

Hive Detector rated the image as 100% AI-generated, which strongly indicates deepfake content.
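For readers who want to automate this kind of check, the sketch below shows how a detection service might be queried programmatically. It is a minimal illustration only: the endpoint URL, the API key header, and the `ai_probability` response field are hypothetical placeholders, not the actual API of Hive Detector or any specific tool.

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical endpoint and credentials -- replace with a real
# detection service's documented API before use.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"

def check_image(path: str) -> float:
    """Send an image to a (hypothetical) AI-image detector and
    return the probability that it is AI-generated."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_probability": 0.96, "human_probability": 0.04}
    return response.json()["ai_probability"]

if __name__ == "__main__":
    score = check_image("viral_image.jpg")
    print(f"Estimated probability of AI generation: {score:.0%}")
```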
Conclusion:
The growth of AI-generated content makes it increasingly difficult to separate fact from fiction, particularly on social media. The fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. With technology constantly evolving, it is vital that people remain watchful and rely on verified sources to counter the spread of disinformation. Initiatives to raise awareness of the existence and impact of AI-generated content should also be undertaken to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials.
- Claimed on: X
- Fact Check: Fake

Introduction
In the age of digital advancement, as technology continually grows, so do the methods of crime. The rise of cybercrime poses a variety of threats to individuals, organizations, businesses, and government agencies. To combat such crimes, law enforcement agencies are looking for innovative solutions. One such solution comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
The Surat Police in Gujarat, India, have launched an AI-based WhatsApp chatbot called the "Surat Police Cyber Mitra Chatbot" to tackle growing cybercrime. The chatbot provides quick assistance to individuals dealing with various cyber issues, from reporting cybercrimes to receiving safety tips. The initiative is the first of its kind in the country, showcasing the Surat Police's dedication to using advanced technology for public safety. The Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvement in technological capability, data acquisition related to cybercrime, and training for police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has become more complex due to the interconnectivity of digital devices and the internet. Criminals exploit vulnerabilities in software, networks, and human behavior to perpetrate a wide range of malicious activities for illicit gain. Individuals and organizations face cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. This innovative approach allows citizens to raise issues and queries from their doorstep and receive immediate, accurate responses. The chatbot is accessible 24/7 and serves as a reliable resource for obtaining legal information related to cyber fraud.
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police has taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust in law enforcement and improve the legal system by addressing citizen issues within seconds, from financial disputes to cyber fraud incidents.
This accessibility extends to inquiries such as how to report financial crimes or cyber fraud incidents and how to understand legal procedures. The availability of accurate information will not only enhance citizens' trust in the police and contribute to the efficiency of law enforcement operations, but will also lead to more informed interactions between citizens and the police, fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
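The internal workings of the Cyber Mitra chatbot have not been made public. Purely as an illustration of the general pattern such a helpline service follows, the sketch below shows a minimal rule-based responder of the kind that could sit behind a messaging webhook; all keywords and canned replies here are invented for the example and are not taken from the actual bot.

```python
# Minimal sketch of a rule-based FAQ responder -- the general pattern
# behind many helpline chatbots. Keywords and replies are illustrative.

FAQ_RULES = {
    "report fraud": "To report cyber fraud, call the national helpline 1930 "
                    "or file a complaint at https://cybercrime.gov.in.",
    "otp": "Never share OTPs or banking passwords with anyone, "
           "including callers claiming to be bank staff.",
    "phishing": "Do not click links in unsolicited messages; verify the "
                "sender through official channels first.",
}

DEFAULT_REPLY = ("Sorry, I did not understand that. "
                 "Try asking about 'report fraud', 'otp', or 'phishing'.")

def respond(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in FAQ_RULES.items():
        if keyword in text:
            return answer
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(respond("How do I report fraud on my bank account?"))
```

A production system would replace the keyword table with an NLP intent classifier and connect to a messaging platform's business API, but the request-match-respond loop remains the same.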
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/

Introduction
Citizens are using technology to their advantage, and the resulting upskilling across the population is driving innovation in India. As we go deeper into cyberspace, we must maintain our cybersecurity efficiently and effectively. When bad actors use technology to their advantage, victims often suffer data loss or financial loss. In this blog, we shine a light on two new forms of cyberattack causing havoc among the innocent: the "Daam" malware and a new malicious app.
Daam Botnet
Since 2021, the Daam Android botnet has been used to gain unauthorised access to targeted devices, and cybercriminals use it to carry out a range of destructive actions. Using the botnet's APK binding service, threat actors can combine malicious code with a legitimate application. The botnet's functions include keylogging, ransomware, recording of VoIP calls, runtime code execution, browser history collection, incoming call recording, theft of PII data, opening of phishing URLs, photo capture, clipboard data theft, and toggling of WiFi and mobile data. Daam tracks user activity using the Accessibility Service and stores recorded keystrokes, together with the name of the application package, in a database. It also contains a ransomware module that encrypts and decrypts data on the infected device using the AES algorithm.
Additionally, the botnet uses the Accessibility Service to monitor the VoIP call features of social media apps such as WhatsApp, Skype, and Telegram; when a user engages with these features, the malware begins recording audio.
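To make the reference to "the AES algorithm" concrete, the snippet below shows what symmetric AES encryption of data looks like using Python's `cryptography` library. This is a neutral illustration of the cipher itself, not the botnet's actual code: without the randomly generated key, the ciphertext cannot be recovered, which is exactly what makes ransomware encryption so damaging for victims.

```python
# Illustration of AES (here AES-256-GCM) encryption/decryption using the
# "cryptography" package (pip install cryptography). This shows the cipher
# the report refers to; it is NOT the botnet's code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit symmetric key
nonce = os.urandom(12)                     # unique 96-bit nonce per message
aesgcm = AESGCM(key)

plaintext = b"example file contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # encrypt + authenticate
recovered = aesgcm.decrypt(nonce, ciphertext, None)   # impossible without key

assert recovered == plaintext
print("Ciphertext (unreadable without the key):", ciphertext.hex()[:48], "...")
```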
The Malware
CERT-In, the national nodal agency that responds to computer security incidents, reports that Daam attaches itself to various Android APK files to gain access to a phone. It is distributed through third-party websites, and files on the infected phone are encrypted using the AES algorithm.
The malware can reportedly access call recordings and contacts, gain access to the camera, change passwords, capture screenshots, steal SMS messages, download and upload files, and perform a variety of other actions.

Safeguards and Guidelines by CERT-In
CERT-In has released guidelines for combating this malware, issued in the public interest. Its recommendations are as follows:
- Download apps only from official app stores to limit the risk of potentially harmful apps.
- Before downloading an app, always read its details and user reviews, and grant only the permissions that are relevant to the app's purpose.
- Install Android updates solely from Android device vendors as they become available.
- Avoid visiting untrustworthy websites or clicking on untrustworthy links.
- Install anti-virus and anti-spyware software and keep it up to date.
- Be cautious of senders whose numbers do not look like genuine mobile numbers.
- Conduct sufficient research before clicking on a link supplied in a message.
- Only click on URLs that clearly display the website domain; avoid shortened URLs, particularly those from services such as bit.ly and tinyurl (see the link-expansion sketch after this list).
- Use safe browsing and filtering tools within antivirus software, firewalls, and filtering services.
- Before providing sensitive information, look for a valid encryption certificate by checking for the green lock icon in your browser's URL bar.
- Report any 'unusual' activity in a bank account to the bank immediately.
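As a practical aid to the guidance on shortened URLs above, the sketch below expands a short link and reveals its true destination domain before you decide to visit it. It uses the widely available `requests` library; the example short link is a placeholder.

```python
# Expand a shortened URL to reveal its true destination before visiting.
# Requires the "requests" package (pip install requests).
from urllib.parse import urlparse
import requests

def expand_url(short_url: str) -> str:
    """Follow redirects without downloading the page body and
    return the final URL the short link points to."""
    response = requests.head(short_url, allow_redirects=True, timeout=10)
    return response.url

if __name__ == "__main__":
    final = expand_url("https://bit.ly/example")  # placeholder short link
    domain = urlparse(final).netloc
    print(f"This link actually leads to: {domain} ({final})")
    if not final.startswith("https://"):
        print("Warning: destination is not served over HTTPS.")
```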
New Malicious App
In remote parts of Jharkhand, a new form of malicious application is being circulated among people under the pretext of a bank account closure. Bad actors have long used messaging platforms like WhatsApp and Telegram to circulate malicious links among unaware and undereducated people to dupe them of their hard-earned money.
They send an ordinary-looking message on WhatsApp or Telegram claiming that the user holds an account with ICICI Bank and that, due to an irregularity with the credentials, the account is being deactivated. They then ask the user to reactivate the account by uploading their PAN card to an application. This app is in fact malicious: it harvests the user's personal credentials and forwards them to the bad actors via text message, allowing them to bypass the bank's two-factor authentication and drain money from the account. The Jharkhand Police cyber cells have registered numerous FIRs pertaining to this type of cybercrime and are conducting full-scale investigations to apprehend the criminals.
Conclusion
Malware and phishing attacks have gained momentum in recent years and have become major contributors to the country's cybercrime tally. The Daam malware is one example brought to light by CERT-In's timely action, but many more such strains are deployed by bad actors, and we as netizens need to follow best practices to keep such criminals at bay. Phishing crimes often succeed by exploiting vulnerabilities and social engineering, so raising awareness is the need of the hour to safeguard the population at large.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
Background
According to the WHO's World Mental Health Report of 2022, 1 in 8 people globally is estimated to be living with some form of mental health disorder. The need for mental health services worldwide is high, but the care ecosystem is inadequate in both availability and quality. In India, there are an estimated 0.75 psychiatrists per 100,000 people, and only about 30% of people with mental health conditions receive help. With social stigma around mental health slowly thawing, especially among younger demographics, and support services largely confined to urban Indian centres, demand in the telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and errors or oversights by AI systems can exacerbate their distress for some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of care expectations and treatments they can offer, can jeopardize user health. Platforms providing such services should be mandated to lay disclaimers about the limitations of the therapeutic relationship between the platform and its users in a manner that is easy to understand.
- Improved Algorithm Design: Training data for these models must undertake regular updates and audits to enhance their accuracy, incorporate contextual socio-cultural factors for profile analysis, and use feedback loops from customers and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots supplement treatment rather than replace human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required (a minimal escalation sketch follows this list).
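As a sketch of the escalation idea above: a supplementary chatbot can scan each incoming message for risk indicators and hand the conversation to a human before attempting any automated reply. The keyword list and handoff logic below are simplified placeholders for illustration, not a clinically validated screening method.

```python
# Minimal sketch of a human-escalation gate for a mental health chatbot.
# The keyword list is an illustrative placeholder, not a validated
# clinical screening instrument.

RISK_KEYWORDS = ["suicide", "self-harm", "hurt myself", "end my life"]

def needs_human(message: str) -> bool:
    """Flag messages that should bypass the bot and reach a human."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

def handle(message: str) -> str:
    if needs_human(message):
        # In a real system, a function would page on-call staff here.
        return ("It sounds like you may need immediate support. "
                "Connecting you with a trained counselor now.")
    return "Bot reply: thanks for sharing. Can you tell me more?"

if __name__ == "__main__":
    print(handle("I have been feeling low lately"))
    print(handle("I want to end my life"))
```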
Conclusion
It is important to recognize that, so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition, play an educational role, nudge them in the right direction, and assist both the practitioner and the client/patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing, and arguably already critical, global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full