# FactCheck: AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image falsely claims to show President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has determined that the photo was produced by generative AI and is not real; multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image claiming to show US President Joe Biden in a military outfit during a meeting with military officials has been created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.

Similar Post:

Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest this is an AI-generated image.

Firstly, the eyes of US President Joe Biden are fully black; secondly, the military official's face is blended; thirdly, the phone is standing upright without any support.
We then ran the image through an AI image-detection tool.

The tool predicted 4% human and 96% AI, which indicates that the image is deepfake content.
We then checked it with another tool, the Hive Detector.

The Hive Detector classified the image as 100% AI-generated, confirming that it is likely deepfake content.
Conclusion:
Thus, the growth of AI-produced content poses a challenge in distinguishing fact from fiction, particularly in the sphere of social media. The case of the fake photo supposedly showing President Joe Biden emphasizes the need for critical thinking and verification of information online. With technology constantly evolving, it is of great importance that people remain watchful and use verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-produced content should be undertaken to promote a more aware and digitally literate society.
- Claim: A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake
Related Blogs

Introduction
Robotic dogs, or robodogs, are created to resemble dogs in behaviour and appearance, usually featuring canine traits such as barking and tail-wagging. Examples include RHex (a hexapod robot) and LittleDog and BigDog (both created by Boston Dynamics). Robodogs, on the whole, can even respond to commands and look at a person with large LED-lit puppy eyes.
A four-legged robotic solution recently completed its first successful radiation protection test inside the largest experimental area at CERN, the European Organization for Nuclear Research. Each robot created at CERN is carefully crafted to meet exceptional challenges, and the designs complement each other. Unlike the previous wheeled, tracked, or monorail robots, the robodogs will be capable of penetrating unexplored parts of the caverns, expanding the spectrum of surroundings that CERN robots can access. Incorporating the robodog alongside the existing monorail robots in the Large Hadron Collider (LHC) tunnel will also expand the range of places available for monitoring and supervision, improving the safety and efficiency of CERN's operations. Lenovo, too, has designed a six-legged robot, the "Daystar Bot GS", to be launched this year, which promises "comprehensive data collection."
Use of Robodogs in diverse domains
Thanks to advances in Artificial Intelligence (AI), robodogs can be a boon for those with special requirements. The advantage of AI lies in the dependability of its features, which can be programmed to respond to commands specific to the user.
In the context of health and well-being, they can be useful if programmed to care for a person with distinct or special requirements, such as an elderly or visually impaired person. For this reason, they are sometimes considered more advantageous than real dogs. Stanford researchers have recently designed robodogs that can perform several physical activities, including dancing, and that may one day help comfort pediatric patients during their hospital stays. Similarly, the robodog "Pupper" is a revamped version of another robotic dog designed at Stanford called "Doggo", an open-source bot with 3D-printed elements that one could build on a fairly small budget. Both were created to interact with humans. Furthermore, robots as companions are a more comfortable leap for the Japanese. The oldest and most successful social robot in Japan is called "Paro"; resembling an ordinary plush toy, it can help treat depression, stress, anxiety, and mood swings. Since 1998, several Paro robots have been exported overseas and put into service globally, reducing stress among children in ICUs, treating American veterans suffering from Post-Traumatic Stress Disorder (PTSD), and assisting dementia patients.
Post-pandemic, Japanese people experiencing loneliness and isolation have been turning to social robots for comfort and healing. Likewise, at a cafe in Japan, the AI-driven robot dog "Aibo" has pawed its way into the minds and hearts of its proud owners. Robots are even replacing the conventional class guinea pig or bunny at Moriyama Kindergarten in the central Japanese city of Nagoya, where teachers report that the bots reduce stress and teach kids to be more humane.
In the security and defence domain, the unique skills of robodogs allow them to be used in hazardous and challenging circumstances. They can navigate rugged topography to reach stranded individuals after natural catastrophes, and they can assist with search and rescue procedures, surveillance, and other situations that would be dangerous for humans. Researchers are still fine-tuning the underlying algorithms, developing the technology on affordable off-the-shelf robots that are already functional. Robodogs have further been used to provide surveillance in hostage crises, defuse bombs, and even use lethal force to stop attackers. Similarly, a breakthrough in AI being tested by the Australian military reportedly allows soldiers to control robodogs solely with their minds. Cities like St. Petersburg, Florida, also seem bound to keep police robodogs. The U.S. Department of Homeland Security is further exploring plans to deploy robot dogs at the borderlands, and the New York City Police Department (NYPD) intends to once again deploy four-legged robodogs to deal with high-risk circumstances like hostage negotiations. The NYPD has previously employed similar robodogs for high-octane duties, examining unsafe environments to which human officers should not be exposed. The U.S. Marine Corps is additionally experimenting with a new breed of robotic canine that can be helpful on the battleground, enhance the safety and mobility of soldiers, and aid in other tasks. The Unitree Go1 robot dog (nicknamed GOAT, for Grounded Open-Air Transport) tested by the Marines is a four-legged machine with a built-in AI system that can be equipped to carry an infantry anti-armour rocket launcher on its back. The GOAT robot dog is designed to help the Marines move hefty loads, analyse topography, and deliver fire support in distant and dangerous places.
On the other hand, robodogs pose ethical and moral predicaments regarding who is accountable for their actions and how to ensure their adherence to the laws of warfare. They also raise security and privacy concerns about how to safeguard the data they collect and prevent hacking or sabotage.
Conclusion
Teaching robots to traverse the world has conventionally been an enormous challenge. Though manufacturing is increasing worldwide, a robodog is still simply a machine and can never replace the feeling of owning a real dog. Designers state that intelligent social robots will never replace humans, though robots provide the promise of social harmony without social contact. They may also be incapable of managing complicated or unforeseen circumstances that require instinct or human decision-making. Nevertheless, owning robodogs is expected to become even more common and cost-effective in the coming decades as they advance, with new algorithms being tested and implemented.
References:
- https://home.cern/news/news/engineering/introducing-cerns-robodog
- https://news.stanford.edu/2023/10/04/ai-approach-yields-athletically-intelligent-robotic-dog/
- https://nypost.com/2023/02/17/combat-ai-robodogs-follow-telepathic-commands-from-soldiers/
- https://www.popsci.com/technology/parkour-algorithm-robodog/
- https://ggba.swiss/en/cern-unveils-its-innovative-robodog-for-radiation-detection/
- https://www.themarshallproject.org/2022/12/10/san-francisco-killer-robots-policing-debate
- https://www.cbsnews.com/news/robo-dogs-therapy-bots-artificial-intelligence/
- https://news.stanford.edu/report/2023/08/01/robo-dogs-unleash-fun-joy-stanford-hospital/
- https://www.pcmag.com/news/lenovo-creates-six-legged-daystar-gs-robot
- https://www.foxnews.com/tech/new-breed-military-ai-robo-dogs-could-marines-secret-weapon
- https://www.wptv.com/news/national/new-york-police-will-use-four-legged-robodogs-again
- https://www.dailystar.co.uk/news/us-news/creepy-robodogs-controlled-soldiers-minds-29638615
- https://www.newarab.com/news/robodogs-part-israels-army-robots-gaza-war
- https://us.aibo.com/

Introduction
In the evolving landscape of cybercrime, attackers are not only becoming more sophisticated in their approach but also more adept in their infrastructure. The Indian Cybercrime Coordination Centre (I4C) has issued a warning about the use of ‘disposable domains’ by cybercriminals. These are short-lived websites designed to mimic legitimate platforms, deceive users, and then disappear quickly to avoid detection and legal repercussions.
Although they may appear harmless at first glance, disposable domains form the backbone of countless online scams, phishing campaigns, malware distribution schemes, and disinformation networks. Cybercriminals use them to host fake websites, distribute malicious files, send deceptive emails, and mislead unsuspecting users, all while evading detection and takedown efforts.
As India’s digital economy grows and more citizens, businesses, and public services move online, it is crucial to understand this hidden layer of cybercrime infrastructure. Greater awareness among individuals, enterprises, and policymakers is essential to strengthen defences against fraud, protect users from harm, and build trust in the digital ecosystem.
What Are Disposable Domains?
A disposable domain is a website domain that is registered to be used temporarily, usually for hours or days, typically to evade detection or accountability.
These domains are inexpensive, easy to obtain, and can be set up with minimal information. They are often bought in bulk through domain registrars that do not strictly verify ownership information, sometimes using stolen credit cards or cryptocurrencies to remain anonymous. They differ from legitimate temporary domains used for testing or development in one significant aspect, which is ‘purpose’. Cybercriminals use disposable domains to carry out malicious activities such as phishing, sextortion, malware distribution, fake e-commerce sites, spam email campaigns, and disinformation operations.
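The ‘purpose’ distinction above often leaves visible traces in the domain name itself: cheap bulk-registered TLDs combined with scam-bait keywords. The sketch below scores those traces heuristically; the TLD and keyword lists are illustrative assumptions only, and real systems would also weigh registration age, registrar reputation, and DNS behaviour, none of which are modelled here.

```python
import re

# Illustrative signal lists, not a real threat feed.
SUSPICIOUS_TLDS = {"xyz", "top", "store", "online"}       # cheap, bulk-registered
SCAM_KEYWORDS = {"gov", "secure", "verify", "verification",
                 "login", "invest", "bonds", "loans"}

def disposability_score(domain: str) -> int:
    """Crude risk score: +2 for a cheap TLD, +1 per scam-bait keyword."""
    domain = domain.lower().rstrip(".")
    labels = domain.split(".")
    score = 2 if labels[-1] in SUSPICIOUS_TLDS else 0
    # Split on dots and hyphens so "rbi-invest.store" yields "rbi", "invest", "store".
    score += sum(1 for token in re.split(r"[.-]", domain) if token in SCAM_KEYWORDS)
    return score
```

A score like this is trivially evaded by an attacker who knows the lists, which is why such heuristics only supplement blacklists and registration metadata rather than replace them.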
How Cybercriminals Utilise Disposable Domains
1. Phishing & Credential Stealing: Attackers tend to register lookalike domains that are similar to legitimate websites (e.g., go0gle-login[.]com or sbi-verification[.]online) and trick victims into entering their login credentials. These domains will be active only long enough to deceive, and then they will disappear.
2. Malware Distribution: Disposable domains are widely used for ransomware and spyware operations for hosting malicious files. Because the domains are temporary, threat intelligence systems tend to notice them too late.
3. Fake E-Commerce & Investment Scams: Cyber crooks clone legitimate e-commerce or investment sites, run ad campaigns, and trick victims into "purchasing" goods or investing in scams. The domain vanishes once the scam runs its course.
4. Spam and Botnets: Disposable domains assist in botnet command-and-control activities. They make it more difficult for defenders to block static IPs or trace the attacker's infrastructure.
5. Disinformation and Influence Campaigns: State-sponsored actors and coordinated troll networks use disposable domains to host fabricated news articles, fake government documents, and manipulated videos. When these sites are detected and taken down, they are quickly replaced with new domains, allowing the disinformation cycle to continue uninterrupted.
Why Are They Hard to Stop?
Registering a domain is inexpensive and quick, often requiring no more than an email address and payment. The difficulty lies in these easy domain registrations and the absence of worldwide enforcement. Domain registrars differ in how stringently they enforce Know-Your-Customer (KYC) standards. ICANN (Internet Corporation for Assigned Names and Numbers) has certain regulations in place, but enforcement is inconsistent. ICANN does require registrars to maintain accurate WHOIS information (the “Registrant Data Accuracy Policy”) and to act on abuse complaints. However, ICANN is not an enforcement agency: it oversees contracts with registrars but cannot directly police every registration. Cybercriminals exploit services such as:
- Privacy protection shields that conceal actual WHOIS information.
- Bulletproof hosting that evades takedown notices.
- Fast-flux DNS methods that rapidly alter IP addresses.
Additionally, the use of IDNs (Internationalised Domain Names) and homoglyph attacks enables attackers to register domains that are visually similar to legitimate ones (e.g., using Cyrillic characters in place of Latin ones).
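A minimal sketch of how a defender might catch such lookalikes, assuming a small hand-made homoglyph table and an illustrative brand list (real detectors use the full Unicode confusables data and far larger brand corpora):

```python
import unicodedata
from difflib import SequenceMatcher

# Hand-made homoglyph table (illustrative only). Keys include digits and
# Cyrillic letters that render like Latin ones.
HOMOGLYPHS = {"0": "o", "1": "l", "а": "a", "е": "e", "о": "o", "і": "i", "ѕ": "s"}

KNOWN_BRANDS = ["google.com", "sbi.co.in", "rbi.org.in"]  # illustrative list

def normalize(domain: str) -> str:
    """Lowercase, apply Unicode compatibility folding, and map homoglyphs."""
    domain = unicodedata.normalize("NFKD", domain.lower())
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in domain)

def flag_lookalikes(candidate: str, threshold: float = 0.75) -> list[str]:
    """Return the brands a candidate domain visually resembles but is not."""
    norm = normalize(candidate)
    return [brand for brand in KNOWN_BRANDS
            if candidate.lower() != brand
            and SequenceMatcher(None, norm, normalize(brand)).ratio() >= threshold]
```

With these assumptions, a homoglyph domain like go0gle-login.com normalizes to google-login.com and is flagged against google.com, while an unrelated domain passes clean. The similarity threshold is a tuning knob: too low floods analysts with false positives, too high misses creative misspellings.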
Real-World Example: India and the Rise of Fake Investment Sites
India has witnessed a wave of monetary scams connected with disposable domains. Hundreds of fake websites impersonating government loan schemes, banks, investment platforms, and crypto-exchanges were found on disposable domains such as gov-loans-apply[.]xyz, indiabonds-secure[.]top, or rbi-invest[.]store. Most of them placed paid advertisements on platforms such as Facebook or Google and harvested user information and payments, only to vanish within 48–72 hours. Victims had no avenue of proper recourse, and the authorities were left with a digital ghost trail.
How Disposable Domains Undermine Cybersecurity
- Bypass Blacklists: Constantly shifting domains evade static blacklists.
- Delay Attribution: Time is wasted pursuing non-existent owners or takedowns.
- Mass Targeting: A single actor can register thousands of domains and attack at scale.
- Undermine Trust: Everyday users become targets when genuine sites are duplicated convincingly.
Recommendations Addressing Legal and Policy Gaps in India
1. There is a need to establish a formal coordination mechanism between domain registrars and national CERTs such as CERT-In to enable effective communication and timely response to domain-based threats.
2. There is a need to strengthen the investigative and enforcement capabilities of law enforcement agencies through dedicated resources, training, and technical support to effectively tackle domain-based scams.
3. There is a need to leverage the provisions of the Digital Personal Data Protection Act, 2023 to take action against phishing websites and malicious domains that collect personal data without consent.
4. There is a need to draft and implement specific regulations or guidelines to address the misuse of digital infrastructure, particularly disposable and fraudulent domains, and close existing regulatory gaps.
What Can Be Done: CyberPeace View
1. Stronger KYC for Domain Registrations: Registrars selling domains to Indian users or based in India should conduct verified KYC processes, with legal repercussions for negligence.
2. Real-Time Domain Blacklists: CERT-In, along with ISPs and hosting companies, should operate and enforce a real-time blacklist of known scam domains.
3. Public Reporting Tools: Observers or victims should be capable of reporting suspicious domains through an easy interface (tied to cybercrime.gov.in).
4. Collaboration with Tech Platforms: Social media services and online ad platforms should filter out ads associated with disposable or spurious domains and report abuse data to CERT-In.
5. User Awareness: Netizens should be educated to check URLs thoroughly, avoid clicking unsolicited links, and verify the authenticity of websites.
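The real-time blacklist in point 2 can be sketched in a few lines. Everything below is an illustrative assumption: the feed entries, the 30-day expiry, and the in-memory dictionary all stand in for what would, operationally, be a shared feed coordinated among CERT-In, ISPs, and hosting companies.

```python
from datetime import datetime, timedelta, timezone

def _now() -> datetime:
    return datetime.now(timezone.utc)

# Illustrative feed: domain -> time it was reported.
BLACKLIST = {
    "gov-loans-apply.xyz": _now() - timedelta(hours=2),
    "indiabonds-secure.top": _now() - timedelta(days=5),
}

def parent_domains(domain: str):
    """Yield the domain and each registrable parent: a.b.c -> a.b.c, b.c."""
    parts = domain.lower().rstrip(".").split(".")
    for i in range(len(parts) - 1):          # stop before the bare TLD
        yield ".".join(parts[i:])

def is_blocked(domain: str, max_age: timedelta = timedelta(days=30)) -> bool:
    """Block if the domain, or any parent, has a recent blacklist entry.

    Entries expire after max_age: disposable domains are short-lived, so
    stale entries mostly add noise and false positives.
    """
    return any(
        _now() - BLACKLIST[suffix] <= max_age
        for suffix in parent_domains(domain)
        if suffix in BLACKLIST
    )
```

Matching on parent domains catches subdomains spun up under an already-reported registration (e.g., login.gov-loans-apply.xyz), which is exactly how disposable infrastructure is often reused before it is abandoned.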
Conclusion
Disposable domains have silently become the foundation of contemporary cybercrime. They are inexpensive, highly anonymous, and short-lived, which makes them a favoured weapon for cybercriminals ranging from solo spammers to nation-state operators. In an increasingly connected Indian society with high internet penetration, this poses an expanding threat to economic security, public confidence, and national resilience. Combating this problem will require a combination of technical defences, policy changes, public-private alliances, and end-user sensitisation. As India builds a Cyber Secure Bharat, monitoring and addressing disposable-domain abuse must be a top priority.
References
- https://www.bitcot.com/disposable-domains
- https://atdata.com/blog/evolution-of-email-fraud-rise-of-hyper-disposable-domains/
- https://www.cyfirma.com/research/scamonomics-the-dark-side-of-stock-crypto-investments-in-india/
- https://knowledgebase.constantcontact.com/lead-gen-crm/articles/KnowledgeBase/50330-Understanding-Blocked-Forbidden-and-Disposable-Domains?lang=en_US
- https://www.meity.gov.in/
- https://intel471.com/blog/bulletproof-hosting-fast-flux-dns-double-flux-vps
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies are capable of creating manipulated audio and video content, propagating political propaganda, defaming individuals, and inciting societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order: it has the potential to affect voter sentiment, erode faith in the election process, and even spark violence. Addressing misinformation involves expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, resulting in the exploitation of content that already exists on the internet. One prominent example is AI-powered bots flooding social media platforms with fake news at a scale and speed that make it impossible for humans to track, let alone verify, what is true or false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation concern elections, public health, and communal issues. These issues share one common factor: they evoke strong emotions, and so can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability, and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive volume of data produced, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies catering to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that is specific in its application to AI-generated content. It should include stricter penalties for the originators and disseminators of fake content, in proportion to its consequences, and should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs that empower individuals by training them to evaluate online content. Educational programs in schools and communities should teach critical thinking and media literacy skills, enabling individuals to better discern real from fake content.
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge of AI-generated misinformation and counteract it by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. AI technologies are misused by bad actors to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defense frameworks, and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62