#FactCheck - Viral Video of US President Biden Dozing Off during Television Interview is Digitally Manipulated and Inauthentic
Executive Summary:
The claim that a video shows US President Joe Biden dozing off during a television interview is false: the video is digitally manipulated. The original footage is from a 2011 incident in which actor and singer Harry Belafonte appeared to fall asleep during a live satellite interview with KBAK - KBFX - Eyewitness News. A thorough analysis of keyframes from the viral video reveals that US President Joe Biden’s image was superimposed onto Harry Belafonte’s video. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.

Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.


Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
We found another video uploaded on Oct 18, 2011, by the official channel of KBAK - KBFX - Eyewitness News. The title of the video reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”
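The keyframe split that InVID performs interactively can also be scripted. A minimal sketch that builds an ffmpeg command for the job (the file names are illustrative, and ffmpeg must be installed separately):

```python
import shlex

def keyframe_cmd(video_path: str, out_dir: str, fps: float = 1.0) -> list:
    """Build an ffmpeg command that saves `fps` frames per second of the
    video as numbered JPEGs, ready for reverse image search."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps={fps}",   # keep this many frames per second
        "-q:v", "2",           # high JPEG quality
        f"{out_dir}/frame_%04d.jpg",
    ]

# Print the command so it can be inspected or run in a shell
print(shlex.join(keyframe_cmd("viral_clip.mp4", "frames")))
```

Each extracted JPEG can then be fed to a reverse image search engine to locate earlier uploads of the same footage.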

The video looks similar to the recent viral one, and the TV anchor can be heard saying the same thing as in the viral video. Taking a cue from this, we also ran keyword searches for credible sources and found a news article posted by Yahoo Entertainment featuring the same video uploaded by KBAK - KBFX - Eyewitness News.

Thorough investigation through reverse image search and keyword search reveals that the recent viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent the context. The original video dates back to 2011, and the person in the TV interview is American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. The video originates from a 2011 incident involving American singer and actor Harry Belafonte and has been altered to falsely show US President Joe Biden. It is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
With cyber-crimes growing alongside ever-evolving technology, a new kind of attack is on the rise, and it's not in your inbox or on your computer: it's targeting your phone, especially your smartphone. Cybercriminals are expanding their reach in India with a new text-messaging fraud aimed at individuals. The Indian Computer Emergency Response Team (CERT-In) has warned against "smishing," or SMS phishing.
Understanding Smishing
Smishing is a combination of the terms "SMS" and "phishing." It entails sending false text messages that appear to be from reputable sources such as banks, government organizations, or well-known companies. These communications frequently generate a feeling of urgency in their readers, prompting them to click on harmful links, expose personal information, or conduct financial transactions.
When hackers "phish," they send phony emails hoping to trick the receiver into clicking a dangerous link. Smishing simply uses text messaging rather than email. In essence, these attackers are out to steal your personal information to commit fraud or other cybercrimes. This generally means stealing money - usually your own, but occasionally your company's as well.
Cybercriminals typically use the following tactics to lure victims and steal information:
- Malware: The smishing URL may trick you into downloading malicious software onto your phone. This SMS malware can masquerade as a legitimate app, deceiving you into entering sensitive information that is then transmitted to the crooks.
- Malicious website: The URL in the smishing message may direct you to a bogus website that requests sensitive personal information. Cybercriminals use custom-made rogue sites designed to look like legitimate ones, making it easier to steal your information.
Smishing text messages often appear to come from your bank, asking you to share sensitive personal information, ATM numbers, or account details. Mobile device cybercrime is increasing along with mobile device usage, and since texting is the most common use of smartphones, a few additional factors make this an especially pernicious security issue. Let's go over how smishing attacks operate.
Modus Operandi
Cyber crooks commit the fraud via SMS. By assuming the identity of someone trusted, smishing attackers can use social engineering techniques to sway a victim's decision-making. Three factors drive this deception:
- Trust: By posing as a legitimate individual or organization, cyber crooks naturally lower a person's defenses against threats.
- Context: Using a circumstance that is relevant to the target helps an attacker craft an effective disguise. The message feels personalized, which helps it overcome any suspicion that it is spam.
- Emotion: The tone of the SMS is critical; it makes victims think the matter is urgent and requires rapid action. Using these tactics, attackers craft messages that compel the receiver to act.
Typically, attackers want the victim to click a URL in the text message, which leads to a phishing tool that asks for sensitive information. The phishing tool frequently takes the form of a website or app that also assumes a phony identity.
How does Smishing Spread?
As noted earlier, smishing attacks are delivered through ordinary text messages, and they typically appear to come from known sources. People are less careful while on their phones: many believe their cell phones are more secure than their desktops. However, smartphone security has limits and cannot always guard against smishing directly.
Phones are the target regardless of platform. While Android smartphones dominate the market and are a prime target for malicious text messages, iOS devices are just as vulnerable. Although Apple's iOS has a strong reputation for security, no mobile operating system can protect you from phishing-style attacks on its own. A false sense of security, regardless of platform, can leave users especially exposed.
Kinds of smishing attacks
Some common types of smishing attacks include:
- COVID-19 Smishing: In April 2020, the Better Business Bureau observed an increase in reports of US government impersonators sending text messages asking consumers to take a mandatory COVID-19 test via a linked website. Smishing attacks of this kind can easily evolve, as feeding on pandemic fears is an effective way to victimize the public.
- Gift Smishing: Giveaways, shopping rewards, or any number of other free offers of services or products, supposedly from a reputable company. Attackers frame the offer as limited-time or exclusive, and make it so lucrative that the target gets excited and falls into the trap.
CERT Guidelines
CERT-In has shared some steps to avoid falling victim to smishing:
- Never click on any suspicious link in SMS, social media chats, or posts.
- Use online resources to validate shortened URLs.
- Always check the link before clicking.
- Use updated antivirus and antimalware tools.
- If you receive any suspicious message pretending to be from a bank or institution, immediately contact the bank or institution.
- Use a separate email account for personal online transactions.
- Enforce multi-factor authentication (MFA) for emails and bank accounts.
- Keep your operating system and software updated with the latest patches.
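Some of these checks can be automated before a link is ever opened. The sketch below applies a few simple heuristics to a link received over SMS; the shortener list is illustrative and far from exhaustive:

```python
from urllib.parse import urlparse

# Common link shorteners often abused in smishing (illustrative, not exhaustive)
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}

def sms_link_red_flags(url: str) -> list:
    """Return a list of heuristic red flags for a link received via SMS."""
    # urlparse needs a scheme to recognise the host part
    parsed = urlparse(url if "//" in url else "http://" + url)
    host = (parsed.hostname or "").lower()
    flags = []
    if parsed.scheme == "http":
        flags.append("plain HTTP, not HTTPS")
    if host in SHORTENERS:
        flags.append("shortened URL hides the real destination")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode domain (possible lookalike)")
    return flags
```

An empty list does not mean a link is safe; it only means none of these coarse checks fired, so the other precautions above still apply.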
Conclusion
Smishing uses fraudulent mobile text messages to trick people into downloading malware, sharing sensitive data, or paying cybercriminals money. With the latest technological developments, it has become vital to stay vigilant in the digital era, protecting not only your computers but also the devices that fit in the palm of your hand; the CERT-In warning plays a vital role in this. Awareness and best practices are pivotal in safeguarding yourself from evolving threats.
Reference
- https://www.ndtv.com/india-news/government-warns-of-smishing-attacks-heres-how-to-stay-safe-4709458
- https://zeenews.india.com/technology/govt-warns-citizens-about-smishing-scam-how-to-protect-against-this-online-threat-2654285.html
- https://www.the420.in/protect-against-smishing-scams-cert-in-advice-online-safety/

The Ghibli trend has been in the news for the past couple of weeks for multiple reasons, good and bad. The nostalgia that everyone holds for the art form has made people turn a blind eye to what the trend means for the artists who painstakingly create the art. Open-source platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading the artists' rights. The artistic community has reached a point where it is questioning its own ability to create, when that work can be recreated by this software in a couple of seconds and without any thought as to what it is doing. OpenAI's update to ChatGPT makes it simple for users to turn anything from personal pictures to movie scenes into illustrations in the style created by Hayao Miyazaki. These advances in AI-generated art, including Ghibli-style art, raise critical questions about artistic integrity, intellectual property, and data privacy risks.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers and enabled more people to engage in artistic expression. AI allows people to create appealing art regardless of their artistic ability. The ChatGPT update has democratized art: the user's skill no longer matters. It makes art accessible, efficient, and open to creative experimentation for many.
Unfortunately, these developments also pose challenges for the original artistry and the labour of human creators. The concern doesn't just stop at AI replacing artists, but also about the potential misuse it can lead to. This includes unauthorized replication of distinct styles or deepfake applications. When it is used ethically, AI can enhance artistic processes. It can assist with repetitive tasks, improving efficiency, and enabling creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue human artists' work, create copyright issues, and even pose data privacy risks. AI models trained on art without authorization can be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe that AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts, producing distorted or unethical imagery that contradicts artistic intent and cultural values, further highlighting the critical need for human oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement. A further issue most users do not consider is whether the personal content (personal pictures, in this case) they upload to AI models puts their privacy at risk. Beyond this, AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art. Such a framework should also ensure that AI models are trained in a manner that is fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be institutionalized through ethical AI usage guidelines for platforms that host AI-generated art.
Conclusion: What Can Potentially Be Done?
AI is not a replacement for human effort; it exists to ease it. We need to promote a balanced approach to AI that protects the integrity of artists while continuing to foster innovation. Labelling AI content, and ensuring it is disclosed as AI-generated, is the first step. Next, human artists whose work an AI model is trained on should be fairly compensated, and copyright laws should be strengthened to address AI-generated content. There is an increasing need for global AI ethics guidelines ensuring transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industry to work collaboratively with regulators to ensure the responsible use of AI.
References
- https://medium.com/@haileyq/my-experience-with-studio-ghibli-style-ai-art-ethical-debates-in-the-gpt-4o-era-b84e5a24cb60
- https://www.bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement

YouTube is one of the best forums for many video producers, and it offers a real chance of generating substantial profits. YouTube content producers compete to get the most views, likes, comments, and subscribers for their videos and channels. As a result, some people use YouTube bots to artificially raise their rankings on the platform, which may help them gain more organic views and reach a larger audience. However, this strategy is widely seen as unfair and can violate YouTube's terms of service.
As YouTube grows in popularity, so does the usage of YouTube bots. These bots are software programs that automate operations on the YouTube platform, such as watching, liking, or disliking videos, subscribing to or unsubscribing from channels, posting comments, and adding videos to playlists. YouTube bots have been around for a while: many YouTubers use these programs to inflate the view counts on their videos and channels, which helps them rank higher in YouTube's algorithm. Now, researchers have discovered a new bot that steals private information from YouTube users' accounts.
CRIL (Cyble Research and Intelligence Labs) has been monitoring new and active malware families, and it has discovered a new YouTube bot malware capable of viewing, liking, and commenting on YouTube videos. The malware can also steal sensitive information from browsers and act as a bot that accepts orders from a Command and Control (C&C) server to carry out other harmful operations.
The Bot Insight
This YouTube bot has the same capabilities as other YouTube bots, including the ability to view, like, and comment on videos. It can also steal private data from browsers and take commands from a Command and Control (C&C) server for various malicious purposes. Cyble researchers examined a sample with the SHA256 hash e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312 and found that it is a 32-bit executable built with the .NET compiler.
- The malware runs an anti-VM check as soon as it is executed, to thwart researchers' attempts to detect and analyze it in a virtual environment.
- It stops execution if it finds that it is running in a controlled environment. Otherwise, it carries out the tasks specified in its argument strings.
- The malware also creates a mutex, copies itself to the %appdata% folder as AvastSecurity.exe, and then runs that copy via cmd.exe.
- It creates a task scheduler entry, which helps ensure the malware persists across reboots.
- The AvastSecurity.exe process harvests cookies, autofill data, and login credentials from the Chromium-based browsers installed on the victim's system.
- To view the chosen video, the malware runs its YouTube Playwright function, passing the previously mentioned arguments along with the browser's path and the stolen cookie data.
- The YouTube Playwright function launches the browser environment with the specified parameters and automates actions such as watching, liking, and commenting on YouTube videos. The feature depends on Microsoft's Playwright toolkit.
- The malware connects to a C2 server and receives instructions to delete its scheduled task entry and terminate its own process, exfiltrate log files to the C2 server, download and run other files, and start or stop watching a YouTube video.
- It also verifies that the victim's PC has the required dependencies installed, including the Playwright package and the Chrome browser. When it receives the "view" command, it downloads and installs these dependencies if they are missing.
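On the defensive side, the dropped filename above gives responders a simple indicator to sweep for. A minimal detection sketch (the filename is the one reported by CRIL; the search root defaults to %APPDATA% on Windows and can be overridden, e.g. for testing):

```python
import os
from pathlib import Path
from typing import List, Optional

# Filename the bot reportedly uses for the copy of itself (IOC from the CRIL report)
SUSPECT_NAME = "AvastSecurity.exe"

def find_persistence_artifact(root: Optional[str] = None) -> List[Path]:
    """Recursively search the given directory (default: %APPDATA%)
    for the reported dropper filename."""
    base = root or os.environ.get("APPDATA")
    if not base or not Path(base).is_dir():
        return []  # not on Windows, or APPDATA unset
    return list(Path(base).rglob(SUSPECT_NAME))
```

A hit is only an indicator, not proof of infection: the file's hash should be compared against the published IOCs before any remediation, since a legitimate product could plausibly use a similar name.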
Recommendations
The following is a list of some of the most critical cybersecurity best practices that serve as the first line of defense against intruders. We propose that our readers follow the advice provided below:
- Avoid downloading pirated software from warez/torrent websites. Such malware is commonly found in "Hack Tools" promoted on websites such as YouTube, pirate sites, etc.
- When feasible, use strong passwords and impose multi-factor authentication.
- Enable automatic software updates on your laptop, smartphone, and other linked devices.
- Use a reputable antivirus and internet security software package on your linked devices, such as your computer, laptop, and smartphone.
- Avoid clicking on suspicious links and opening email attachments without verifying that they are legitimate. Educate staff members on guarding against threats like phishing and unsafe URLs.
- Block URLs, such as Torrent/Warez sites, that might be used to propagate malware. To prevent malware or threat actors (TAs) from exfiltrating data, monitor beaconing at the network level.
Conclusion
Using YouTube bots may be a seductive strategy for content producers looking to boost their rankings and expand their viewership on the site. However, the use of bots is generally regarded as unfair and may violate YouTube's terms of service. Utilizing YouTube bots carries additional risk because they might be detected, which could lead to suspension or termination of the user's account. This pressing issue is best mitigated through awareness drives and surveys that identify the core points of contention. Nonprofits and civil society organizations can bridge the gap between the tech giant and end users to spread awareness of these little-known bots.