Digitally Altered Photo of Rowan Atkinson Circulates on Social Media
Research Wing
Innovation and Research
PUBLISHED ON
Aug 5, 2024
Executive Summary:
A photo claiming to show Mr. Rowan Atkinson, the famous actor who played Mr. Bean, lying sick in bed is circulating on social media. However, this claim is false. The image is a digitally altered picture of Mr. Barry Balderstone from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. Reverse image searches and media reports confirm that the original photo is of Barry, not Rowan Atkinson. Furthermore, there are no reports of Atkinson being ill; he was recently seen attending the 2024 British Grand Prix. Thus, the viral claim is baseless and misleading.
Claims:
A viral photo shows Rowan Atkinson, aka Mr. Bean, lying on a bed in a sick condition.
When we received the posts, we first ran a keyword search based on the claim, but found no posts or reports supporting it. We did, however, find an interview video showing Mr. Atkinson attending the F1 race on July 7, 2024.
We then reverse-searched the viral image and found a news report with a photo that closely resembles it; the T-shirt appears to be the same in both images.
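Beyond eyeballing details like the T-shirt, a reproducible way to check that two images derive from the same photo is to compare perceptual hashes. Below is a minimal sketch in Python, assuming the third-party Pillow and imagehash packages; the file names are hypothetical placeholders for the viral image and the news-report image:

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file names standing in for the viral photo and the
# photo from the news report.
viral = imagehash.phash(Image.open("viral_photo.jpg"))
report = imagehash.phash(Image.open("news_report_photo.jpg"))

# Subtracting two hashes gives the Hamming distance between them:
# 0 means structurally identical; small values (roughly < 10) suggest
# the same underlying photo, even after light edits such as a face swap.
print("Hash distance:", viral - report)
```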
The man in this photo is Barry Balderstone, a civil engineer from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. According to the news report, Barry suffered from several illnesses, and his application for extensive healthcare reimbursement was rejected by the East Cheshire Clinical Commissioning Group.
Taking a cue from this, we then analyzed the image with an AI image-detection tool named TrueMedia. The tool found the image to be AI-manipulated: the original photo was altered by replacing the man’s face with that of Rowan Atkinson, aka Mr. Bean.
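TrueMedia is a hosted detector and its internals are not public. As a rough illustration of one classical forensic signal (not TrueMedia’s method), error level analysis (ELA) recompresses a JPEG and inspects where the image diverges; a spliced-in face often recompresses differently from the rest of the frame. A minimal sketch in Python using Pillow, with a hypothetical file name:

```python
# pip install Pillow
from PIL import Image, ImageChops

# Hypothetical file name for the image under examination.
img = Image.open("suspect_photo.jpg").convert("RGB")

# Re-save at a fixed JPEG quality and reopen the recompressed copy.
img.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg").convert("RGB")

# Pixel-wise difference between the original and the recompressed copy.
# Spliced regions (e.g. a pasted-in face) often recompress differently
# and show up as bright, localised patches in the difference map.
ela = ImageChops.difference(img, resaved)
print("Max per-channel error:", max(hi for _, hi in ela.getextrema()))
ela.save("ela_map.png")  # inspect visually
```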
Hence, it is clear that the viral image of a bedridden Rowan Atkinson is fake and misleading. Netizens should verify claims before sharing anything on the internet.
Conclusion:
Therefore, it can be concluded that the photo claiming to show Rowan Atkinson in a sick state is fake and was created by manipulating another man’s image. The original photo shows Barry Balderstone, who was diagnosed with stage 4 Parkinson’s disease and died in 2019. In fact, Rowan Atkinson appeared perfectly healthy at the recent 2024 British Grand Prix. It is important to check authenticity before sharing content, so as to avoid spreading misinformation.
Claim: A viral photo of Rowan Atkinson, aka Mr. Bean, lying on a bed in a sick condition.
In the digital age, misinformation and deceptive techniques permeate the fabric of the internet and threaten people’s safety and well-being. Recently, alarming fake information surfaced advertising a fake Government subsidy scheme in the name of India Post. It serves criminals who prey on people’s vulnerabilities, luring them with offers of help in exchange for their information. In this blog, we take a deep dive into one of the common fraud schemes of this kind, walk through the stages by which a person is deceived, and offer practical tips to avoid the fall.
Introduction:
Digital communication reaches individuals faster than ever, and misinformation and scam mails have accelerated their spread globally, leaving people increasingly susceptible to online scams. In India, a recent wave of fake news targets victims with deceptive claims of a Government subsidy, mainly in the name of India Post. These fraudulent schemes are frequently spread via social networks and messaging platforms, exploiting individuals’ trust in reputable institutions to commit fraud and collect private data.
Understanding the Claim:
A claim is circulating, purportedly on behalf of the national Government, offering a generous subsidy of $1,066 to deserving residents. Individuals are told they will receive the subsidy once they complete a questionnaire they receive through social media. The questionnaire appears designed to steal confidential information by taking advantage of naivety and carelessness.
The Deceptive Journey Unveiled:
Bogus Offer Presentation: The scheme lures people with a misleading message or advertisement purposely crafted to make them act immediately by instilling a sense of urgent need. Such messages usually combine persuasive language and official-looking material to create an illusion of authenticity.
Questionnaire Requirement: After landing on the attractive content, visitors are directed to fill in a questionnaire that is supposedly required for processing the financial assistance. The questionnaire requests information that is private in nature.
False Sense of Urgency: On top of this, a false deadline may be introduced to pressure people into compliance. The aim is to push individuals to hand over their information immediately, without thorough examination.
Data Harvesting Tactics: Beneath the apparent offer of financial help lies a vile motive: data harvesting. The facts collected through these questionnaires are priceless to scammers, who can exploit them for identity theft, financial crimes, and other malicious ends for a long time to come.
Analysis Highlights:
It is important to note that, at this point, there has been no official declaration or confirmation of such an offer from India Post or the Government. People must therefore be very careful when encountering such messages, because they are often employed as lures in phishing attacks or misinformation campaigns. Before engaging with or forwarding such claims, it is always advisable to verify the information through trustworthy sources, in order to protect oneself online and prevent the spread of false information.
The campaign is hosted on a third-party domain instead of any official Government website, which raises suspicion. The domain was also registered very recently.
Note: The cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website.
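Anyone can check a suspicious domain’s age with a WHOIS lookup. Here is a minimal sketch in Python, assuming the third-party python-whois package; the domain name is a hypothetical placeholder, since the fraudulent domain is not reproduced here:

```python
# pip install python-whois
import whois

# Hypothetical placeholder for the suspicious domain being checked.
record = whois.whois("example-subsidy-offer.com")

# creation_date may be a single datetime or a list, depending on the registrar.
created = record.creation_date
if isinstance(created, list):
    created = created[0]

print("Registered on:", created)
# A domain registered only days or weeks ago is a strong red flag for a
# site claiming to run a long-standing government subsidy programme.
```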
CyberPeace Advisory:
Verification and Vigilance: Be cautious and skeptical, and do not fall prey to this criminal act. Examine the claims made and the facts provided, and consult credible sources before disclosing anything.
Official Channels: Governments usually disseminate subsidies and assistance programmes through official websites and other established legal channels. Be cautious of schemes that do not follow these established protocols.
Educational Awareness: Raising awareness of online scams and fraudulent approaches through education must be considered a primary requirement. By empowering individuals with knowledge and skills, we can collectively prevent such schemes from spreading.
Reporting and Action: If you come across suspicious or fraudulent messages, alert the authorities and the relevant organizations immediately. Your swift action not only protects you but also helps others avoid the costs of related security compromises.
Conclusion:
The rise of the ‘Indian Post Countrywide - government subsidy’ fake news is a stern warning of the dangers present in today’s virtual ecosystem. Staying wise and sharp about scams, reacting quickly, and following the CyberPeace advisories above will help build a safer cyberspace for everyone. Likewise, the ability to judge critically and remain alert is important in defeating the variety of tricks offenders use to mislead people online.
With the development of technology, voice-cloning schemes are one issue that has recently come to light. As scammers adopt AI, their methods and plans for deceiving and scamming people have also changed. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. In this post, we look at the dangers and risks associated with AI voice-cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is AI-produced fake or altered audio, video, or film that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something that never happened; well-known examples include deep-voice impersonations of the American President. Such impersonation technology can be used maliciously, for instance in deep-voice fraud or to disseminate false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative; the cloned voice conveys a false sense of familiarity and urgency, earning the victim’s trust and raising the likelihood that they will fall for the hoax. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice-cloning technology to pose as people or entities and mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters craft audio recordings that impersonate genuine messages from organisations or people the victims trust; these can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio could lead to wrongful convictions or acquittals. In short, AI voice cloning gives con artists a potent tool for tricking and controlling victims. Every organisation and the general public must be informed of this technology’s risks and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a cloned voice to trick the victim into handing over money or private information. How can you protect yourself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
Steer clear of telemarketing calls
One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
Listen closely to the voice
If anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right could be a deep-voice fraud.
Verify the caller’s identity
It’s crucial to verify the caller’s identity in order to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to be sure they are who they say they are.
Never divulge confidential information
No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information this way; if a caller does, it is a warning sign that they are a scammer.
Report any suspicious activities
If you think you have fallen victim to a deep-voice fraud, inform the appropriate authorities. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech-recognition systems or making voice assistants sound more realistic, it may also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research into efficient techniques to identify and control the risks related to this technology is also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
False information spread on social media claiming that Flight Lieutenant Shivangi Singh, India’s first female Rafale pilot, had been captured by Pakistan during “Operation Sindoor”. The allegation is untrue and baseless: no credible or official confirmation supports it, and Singh is confirmed to be safe and actively serving. The rumour, likely originating from unverified sources, sparked public concern and underscored the serious threat fake news poses to national security.
Claim:
An X user posted, stating: “Initial image released of a female Indian Shivani singh Rafale pilot shot down in Pakistan”. The post falsely claimed that Flight Lieutenant Shivangi Singh had been captured and that her Rafale aircraft had been shot down by Pakistan.
After doing a reverse image search, we found an Instagram post about two Indian Air Force pilots, Wing Commander Tejpal (50) and trainee Bhoomika (28), who had ejected from a Kiran jet trainer during a routine training sortie from Bengaluru before it crashed near Bhogapuram village in Karnataka. The aircraft exploded upon impact, but both pilots were later found alive, though injured and exhausted.
We also found a YouTube channel showing that the video is old footage and not what it was claimed to be.
Conclusion:
The false claims about Flight Lieutenant Shivangi Singh being captured by Pakistan and her Rafale jet being shot down have been debunked. The image used was unrelated and showed IAF pilots from a separate training incident. Several media outlets also confirmed that the video made no mention of Ms. Singh’s arrest. This highlights the dangers of misinformation, especially concerning national security. Verifying facts through credible sources and avoiding the spread of unverified content is essential to maintaining public trust and protecting the reputation of those serving in the armed forces.
Claim: Flight Lieutenant Shivangi Singh was captured by Pakistan and her Rafale jet was shot down
Claimed On: Social Media
Fact Check: False and Misleading