#FactCheck: Fake Claim on Delhi Authority Culling Dogs After Supreme Court Stray Dog Ban Directive 11 Aug 2025
Executive Summary:
A viral claim alleges that, following the Supreme Court of India’s August 11, 2025 order on relocating stray dogs, authorities in Delhi NCR have begun mass culling. However, verification reveals the claim to be false and misleading. A reverse image search of the viral video traced it to older posts from outside India, likely linked to Haiti or Vietnam, as indicated by the use of Haitian Creole and Vietnamese in the respective posts. While the exact location cannot be independently verified, it is confirmed that the video is not from Delhi NCR and has no connection to the Supreme Court’s directive. Therefore, the claim lacks authenticity and is misleading.
Claim:
Several claims have been circulating since the Supreme Court of India, on 11th August 2025, ordered the relocation of stray dogs to shelters. The primary claim suggests that authorities, following the order, have begun mass killing or culling of stray dogs, particularly in areas like Delhi and the National Capital Region. This narrative intensified after several videos, purporting to show dead or mistreated dogs allegedly linked to the Supreme Court’s directive, began circulating online.

Fact Check:
After conducting a reverse image search using a keyframe from the viral video, we found similar videos circulating on Facebook. Upon analyzing the language used in one of the posts, it appears to be Haitian Creole (Kreyòl Ayisyen), which is primarily spoken in Haiti. Another similar video was also found on Facebook, where the language used is Vietnamese, suggesting that the post associates the incident with Vietnam.
However, it is important to note that while these posts point towards different locations, the exact origin of the video cannot be independently verified. What can be established with certainty is that the video is not from Delhi NCR, India, as is being claimed. Therefore, the viral claim is misleading and lacks authenticity.
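Reverse image searches of the kind used above rely on perceptual hashing: two visually similar frames produce nearly identical hashes even after recompression or small edits, while unrelated images do not. As a purely illustrative sketch (real tools decode actual image files and use much larger grids), here is a difference hash (dHash) in plain Python, operating on a toy grayscale image represented as a 2D list:

```python
# Difference hash (dHash): compare each pixel with its right-hand
# neighbour on a small grayscale grid. The resulting bit string is
# stable under brightness shifts and recompression, so near-duplicate
# frames hash almost identically.

def dhash(grid):
    """grid: 2D list of grayscale values. Returns a tuple of bits:
    1 where a pixel is brighter than its right neighbour."""
    bits = []
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return tuple(bits)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 4x4 "frames": a repost with small brightness changes,
# and an unrelated frame.
original = [[10, 20, 30, 40],
            [40, 30, 20, 10],
            [15, 25, 35, 45],
            [45, 35, 25, 15]]
repost = [[11, 20, 31, 40],
          [40, 29, 20, 10],
          [15, 25, 36, 45],
          [45, 35, 25, 16]]
unrelated = [[90, 10, 80, 20],
             [10, 90, 20, 80],
             [85, 15, 75, 25],
             [15, 85, 25, 75]]

d_similar = hamming(dhash(original), dhash(repost))
d_different = hamming(dhash(original), dhash(unrelated))
print(d_similar, d_different)
```

A small Hamming distance flags a probable repost of the same frame; a large one rules it out. This is the core idea behind matching a keyframe of a viral video against older uploads.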


Conclusion:
The viral claim linking the Supreme Court’s August 11, 2025 order on stray dogs to mass culling in Delhi NCR is false and misleading. Reverse image search confirms the video originated outside India, with evidence of Haitian Creole and Vietnamese captions. While the exact source remains unverified, it is clear the video is not from Delhi NCR and has no relation to the Court’s directive. Hence, the claim lacks credibility and authenticity.
- Claim: Viral fake claim that Delhi authorities are culling dogs after the Supreme Court directive of 11th August 2025 on stray dogs
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A photo circulating on social media claims to show Mr. Rowan Atkinson, the famous actor who played Mr. Bean, lying sick in bed. However, this claim is false. The image is a digitally altered picture of Mr. Barry Balderstone from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. Reverse image searches and news reports confirm that the original photo is of Barry, not Rowan Atkinson. Furthermore, there are no reports of Atkinson being ill; he was recently seen attending the 2024 British Grand Prix. Thus, the viral claim is baseless and misleading.

Claim:
A viral photo of Rowan Atkinson, aka Mr. Bean, lying on a bed in a sick condition.



Fact Check:
When we received the posts, we first ran a keyword search based on the claim but found no posts supporting it. However, we did find an interview video showing Rowan Atkinson attending an F1 race on July 7, 2024.

We then reverse-searched the viral image and found a news report with a photo closely resembling the viral picture; the T-shirt appears to be the same in both images.

The man in the photo is Barry Balderstone, a civil engineer from Bollington, England, who died in October 2019 due to advanced Parkinson’s disease. According to the news report, Barry suffered from several illnesses, and his application for extensive healthcare reimbursement was rejected by the East Cheshire Clinical Commissioning Group.
Taking a cue from this, we then analyzed the image with an AI image detection tool named TrueMedia. The tool found the image to be AI-manipulated: the original photo was altered by replacing the face with that of Rowan Atkinson, aka Mr. Bean.
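Once a candidate original photo has been found, a face-swap of this kind can often be localized by a simple pixel-wise comparison of the two images. The sketch below is a toy illustration only (images as small 2D grayscale lists; real workflows such as TrueMedia's are far more sophisticated and would first decode and align the actual images):

```python
# Pixel-wise difference: find the bounding box of the region where the
# suspected-manipulated image diverges from the candidate original.

def diff_region(img_a, img_b, threshold=10):
    """Return (top, left, bottom, right) of pixels whose absolute
    difference exceeds threshold, or None if the images match."""
    coords = [(r, c)
              for r, row in enumerate(img_a)
              for c, (a, b) in enumerate(zip(row, img_b[r]))
              if abs(a - b) > threshold]
    if not coords:
        return None
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    return (min(rs), min(cs), max(rs), max(cs))

# Toy example: a 6x6 "photo" and a copy with an altered 2x2 patch,
# standing in for a swapped face region.
original = [[50] * 6 for _ in range(6)]
altered = [row[:] for row in original]
for r in range(2, 4):
    for c in range(3, 5):
        altered[r][c] = 200  # tampered region

box = diff_region(original, altered)
print(box)
```

The returned bounding box points at the tampered patch, which in the real case corresponds to the replaced face.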



Hence, it is clear that the viral image claiming to show Rowan Atkinson bedridden is fake and misleading. Netizens should verify content before sharing anything on the internet.
Conclusion:
Therefore, it can be summarized that the photo claiming to show Rowan Atkinson in a sick state is fake and was created by manipulating another man’s image. The original photo features Barry Balderstone, who was diagnosed with stage 4 Parkinson’s disease and subsequently died in 2019. In fact, Rowan Atkinson appeared perfectly healthy recently at the 2024 British Grand Prix. It is important for people to check authenticity before sharing content so as to avoid spreading misinformation.
- Claim: A Viral photo of Rowan Atkinson aka Mr. Bean, lying on a bed in a sick condition.
- Claimed on: X, Facebook
- Fact Check: Fake & Misleading

Introduction
In today’s time, everything is online and the world is interconnected. Data breaches and cyberattacks have become a reality for organisations across industries. In a recent case, Scandinavian Airlines (SAS) experienced a cyberattack that resulted in the exposure of customer details, highlighting the critical importance of protecting customer privacy. The incident is a wake-up call for airlines and other businesses to evaluate their cybersecurity measures and learn valuable lessons to safeguard customers’ data. In this blog, we will explore the incident and discuss strategies for protecting customers’ privacy in this age of digitalisation.
Analysing the backdrop
The incident has been a shock for the aviation industry: SAS Scandinavian Airlines fell victim to a cyberattack that compromised consumer data. Let’s understand the motive of the cyber crooks and the techniques they used:
Motive Behind the Attack: Understanding the reasons that may have driven the criminals is critical to comprehending the context of the Scandinavian Airlines cyber assault. Financial gain, geopolitical conflicts, activism, and personal vendettas are common motivators for cybercriminals. Identifying the purpose of the assault can provide insight into the attacker’s aims and the possible impact on both the targeted organisation and its consumers.
Attack Vector and Techniques: Understanding the attack vector and strategies used by cyber attackers reveals the level of sophistication involved and the possible weaknesses in an organisation’s cybersecurity defences. The Scandinavian Airlines cyber assault might have involved phishing, spyware, ransomware, or the exploitation of software weaknesses. Analysing these tactics allows organisations to strengthen their security against similar assaults.
Impact on Victims: The victims of the Scandinavian Airlines (SAS) cyber attack, including customers and individuals related to the company, have suffered substantial consequences. Data breaches and cyberattacks have serious consequences due to the leak of personal information.
1) Financial Losses and Fraudulent Activities: One of the most immediate and upsetting consequences of a cyber assault is the possibility of financial loss. Exposed personal information, such as credit card numbers, can be used by hackers to carry out illegal activities such as unauthorised transactions and identity theft. Victims may experience financial difficulties and the need to spend time and money resolving these concerns.
2) Concerns About Privacy and Personal Security: A breach of personal data can significantly impact the privacy and personal security of victims. The disclosed information, including names, addresses, and contact details, might be exploited for nefarious purposes such as targeted phishing or physical harassment. Victims may have increased anxiety about their safety and privacy, which can disrupt their everyday lives and cause mental distress.
3) Reputational Damage and Trust Issues: The cyber attack may cause reputational harm to persons linked with Scandinavian Airlines, such as workers or partners. The breach may diminish consumers’ and stakeholders’ faith in the organisation, leading to a bad view of its capacity to protect personal information. This lack of trust might have long-term consequences for the impacted people’s professional and personal relationships.
4) Emotional Stress and Psychological Impact: The psychological impact of a cyber assault can be severe. Fear, worry, and a sense of violation induced by having personal information exposed can create emotional stress and psychological suffering. Victims may experience emotions of vulnerability, loss of control, and distrust toward digital platforms, potentially harming their overall quality of life.
5) Time and Effort Required for Remediation: Addressing the repercussions of a cyber assault demands significant time and effort from the victims. They may need to call financial institutions, reset passwords, monitor accounts for unusual activity, and use credit monitoring services. Resolving the consequences of a data breach may be a difficult and time-consuming process, adding stress and inconvenience to the victims’ lives.
6) Secondary Impacts: The impacts of an online attack could continue beyond the immediate implications. Future repercussions for victims may include trouble acquiring credit or insurance, difficulties finding future work, and continuous worry about exploiting their personal information. These secondary effects can seriously affect victims’ financial and general well-being.
Apart from this, lost trust takes time to rebuild.

Takeaways from this attack
The cyber-attack on Scandinavian Airlines (SAS) is a sharp reminder of the ever-present and growing menace of cybercrime. This event provides crucial insights that businesses and people can use to strengthen their cybersecurity defences. Below are the lessons learned from the Scandinavian Airlines cyber assault and the steps that may be taken to improve cybersecurity and reduce future risks:
Proactive Risk Assessment and Vulnerability Management: The cyber assault on Scandinavian Airlines emphasises the significance of regular risk assessments and vulnerability management. Organisations must proactively identify and fix possible system and network vulnerabilities. Regular security audits, penetration testing, and vulnerability assessments can help identify flaws before bad actors exploit them.
Strong security measures and best practices: To guard against cyber attacks, it is necessary to implement effective security measures and follow cybersecurity best practices. Lessons from the Scandinavian Airlines cyber assault emphasise the importance of effective firewalls, up-to-date antivirus software, secure setups, frequent software patching, and strong password rules. Using multi-factor authentication and encryption technologies for sensitive data can also considerably improve security.
Employee Training and Awareness: Human error is frequently a major factor in cyber assaults. Organisations should prioritise employee training and awareness programs to educate employees about phishing schemes, social engineering methods, and safe internet practices. By cultivating a culture of cybersecurity awareness, employees can become the first line of defence against possible attacks.
Data Protection and Privacy Measures: Protecting consumer data should be a key priority for businesses. Lessons from the Scandinavian Airlines cyber assault emphasise the significance of having effective data protection measures, such as encryption and access limits. Adhering to data privacy standards and maintaining safe data storage and transfer can reduce the risks connected with data breaches.
Collaboration and Information Sharing: The Scandinavian Airlines cyber assault emphasises the need for collaboration and information sharing among the cybersecurity community. Organisations should actively share threat intelligence, cooperate with industry partners, and stay current on developing cyber threats. Sharing information and experiences can help to build the collective defence against cybercrime.
Conclusion
The Scandinavian Airlines cyber assault is a reminder that cybersecurity must be a key concern for organisations and individuals alike. By learning from this incident and adopting the lessons above, organisations can improve their cybersecurity safeguards, proactively discover vulnerabilities, and respond effectively to prospective attacks. Building a strong cybersecurity culture, regularly upgrading security practices, and encouraging cooperation within the cybersecurity community are all critical steps toward a more robust digital world. Through constant monitoring and proactive action, we can aim to stay one step ahead of criminals and preserve our important information assets.
Introduction
AI-generated fake videos are proliferating on the internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to differentiate between genuine, altered, and fake content, as these AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake-generated videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations most affected by the misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that iteratively refine the fake content: generators are built and trained to produce the desired output, and the process is repeated many times, allowing the generator to improve the content until it appears realistic and nearly flawless. Deepfake videos are created using several specific approaches, some of which are:
- Lip syncing: This is the most common technique used in deepfakes. Here, a voice recording is synchronised with the mouth movements in the video, making it appear that the person in the video actually said those words.
- Audio deepfake: For audio-generated deepfakes, a generative adversarial network (GAN) is used to clone a person’s voice based on their vocal patterns, refining the output until the desired result is generated.
- Deepfakes have become so serious a threat that the technology could be used by bad actors or cyber-terrorist squads to push their geopolitical agendas. Looking at the present situation, the number of cases has doubled in the past few years, targeting children, women, and popular faces.
- Greater risk: Cases of deepfakes have risen sharply in the last few years. According to one survey, by the end of 2022, 96% of deepfake videos targeted women and children.
- Every 60 seconds, a deepfake pornographic video is created. Doing so is now quicker and more affordable than ever: it takes less than 25 minutes, at little cost, using just one clear face image.
- The connection to deepfakes is that people can become targets of "revenge porn" even when the publisher holds no sexually explicit photographs or films of the victim. Such material can be fabricated from any number of ordinary pictures collected from the internet. This means that almost everyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
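The iterative generator refinement described above can be illustrated with a deliberately simplified toy loop. This is not a real GAN (there are no neural networks, and the "content" is a single number): a generator proposes a value, a stand-in "discriminator" scores how far it is from the real data, and the generator keeps only the adjustments that make its output harder to flag:

```python
import random

random.seed(0)  # make the toy run reproducible

REAL_MEAN = 7.0  # the statistic of the "real" data being imitated

def discriminator(x):
    """Score how easily the sample is flagged as fake (0 = convincing)."""
    return abs(x - REAL_MEAN)

def train_generator(steps=200, lr=0.1):
    """Hill-climb: propose a small random change each round and keep it
    only if the discriminator finds the result more convincing."""
    x = random.uniform(-10, 10)  # initial crude fake
    for _ in range(steps):
        step = lr * random.choice([-1, 1])
        if discriminator(x + step) < discriminator(x):
            x += step  # keep improvements only
    return x

fake = train_generator()
print(round(fake, 2))
```

After a few hundred rounds the generated value sits close to the real statistic, mirroring (in miniature) how repeated generator/discriminator feedback pushes deepfake output toward realism.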
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have security implications at three levels: at the international level, strategic deepfakes have the potential to destroy a precarious peace; at the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern; and at the personal level, deepfakes can be used to harass and defame individuals. Women suffer disproportionately from exposure to sexually explicit deepfake content as compared to men, and they are more frequently threatened.
Policy Consideration
Looking at the present situation, where deepfake cases targeting women and children are on the rise, policymakers need to be aware that deepfakes are also utilised for a variety of legitimate purposes, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer on synthetic media. Deepfake technology is advanced, and its misuse is a crime.
What are the existing rules to combat deepfakes?
It is worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these provisions can result in up to three years in jail and a Rs 1 lakh fine. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both the creators and users, educating individuals about content limits, and providing information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate information. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring technology can be used in a better manner.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/