#FactCheck: Fake AI-generated video claims to show a real-time bridge collapse in Bihar
Executive Summary:
A video went viral on social media claiming to show a bridge collapsing in Bihar, prompting panic and discussion across various platforms. However, a thorough inquiry determined that the footage is not real but AI-generated content engineered to look like an actual bridge collapse. This is a clear case of misinformation being used to create panic and confusion.

Claim:
The viral video shows a real bridge collapse in Bihar, indicating possible infrastructure failure or a recent incident in the state.
Fact Check:
Examination of the viral video revealed several visual anomalies, such as unnatural movements, people disappearing between frames, and implausible debris behaviour, all of which suggested the footage was artificially generated. The Hive AI Detector confirmed this, labelling the content as 99.9% AI-generated. The video also lacks environmental realism and shows abrupt, animation-like effects that would not occur in genuine footage.

No credible news outlet or government agency has reported a recent bridge collapse in Bihar. Taken together, these factors confirm that the video is fabricated: AI-generated content designed to mislead viewers into believing a real disaster had occurred.
Conclusion:
The viral video is fake and confirmed to be AI-generated. It falsely claims to show a bridge collapsing in Bihar. Such videos spread misinformation and illustrate the growing misuse of AI-generated content to mislead viewers.
Claim: A recent viral video captures a real-time bridge failure incident in Bihar.
Claimed On: Social Media
Fact Check: False and Misleading

Executive Summary:
QakBot, a banking trojan, is capable of stealing personal data, banking passwords, and session data from a user's computer. Since its first discovery in 2009, QakBot has undergone substantial modifications.
The C2 server, essentially the brain behind QakBot's operations, commands infected devices and receives the stolen data. QakBot employs PEDLL (communication) files, malicious program components, to interact with the server and accomplish its main goals. Sensitive data, including passwords and personal information, is taken from victims and sent to the C2 server. Referrer files, such as phishing documents or malware droppers, initiate the main line of communication between QakBot and the C2 server. WHOIS data provides the registration details for the server, helping to identify its ownership or place of origin.
This report specifically focuses on the C2 server infrastructure located in India, shedding light on its architecture, communication patterns, and threat landscape.
Introduction:
QakBot, also known as Pinkslipbot, QuakBot, and QBot, is capable of stealing personal data, banking passwords, and session data from a user's computer. The malware is particularly dangerous because it spreads rapidly to other networks, propagating like a worm. It employs contemporary techniques such as web injection to eavesdrop on customers' online banking interactions. QakBot belongs to a family of malware with robust persistence techniques, considered among the most advanced, that let it retain access to compromised computers for extended periods.
Technical Analysis:
The following IP addresses have been confirmed as active C2 servers supporting Qbot malware activity:

Sample IPs:
- 123.201.40[.]112
- 117.198.151[.]182
- 103.250.38[.]115
- 49.33.237[.]65
- 202.134.178[.]157
- 124.123.42[.]115
- 115.96.64[.]9
- 123.201.44[.]86
- 117.202.161[.]73
- 136.232.254[.]46
These servers have been operational within the past 14 days (report created in November) and are being leveraged to perpetuate malicious activity globally.
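The indicators above are defanged with bracketed dots ([.]) so they cannot be followed accidentally. As a minimal illustrative sketch (not a deployment script), the Python below refangs the list and prints Linux iptables block rules; adapt the output to your own firewall platform.

```python
# Sketch: refang the defanged C2 indicators and emit iptables block rules.
# The rule syntax is standard iptables; substitute your firewall's equivalent.

DEFANGED_IOCS = [
    "123.201.40[.]112", "117.198.151[.]182", "103.250.38[.]115",
    "49.33.237[.]65", "202.134.178[.]157", "124.123.42[.]115",
    "115.96.64[.]9", "123.201.44[.]86", "117.202.161[.]73",
    "136.232.254[.]46",
]

def refang(ioc: str) -> str:
    """Convert a defanged indicator such as 1.2.3[.]4 back to 1.2.3.4."""
    return ioc.replace("[.]", ".")

for ioc in DEFANGED_IOCS:
    ip = refang(ioc)
    # Drop traffic in both directions for each confirmed C2 address.
    print(f"iptables -A OUTPUT -d {ip} -j DROP")
    print(f"iptables -A INPUT -s {ip} -j DROP")
```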
URL/IP: 123.201.40[.]112

- inetnum: 123.201.32[.]0 - 123.201.47[.]255
- netname: YOUTELE
- descr: YOU Telecom India Pvt Ltd
- country: IN
- admin-c: HA348-AP
- tech-c: NI23-AP
- status: ASSIGNED NON-PORTABLE
- mnt-by: MAINT-IN-YOU
- last-modified: 2022-08-16T06:43:19Z
- mnt-irt: IRT-IN-YOU
- source: APNIC
- irt: IRT-IN-YOU
- address: YOU Broadband India Limited
- address: 2nd Floor, Millennium Arcade
- address: Opp. Samarth Park, Adajan-Hazira Road
- address: Surat-395009,Gujarat
- address: India
- e-mail: abuse@youbroadband.co.in
- abuse-mailbox: abuse@youbroadband.co.in
- admin-c: HA348-AP
- tech-c: NI23-AP
- auth: # Filtered
- mnt-by: MAINT-IN-YOU
- last-modified: 2022-08-08T10:30:51Z
- source: APNIC
- person: Harindra Akbari
- nic-hdl: HA348-AP
- e-mail: harindra.akbari@youbroadband.co.in
- address: YOU Broadband India Limited
- address: 2nd Floor, Millennium Arcade
- address: Opp. Samarth Park, Adajan-Hazira Road
- address: Surat-395009,Gujarat
- address: India
- phone: +91-261-7113400
- fax-no: +91-261-2789501
- country: IN
- mnt-by: MAINT-IN-YOU
- last-modified: 2022-08-10T11:01:47Z
- source: APNIC
- person: NOC IQARA
- nic-hdl: NI23-AP
- e-mail: network@youbroadband.co.in
- address: YOU Broadband India Limited
- address: 2nd Floor, Millennium Arcade
- address: Opp. Samarth Park, Adajan-Hazira Road
- address: Surat-395009,Gujarat
- address: India
- phone: +91-261-7113400
- fax-no: +91-261-2789501
- country: IN
- mnt-by: MAINT-IN-YOU
- last-modified: 2022-08-08T10:18:09Z
- source: APNIC
- route: 123.201.40.0/24
- descr: YOU Broadband & Cable India Ltd.
- origin: AS18207
- mnt-lower: MAINT-IN-YOU
- mnt-routes: MAINT-IN-YOU
- mnt-by: MAINT-IN-YOU
- last-modified: 2012-01-25T11:25:55Z
- source: APNIC
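The registration details above can also be pulled programmatically. The sketch below queries APNIC's public RDAP service (the structured, JSON-based successor to WHOIS) using only the Python standard library; the printed fields follow the RDAP schema, and which keys to inspect is an illustrative choice.

```python
# Sketch: fetch the RDAP (structured WHOIS) record for an IP from APNIC
# and print a few registration fields for triage.
import json
import urllib.request

def rdap_lookup(ip: str) -> dict:
    """Query APNIC's public RDAP endpoint for an IP registration record."""
    url = f"https://rdap.apnic.net/ip/{ip}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

record = rdap_lookup("123.201.40.112")
print(record.get("name"))     # netname, e.g. YOUTELE in the record above
print(record.get("country"))  # registration country, e.g. IN
for entity in record.get("entities", []):
    # Entities carry roles such as administrative, technical, or abuse contact.
    print(entity.get("roles"), entity.get("handle"))
```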


A GET request was made from the host to IP 123.201.40[.]112 on port 80 using the requested URL path. The request was initiated by the process "C:\PROGRAM FILES\GOOGLE\CHROME\APPLICATION\CHROME.EXE", and its response status was "NOT RESPONDED".
GET requests are a normal way for programs to retrieve data from a server, and this request was made through Google Chrome, a legitimate and widely used browser, to fetch data and other resources from the server at 123.201.40[.]112.
However, malware also uses GET requests to retrieve further commands or to send stolen data back to its command-and-control servers. In this instance, the request targets a known malicious IP address on a known port, and the "NOT RESPONDED" status, showing the server never replied, may indicate that the activity was carried out with malicious intent.
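To hunt for this behaviour at scale, analysts can sweep proxy or firewall logs for connections to the confirmed C2 addresses. The sketch below assumes a simple space-separated log layout (client, method, destination); the format and file name are assumptions, so adapt the parsing to your proxy's actual schema.

```python
# Sketch: scan a web-proxy log for contact attempts to known C2 addresses.
# Assumed log layout per line: <client-ip> <method> <destination[:port]> ...
C2_IPS = {"123.201.40.112", "117.198.151.182", "103.250.38.115"}  # extend as needed

def find_c2_contacts(log_path: str) -> list[tuple[str, str, str]]:
    hits = []
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip malformed lines
            client, method, destination = fields[0], fields[1], fields[2]
            host = destination.rsplit(":", 1)[0]  # strip an optional :port suffix
            if host in C2_IPS:
                hits.append((client, method, destination))
    return hits

for client, method, dest in find_c2_contacts("proxy.log"):
    print(f"ALERT: {client} issued {method} request to C2 endpoint {dest}")
```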
The accompanying graph illustrates how the QakBot malware operates and interacts with its C2 server, located in India at IP address 123.201.40[.]112.

Impact
Qbot is typically distributed through compromised websites, malicious email attachments, and phishing campaigns. Its impact includes:
- Data theft: It targets private user information, including corporate logins and banking passwords.
- Ransomware deployment: Acting as a precursor, Qbot delivers payloads for ransomware operations such as ProLock and Egregor.
- Network compromise: Within corporate networks, infected systems serve as gateways for further lateral movement.
Proposed Recommendations for Mitigation
- Immediate action: Add the identified IP addresses to firewalls and intrusion detection/prevention systems to block all incoming and outgoing traffic.
- Network monitoring: Examine network logs for any attempts to contact these IPs (an IDS-rule sketch follows this list).
- Email security: Enable anti-phishing protections.
- Endpoint protection: Update antivirus definitions to identify and stop Qbot infections, and deploy endpoint detection and response (EDR) tools.
- Patch management: Regularly update all operating systems and software to reduce the vulnerabilities Qbot exploits.
- Incident response: Immediately isolate compromised computers.
- Awareness: Disseminate this information so that the IP addresses of active C2 servers supporting Qbot activity are blocked widely.
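The firewall and monitoring items above can also be expressed as IDS signatures. Below is a minimal sketch that generates Suricata-style alert rules for the listed addresses; the sid values are arbitrary picks from the local-rule range and the message text is illustrative.

```python
# Sketch: generate Suricata alert rules for traffic to or from the C2 IPs.
# sid values 1000001+ are from the conventional local-rule range (assumption).
C2_IPS = [
    "123.201.40.112", "117.198.151.182", "103.250.38.115",
    "49.33.237.65", "202.134.178.157", "124.123.42.115",
    "115.96.64.9", "123.201.44.86", "117.202.161.73",
    "136.232.254.46",
]

for offset, ip in enumerate(C2_IPS):
    # Bidirectional (<>) rule: alert on any IP traffic involving the address.
    print(
        f'alert ip any any <> {ip} any '
        f'(msg:"Possible Qbot C2 traffic involving {ip}"; '
        f'classtype:trojan-activity; sid:{1000001 + offset}; rev:1;)'
    )
```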
Conclusion:
The discovery of these C2 servers highlights the growing threat landscape that Indian networks must contend with. Organisations are urged to act quickly and implement the precautions above to protect their infrastructure from further abuse.
Reference:
- Threat Intelligence - ANY.RUN
- https://www.virustotal.com/gui
- https://www.virustotal.com/gui/ip-address/123.201.40.112/relations
Introduction
The rise of unreliable social media newsgroups has significantly altered how people consume and interact with news, fuelling the spread of unverified and misleading content. Unlike traditional news outlets that adhere to journalistic standards, these newsgroups often lack fact-checking and editorial oversight, enabling the rapid dissemination of false or distorted information. Social media has transformed individuals into active content creators, and social media newsgroups (SMNs), social media platforms used as sources of news and information, have grown with them. According to a Pew Research Center survey (July-August 2024), 54% of U.S. adults now rely on social media for news. This rise in SMNs has raised concerns about the integrity of online news and undermined trust in legitimate news sources. Users are therefore advised to consume news and information from authentic sources and channels available on social media platforms.
The Growing Issue of Misinformation in Social Media Newsgroups
Social media newsgroups have become both a source of vital information and a conduit for misinformation. While these platforms allow rapid news sharing and facilitate political and social campaigns, they also pose significant risks of spreading unverified information. Misleading content, often driven by algorithms designed to maximise user engagement, proliferates in these spaces. The challenge grows as SMNs cater to diverse communities with varying political affiliations, demographics, and interests, sometimes creating echo chambers where information is not critically assessed, amplifying confirmation bias and enabling the unchecked spread of misinformation. A prominent example is the false narratives surrounding COVID-19 vaccines that spread across SMNs, contributing to widespread vaccine hesitancy and public health risks.
Understanding the Susceptibility of Online Newsgroups to Misinformation
Several factors make social media newsgroups particularly susceptible to misinformation. Some of the factors are listed below:
- The lack of robust, formal fact-checking mechanisms, unlike those of traditional media outlets, allows false narratives to circulate unchallenged and spread easily.
- Admins of online newsgroups are often regular users without journalism training, and their primary goal of increasing engagement may overshadow accuracy and credibility.
- User anonymity exacerbates the problem, allowing unverified or misleading content to be shared without accountability.
- The viral nature of social media spreads misinformation to vast audiences instantly, often outpacing efforts to correct it.
- The sheer volume of user posts makes effective content moderation a significant challenge.
- Platform algorithms designed to maximise user engagement inadvertently amplify sensational or emotionally charged content, which is more likely to be false.
Consequences of Misinformation in Newsgroups
The societal impacts of misinformation in SMNs are profound. It fuels political polarisation, entrenching one-sided views and creating deep divides in democratic societies. Health risks emerge when false information spreads about critical issues, as seen with anti-vaccine movements and misinformation during public health crises. In the long term, misinformation can destabilise governments and erode trust in both traditional and social media, undermining democracy. If unaddressed, these consequences will continue to ripple through society, perpetuating false narratives that shape public opinion.
Steps to Mitigate Misinformation in Social Media Newsgroups
- Educating users in media literacy empowers them to critically assess the information they encounter, reducing the spread of false narratives.
- Introducing stricter platform policies, including penalties for deliberately sharing misinformation, may act as a deterrent against sharing unverified information.
- Collaborative fact-checking initiatives with involvement from social media platforms, independent journalists, and expert organisations can provide a unified front against the spread of false information.
- From a policy perspective, a holistic approach that combines platform responsibility with user education and governmental and industry oversight is essential to curbing the spread of misinformation in social media newsgroups.
Conclusion
The emergence of social media newsgroups has revolutionised the dissemination of information, but the rapid spread of misinformation they enable poses a significant challenge to the integrity of news in the digital age, further amplified by algorithmic echo chambers and unchecked user engagement, with profound societal implications. Tackling these issues requires a multi-faceted approach combining stringent platform policies, AI-driven moderation, and collaborative fact-checking initiatives. Empowering users through media literacy is equally important for promoting critical thinking and building cognitive defences. By adopting these measures, we can better navigate the complexities of consuming news from social media newsgroups and preserve the reliability of online information. Above all, users should rely on authoritative sources available on social media platforms.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit videos and images, and their use for sextortion has increased at an alarming rate.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos and images, as the underlying algorithms have become more sophisticated and allow more seamless, realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate those solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines against disseminating AI-generated explicit content, and foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.