#FactCheck: Viral AI-generated video falsely claims to show a real-time bridge collapse in Bihar
Executive Summary:
A video went viral on social media claiming to show a bridge collapsing in Bihar, prompting panic and discussion across various platforms. However, a thorough inquiry determined that it is not real footage but AI-generated content engineered to look like an actual bridge collapse. This is a clear case of misinformation being crafted to create panic and confusion.

Claim:
The viral video shows a real bridge collapse in Bihar, indicating possible infrastructure failure or a recent incident in the state.
Fact Check:
Examination of the viral video revealed several visual anomalies, such as unnatural movements, people disappearing between frames, and implausible debris behaviour, all of which suggested the footage was artificially generated. We ran the clip through the Hive AI Detector, which confirmed this assessment, labelling the content as 99.9% AI-generated. The scene also lacks environmental realism and shows abrupt, animation-like effects that would not occur in genuine footage.
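For readers curious about how such a check can be automated, the sketch below shows one generic way to submit an extracted video frame to an AI-content-detection service and read back a likelihood score. It is a minimal illustration only: the endpoint URL, API key, `media` upload field, and `ai_probability` response field are hypothetical placeholders, not Hive's actual API.

```python
# Illustrative sketch only: a generic pattern for submitting a video frame to an
# AI-content-detection service. The endpoint, credential, and response fields
# below are hypothetical placeholders, not any real provider's API.
import requests

API_URL = "https://api.example-detector.com/v1/detect"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                  # hypothetical credential

def check_frame(image_path: str) -> float:
    """Upload one extracted video frame and return the reported AI-likelihood score."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assume the service returns a JSON body like {"ai_probability": 0.999}
    return response.json()["ai_probability"]

if __name__ == "__main__":
    score = check_frame("frame_001.jpg")
    print(f"Reported AI-generation likelihood: {score:.1%}")
```

In practice, a detector would be run over many sampled frames (and the audio track) rather than a single image, and its score treated as one signal alongside manual inspection and source verification.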

No credible news outlet or government agency has reported a recent bridge collapse in Bihar. Taken together, these factors confirm that the video is fabricated with artificial intelligence and was designed to mislead viewers into believing a real disaster had occurred.
Conclusion:
The viral video is fake and confirmed to be AI-generated; it falsely claims to show a bridge collapsing in Bihar. Such videos spread misinformation and illustrate the growing concern around the use of AI-generated content to mislead viewers.
Claim: A recent viral video captures a real-time bridge failure incident in Bihar.
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs

Introduction
The European Union has fined Meta $1.3 billion for infringing EU privacy laws by transferring the personal data of Facebook users to the United States. The fine was imposed on Meta's Irish business. According to the EU, transferring this personal data to the US breaches the General Data Protection Regulation (GDPR), the EU's law on data protection and privacy.
GDPR Compliance
The GDPR requires that users' personal information be collected lawfully and under strict conditions, and that those who collect and manage personal data protect it from misuse. It also restricts an organisation's ability to transfer personal data outside the EU when the transfer rests solely on that organisation's own assessment of whether the data will be adequately protected. Transfers should only take place where European authorities have determined that a third country, a territory within that country, or an international organisation provides an adequate level of data protection.
Violation by Meta
The penalty, announced by Ireland's Data Protection Commission, is among the most significant in the five years since the European Union passed the landmark General Data Protection Regulation. According to regulators, Meta failed to comply with a 2020 judgment of the EU's top court, which found that Facebook data transferred across the Atlantic was not sufficiently safeguarded from American intelligence agencies. However, whether Meta will actually be required to change how it handles Facebook users' data in Europe remains uncertain: the company has announced it will appeal the ruling, opening a potentially lengthy legal process.
Meanwhile, European Union and American officials are negotiating a new data-transfer pact that would give Meta and scores of other companies legal cover to continue moving information between the US and Europe. Such a pact could render much of this ruling moot.
According to the Irish privacy regulator, Meta violated Article 46(1) of the GDPR.
What is required by the GDPR before transferring personal information across national boundaries?

Personal data transfers to countries outside the European Economic Area are generally permitted only if those countries are deemed to provide an adequate level of data protection. Under Article 45 of the GDPR, it is the European Commission that assesses the level of personal data protection in third countries.
The EU ruling demonstrates how government rules are upending the borderless way data has traditionally moved. As a result of data-protection requirements, national security laws, and other regulations, companies are increasingly being pressed to store data in the country where it is collected rather than letting it flow freely to data centres around the world.
The US internet giant had previously warned that if forced to stop using SCCs (standard contractual clauses) without a proper alternative data transfer agreement in place, it would be compelled to shut down services such as Facebook and Instagram in Europe.
What will happen next for Facebook in Europe?
The ruling includes a transition period before Meta must halt data flows, meaning the service will continue to operate in the meantime. More specifically, Meta has been given five months to suspend any future transfer of personal data to the United States and six months to end the unlawful processing and/or storage of European user data it has already transferred without a valid legal basis. Meta has also stated that it will appeal and appears likely to seek a stay while it pursues its legal arguments in court.
Conclusion
The GDPR restricts transfers of personal data outside the European Union to third countries or international bodies so that the level of protection it guarantees individuals is not undermined. Meta violated the EU's privacy laws by transferring users' personal information to the US without adequate safeguards; under the GDPR, such transfers are unlawful. In this case, the personal data of Facebook users was put at risk when Meta moved it to the United States.
Introduction
Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, announced that rules for the Digital Personal Data Protection (DPDP) Act are expected to be released by the end of January. The rules will be subject to a month-long consultation process, but their notification may be delayed until after the general elections in April-May 2024. Chandrasekhar also mentioned that changes to the current IT regulations would be made in the next few days to address the problem of deepfakes on social networking sites.
The government has observed a varied response from platforms to its advisory measures on deepfakes, which led to the decision to enforce more specific rules. During the Digital India Dialogue, platforms were made aware of the existing provisions and the consequences of non-compliance. An advisory was issued, and amended IT rules will be notified if compliance remains unsatisfactory.
When Sachin Tendulkar reported a deepfake video in which he appeared to endorse a gaming application, it heightened concerns about the misuse of deepfakes. Tendulkar urged people to report such incidents and underlined the need for social media companies to be watchful, responsive to grievances, and quick to address disinformation and deepfakes.
The DPDP Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 establishes a new framework for protecting individuals' digital personal data and requires compliance from the platforms that collect it. The Act is built around consent-based data collection: it requires express consent for the acquisition, administration, and processing of personal data and seeks to ensure that organisations use the data only for the stated purpose for which consent was granted. This is an important step toward protecting individual privacy, aligns with global data protection trends, and demonstrates India's commitment to safeguarding user information in the digital era.
Amendments to IT rules
Minister Chandrasekhar declared that existing IT regulations would be amended to combat the rising problem of deepfakes and disinformation on social media platforms. These amendments, to be published over the next few days, are primarily aimed at countering the widespread circulation of false information and deepfakes. The decision follows a range of responses from platforms to the deepfake recommendations made during the Digital India Dialogues.
The government's stance: blocking non-compliant platforms
Minister Chandrasekhar reaffirmed the government's commitment to enforcing the updated guidelines. If platforms fail to comply, the government may consider blocking them. This firm stance demonstrates the government's commitment to safeguarding Indian residents from the potential harm caused by false information.
Empowering Users with Education and Awareness
In addition to the upcoming DPDP Act Rules/recommendations and IT regulation changes, the government recognises the critical role that user education plays in establishing a robust digital environment. Minister Rajeev Chandrasekhar emphasised the necessity for comprehensive awareness programs to educate individuals about their digital rights and the need to protect personal information.
These instructional programs seek to equip users to make informed decisions about consenting to the use of their data. By developing a culture of digital literacy, the government hopes to ensure that citizens have the knowledge to safeguard themselves in an increasingly connected digital environment.
Balancing Innovation with User Protection
As India continues to explore its digital frontier, the intersection of technological innovation and user safety remains a difficult balance to strike. The upcoming rules under the DPDP Act and the amendments to existing IT rules represent the government's proactive effort to build a robust framework that supports innovation while protecting user privacy and combating disinformation.

Recognising the changing nature of the digital world, the government is engaging in ongoing discussions with stakeholders such as industry professionals, academia, and civil society. These conversations promote a collaborative approach to policymaking, ensuring that legislation can adapt to evolving cyber risks and technological breakthroughs, and they reflect the government's commitment to transparent and participatory governance, in which many viewpoints contribute to effective and nuanced policy. These developments mark an important milestone in India's digital journey as the country works to set a good example by creating responsible and safe digital ecosystems for its residents.
References:
- https://economictimes.indiatimes.com/tech/technology/govt-may-release-personal-data-bill-rules-in-a-fortnight/articleshow/106162669.cms?from=mdr
- https://www.business-standard.com/india-news/dpdp-rules-expected-to-be-released-by-end-of-the-month-mos-chandrasekhar-124011600679_1.html
Introduction
The rise of unreliable social media newsgroups has significantly altered the way people consume and interact with news, contributing to the spread of unverified and misleading content. Unlike traditional news outlets that adhere to journalistic standards, these newsgroups often lack proper fact-checking and editorial oversight, allowing false or distorted information to spread rapidly. Social media has also transformed individuals into active content creators. Social media newsgroups (SMNs) are social media platforms used as sources of news and information; according to a Pew Research Center survey (July-August 2024), 54% of U.S. adults now get news from social media at least sometimes. The rise of SMNs raises concerns about the integrity of online news and undermines trust in legitimate news sources, so users are advised to rely on authentic sources and channels available on these platforms.
The Growing Issue of Misinformation in Social Media Newsgroups
Social media newsgroups have become both a source of vital information and a conduit for misinformation. While these platforms allow rapid news sharing and facilitate political and social campaigns, they also pose significant risks by circulating unverified information. Misleading content, often driven by algorithms designed to maximise user engagement, proliferates in these spaces. The challenge grows as SMNs cater to diverse communities with varying political affiliations, gender demographics, and interests, which can create echo chambers where information is not critically assessed, amplifying confirmation bias and enabling the unchecked spread of misinformation. A prominent example is the false narratives about COVID-19 vaccines that spread across SMNs, contributing to widespread vaccine hesitancy and public health risks.
Understanding the Susceptibility of Online Newsgroups to Misinformation
Several factors make social media newsgroups particularly susceptible to misinformation. Some of the factors are listed below:
- The lack of robust fact-checking mechanisms in social media newsgroups allows false narratives to spread easily.
- Admins of online newsgroups are often regular users without journalistic training, and their focus on increasing engagement may overshadow concerns about accuracy and credibility, resulting in inaccurate information being shared.
- User anonymity exacerbates the problem, allowing people to share unverified or misleading content without accountability.
- The viral nature of social media spreads misinformation to vast audiences instantly, often outpacing efforts to correct it.
- Unlike traditional media outlets, online newsgroups rarely have formal fact-checking processes, so inaccuracies circulate unchallenged.
- The sheer volume of user posts makes effective content moderation a significant challenge.
- Platform algorithms designed to maximise user engagement can inadvertently amplify sensational or emotionally charged content, which is more likely to be false, as the sketch after this list illustrates.
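To make the last point concrete, here is a minimal, purely illustrative sketch of engagement-first ranking. The `Post` fields and scoring weights are invented for this example; real recommendation systems are far more complex, but the core issue is the same: when accuracy plays no role in the scoring function, sensational content tends to outrank sober, verified reporting.

```python
# A minimal, purely illustrative sketch of engagement-first ranking: posts are
# ordered only by predicted interactions, so emotionally charged content that
# attracts more reactions rises to the top regardless of accuracy.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float
    predicted_shares: float
    verified: bool  # whether the claim has been fact-checked

def engagement_score(post: Post) -> float:
    # Hypothetical weights; note that accuracy plays no role in the score.
    return 1.0 * post.predicted_clicks + 3.0 * post.predicted_shares

feed = [
    Post("Calm, sourced report on bridge inspection schedule", 120, 10, True),
    Post("SHOCKING video: bridge collapses live!", 900, 400, False),
]

# Ranking by engagement alone pushes the unverified sensational post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>7.0f}  verified={post.verified}  {post.text}")
```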
Consequences of Misinformation in Newsgroups
The societal impacts of misinformation in SMNs are profound. It fuels political polarisation, entrenching one-sided views and creating deep divides in democratic societies. Health risks emerge when false information spreads about critical issues, as seen in anti-vaccine movements and misinformation during public health crises. In the long term, misinformation can destabilise governments and erode trust in both traditional and social media, ultimately undermining democracy. If left unaddressed, these consequences will continue to ripple through society, perpetuating false narratives that shape public opinion.
Steps to Mitigate Misinformation in Social Media Newsgroups
- Educating users in media literacy empowers them to critically assess the information they encounter, reducing the spread of false narratives.
- Introducing stricter platform policies, including penalties for deliberately sharing misinformation, may act as a deterrent against sharing unverified information.
- Collaborative fact-checking initiatives involving social media platforms, independent journalists, and expert organisations can provide a unified front against the spread of false information (a simplified sketch of verdict aggregation follows this list).
- From a policy perspective, a holistic approach that combines platform responsibility with user education and governmental and industry oversight is essential to curbing the spread of misinformation in social media newsgroups.
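As a rough illustration of the collaborative fact-checking idea mentioned above, the sketch below aggregates verdicts from several independent reviewers into a single label for a post. The function name, labels, and thresholds are assumptions made for the example, not any platform's actual policy.

```python
# Purely illustrative sketch (not any platform's actual system): combine verdicts
# from several independent fact-checkers into a single label for a post.
from collections import Counter

def aggregate_verdicts(verdicts: list[str], min_reviews: int = 3) -> str:
    """Return a consensus label from independent reviewers' verdicts."""
    if len(verdicts) < min_reviews:
        return "unreviewed"            # not enough independent checks yet
    counts = Counter(verdicts)
    label, votes = counts.most_common(1)[0]
    # Require a clear majority before applying a label to the post.
    return label if votes > len(verdicts) / 2 else "disputed"

# Example: three organisations independently review the same viral claim.
print(aggregate_verdicts(["false", "false", "misleading"]))   # -> false
print(aggregate_verdicts(["false", "true", "misleading"]))    # -> disputed
```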
Conclusion
The emergence of social media newsgroups has revolutionised the dissemination of information, but the rapid spread of misinformation they enable poses a significant challenge to the integrity of news in the digital age, further amplified by algorithmic echo chambers and unchecked user engagement, with profound societal implications. A multi-faceted approach is required to tackle these issues, combining stringent platform policies, AI-driven moderation, and collaborative fact-checking initiatives. User empowerment through media literacy is also essential for promoting critical thinking and building cognitive defences. By adopting these measures, we can better navigate the complexities of consuming news from social media newsgroups and preserve the reliability of online information. Above all, users should consume news from authoritative sources available on social media platforms.