#FactCheck: Fake viral AI video captures a real-time bridge failure incident in Bihar
Executive Summary:
A video went viral on social media claiming to show a bridge collapsing in Bihar, prompting panic and discussion across various platforms. However, a closer examination determined that this was not real footage but AI-generated content engineered to look like an actual bridge collapse. This is a clear case of misinformation being used to create panic and confusion.

Claim:
The viral video shows a real bridge collapse in Bihar, indicating possible infrastructure failure or a recent incident in the state.
Fact Check:
Upon examination of the viral video, several visual anomalies stood out, such as unnatural movements, people disappearing between frames, and debris behaving oddly, all of which suggested the footage was artificially generated. We ran the video through the Hive AI Detector, which confirmed this, labelling the content as 99.9% AI-generated. The environment also lacks realism, with abrupt, animation-like effects that would not occur in genuine footage.

No credible news outlet or government agency has reported a recent bridge collapse in Bihar. Taken together, these factors confirm that the video is fabricated using artificial intelligence and designed to mislead viewers into believing it shows a real-life disaster.
Conclusion:
The viral video is confirmed to be AI-generated and fake. It falsely claims to show a bridge collapsing in Bihar. Such videos spread misinformation and illustrate the growing concern around AI-generated content being used to mislead viewers.
Claim: A recent viral video captures a real-time bridge failure incident in Bihar.
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs
The spread of misinformation has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push for combating misinformation is rooted in the growing awareness that misinformation exploits public sentiment and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. Misinformation online poses a risk not only to the everyday content consumer and the sharer, but also to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages caused by misinformation, with defamation suits a standard recourse against those who spread it. In India, the Prohibition of Fake News on Social Media Bill, 2023, currently in the pipeline, would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms employ a trust model where the user trusts it and its content. If a user loses trust in the platform because of misinformation, it can reduce engagement. This might even lead to negative coverage that affects the public opinion of the brand, its value and viability in the long run.
- Financial Consequences: Businesses may end their engagement with platforms accused of hosting misinformation, leading to a drop in revenue. This can also have longer-term effects on the financial health of the platform, such as a decline in its stock price.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is in question, users can migrate to other platforms, resulting in a loss of market share to competitors that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: A balance must be struck between freedom of expression and the prevention of misinformation. Platforms may face accusations of censorship if they adopt stricter content moderation and users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as they host content that affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, which means platforms bear a social responsibility as well.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- The implementation of a more robust content moderation system by the platforms using a combination of AI and human oversight for the identification and removal of misinformation in an effective manner.
- Enhancing the transparency in platform policies for content moderation and decision-making would build user trust and reduce the backlash associated with perceived censorship.
- Collaborations with fact checkers in the form of partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Proactive engagement with regulators to stay ahead of legal and regulatory requirements and avoid punitive action.
- Investment in media literacy initiatives that help users critically evaluate the content available to them.
Final Takeaways
The accumulation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. This creates a critical need to balance the interlinked, but seemingly conflicting, priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust, transparent content moderation systems, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.

Introduction
In recent times, the evolution of cyber laws has picked up momentum, primarily because of new and emerging technologies. However, as with any other law, they are also strengthened and substantiated by judicial precedents and judgements. Recently, the Delhi High Court heard a matter between Tata Sky and LinkedIn, in which the court asked LinkedIn to present its Chief Grievance Officer's details and SoP as per the Intermediary Guidelines, 2021.
Furthermore, in other news, officials from the RBI and MeitY have been summoned by the Parliamentary Standing Committee to address the rising issues of cybersecurity and cybercrime in India. This comes on the very first day of this year's monsoon session of Parliament. As we move towards a digital India, addressing these concerns is of utmost importance to safeguard Indian netizens.
The Issue
Tata Sky changed its name to Tata Play last year and has since entered the OTT sector as well. As the rebranding took place, the company was very cautious about anyone using the name Tata Sky in a bad light. Tata Play found that many people on LinkedIn had listed work experience at Tata Sky spanning multiple years, claims that a new recruiter cannot easily verify. This amounts to a misappropriation of the brand's name. Officials of Tata Play reported the issue to LinkedIn multiple times, but no significant action was taken. The dispute between the two brands was therefore filed before the Hon'ble Delhi High Court. The court took due cognisance of the issue and, in accordance with the Intermediary Guidelines, 2021, directed LinkedIn to make the details of its Chief Grievance Officer available in the public domain and to share the SoP for the redressal of issues and grievances. The guidelines make it mandatory for all intermediaries to set up a dedicated office in India and appoint a Chief Grievance Officer responsible for the effective and efficient redressal of platform-related offences and grievances within the stipulated period.
The job platform has also been ordered to share its SoPs and the various requirements and safety checks users must meet to create profiles on LinkedIn. LinkedIn's policy is focused on both the users and the companies on the platform, in order to create synergy between the two.
RBI and MeitY Officials at Parliament
As we go deeper into cyberspace, especially after the pandemic, we have seen an exponential rise in cybercrimes. Statistics suggest that 4 out of 10 people were victims of cybercrime in 2022-23, and it is estimated that 70% of the population has been subjected to direct or indirect cybercrime. As per the latest statistics, 85% of Indian children have been subjected to cyberbullying in some form or another.
The government has taken note of the rising number of such crimes and threats, and the Parliamentary Committee has therefore summoned officials from the RBI and the Ministry of Electronics and Information Technology to Parliament on July 20, 2023, i.e. the first day of the monsoon session. This comes at a crucial time, as the Digital Personal Data Protection Bill is to be tabled in Parliament this session, marking a revamp of legislation and regulation in Indian cyberspace. As emerging technologies have started to surround us, it is pertinent to create legal safeguards and practices to protect Indian netizens at large.
Conclusion
The legal crossroads between Tata Sky and LinkedIn will go a long way towards establishing the mandates under the Intermediary Guidelines in the form of legal precedent. Compliance with the rule of law is the most crucial aspect of any democracy; hence the separation of powers between the Legislature, Judiciary and Executive has been fundamental in safeguarding basic and fundamental rights. Similarly, the summoning of RBI and MeitY officials to Parliament shows the transparency of the system and reflects the true spirit of democracy, which will contribute towards creating a safe and secure Indian cyberspace.

Introduction
Netflix is no stranger to its subscribers being targeted by SMS- and email-led phishing campaigns, but the most recent campaign has been deployed at a global scale, affecting paid users in as many as 23 countries, according to cybersecurity firm Bitdefender. In this campaign, attackers use the carrot-and-stick tactic of either creating a false sense of urgency or promising rewards to steal financial information and Netflix credentials. For example, users may be contacted via SMS and told that their account is being suspended due to payment failures. A link to a fake website then encourages the individual to share sensitive information to restore their account; once entered, this information is accessible to the attackers. Such scams can cause significant stress and even financial loss, so users are encouraged to develop the skills needed to recognize and respond to these threats effectively.
How The Netflix Scam Works
Users are typically contacted through SMS. Bitdefender reports that these messages may look something like this:
"NETFLIX: There was an issue processing your payment. To keep your services active, please sign in and confirm your details at: https://account-details[.]com"
On clicking the link, the victim is directed to a website designed to mimic an authentic user experience interface, containing Netflix’s logo, color scheme, and grammatically-correct text. The website uses this interface to encourage the victim to divulge sensitive personal information, such as account credentials and payment details. Since this is a phishing website, the user’s personal information becomes accessible to the attacker as soon as it is entered. This information is then sold individually or in bundles on the dark web.
Practical Steps to Stay Safe
- Know Netflix’s Customer Interface: According to Netflix, it will never ask users to share personal information, including credit or debit card numbers, bank account details, or Netflix passwords. It will also never ask for payment through a third-party vendor or website.
- Verify Authenticity: Do not open links from unknown sources sent by email or SMS. If unsure, access Netflix directly by typing the URL into the browser instead of clicking on links in emails or texts. If you have already opened the link, do not enter any information.
- Use Netflix’s Official Support Channels: Confirm any suspicious communication through Netflix’s verified help page or app. Write to phishing@netflix.com with any complaints about such an issue.
- Contact Your Financial Institution: If you have entered your personal information into a phishing website, you should immediately reach out to your bank to block your card and change your Netflix password. Contact the authorities via www.cybercrime.gov.in or by calling the helpline at 1930 in case of loss of funds.
- Use Strong Passwords and Enable MFA/2FA: Users are advised to use a unique, strong password with multiple characters. Enable Multi-Factor Authentication or Two Factor Authentication to your accounts, if available, to add an extra level of security.
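The "verify authenticity" advice above boils down to checking a link's actual registered domain, not just whether the word "netflix" appears somewhere in the URL. The short Python sketch below illustrates that check. It is a simplified heuristic for this article only (it assumes netflix.com is the sole legitimate Netflix domain) and is no substitute for typing the official address into the browser yourself.

```python
from urllib.parse import urlparse

def is_official_netflix_link(url: str) -> bool:
    """Heuristic check: is this link's hostname netflix.com or a subdomain of it?

    A lookalike such as netflix.com.account-details.com fails the check,
    because its registered domain is account-details.com, not netflix.com.
    """
    host = (urlparse(url).hostname or "").lower()
    return host == "netflix.com" or host.endswith(".netflix.com")

# The style of link used in the scam SMS fails; the genuine site passes.
print(is_official_netflix_link("https://account-details.com"))             # False
print(is_official_netflix_link("https://www.netflix.com/account"))         # True
print(is_official_netflix_link("https://netflix.com.account-details.com")) # False
```

Note that the check compares the end of the hostname against ".netflix.com", so attackers cannot pass it simply by embedding "netflix.com" earlier in a longer, attacker-controlled domain.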
Conclusion
Phishing campaigns, which are designed to gather customer data through fraudulent means, often involve sending links to as many users as possible with the aim of monetizing stolen information. As highlighted above, attackers exploit user trust in online platforms to steal sensitive personal information, and such campaigns are growing more sophisticated. This underscores the need for users of online platforms to practice good cyber hygiene: verifying information, learning to detect and ignore suspicious communications, and staying aware of the types of online fraud they may be exposed to.
Sources
- https://www.bitdefender.com/en-gb/blog/hotforsecurity/netflix-scam-stay-safe
- https://help.netflix.com/en/node/65674
- https://timesofindia.indiatimes.com/technology/tech-news/netflix-users-beware-this-netflix-subscription-scam-is-active-in-23-countries-how-to-spot-one-and-stay-safe/articleshow/115820070.cms