#FactCheck - AI-Generated Image Falsely Linked to Mira–Bhayandar Bridge
Executive Summary
Mumbai’s Mira–Bhayandar bridge has recently been in the news for its unusual design. In this context, a photograph showing a bus seemingly stuck on the bridge is going viral on social media. Some users are sharing the image with the claim that it is from the Sonpur subdivision in Bihar. However, research by CyberPeace has found that the viral image is not real. The bridge shown in the image is indeed the Mira–Bhayandar bridge, which is under discussion because its design causes it to suddenly narrow from four lanes to two. That said, the bridge is not yet operational, and the viral image showing a bus stuck on it was created using Artificial Intelligence (AI).
Claim
An Instagram user shared the viral image on January 29, 2026, with the caption: “Are Indian taxpayers happy to see that this is funded by their money?” The link, archive link, and screenshot of the post can be seen below.

Fact Check:
To verify the claim, we first conducted a Google Lens reverse image search. This led us to a post shared by X (formerly Twitter) user Manoj Arora on January 29. While the bridge structure in that image matches the viral photo, no bus is visible in the original post. This raised suspicion that the viral image had been digitally manipulated.
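Reverse image search tools like the one used above match images by perceptual fingerprints rather than exact bytes, so a near-duplicate with a small edit (such as an added bus) still resolves to the original. A minimal sketch of one such fingerprint, the well-known difference hash (dHash), is shown below in pure Python. As a simplifying assumption, synthetic 8×9 grayscale grids stand in for real images; an actual pipeline would first decode and downscale the image with a library such as Pillow.

```python
# Minimal difference-hash (dHash) sketch for perceptual image matching.
# Images that look alike produce nearly identical hashes, so a small
# Hamming distance between hashes suggests a near-duplicate or an edit.
# Assumption: real images would be decoded to grayscale grids first;
# the 8-row x 9-column grids below are synthetic stand-ins.

def dhash(gray, hash_size=8):
    """Compute a dHash from a hash_size x (hash_size + 1) grayscale grid.

    Each bit records whether a pixel is brighter than its right neighbour.
    """
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            bits = (bits << 1) | (1 if gray[row][col] > gray[row][col + 1] else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; small distance means visually similar."""
    return bin(h1 ^ h2).count("1")

# A synthetic "image" (a brightness gradient) and a copy with one
# strong local edit, mimicking a small manipulation.
base = [[(r * 9 + c) * 3 for c in range(9)] for r in range(8)]
edited = [row[:] for row in base]
edited[0][0] = 200  # local change flips exactly one comparison

# An unrelated "image": the reversed gradient.
other = [[255 - (r * 9 + c) * 3 for c in range(9)] for r in range(8)]

print(hamming(dhash(base), dhash(edited)))  # → 1 (near-duplicate)
print(hamming(dhash(base), dhash(other)))   # → 64 (different image)
```

In practice, a search engine indexes hashes of known images and flags uploads whose hash lies within a small Hamming distance of an indexed one, which is how the unedited original post can be located from the manipulated copy.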

We then ran the viral image through the AI detection tool Hive Moderation, which flagged it as over 99% likely to be AI-generated.

Conclusion
CyberPeace’s research confirms that while the Mira–Bhayandar bridge is real and has been in the news due to its design, the viral image showing a bus stuck on the bridge was created using AI tools. Therefore, the image circulating on social media is misleading.

The European Union (EU) has made trailblazing efforts regarding data protection and privacy, coming up with the most comprehensive and detailed regulation of its kind, the General Data Protection Regulation (GDPR). As countries worldwide continue to grapple with framing their own laws, the EU is already taking on tech giants and focusing on the road ahead. Its contentious history with Meta makes the launch of Meta’s AI assistant in the EU a complex process, shaped by stringent data privacy regulations and ongoing debates over copyright and ethical AI practices. This development is considered important because the EU and Meta have clashed before (including fines and pushback concerning Meta’s services), broadly over data privacy compliance under the GDPR, antitrust concerns targeting ads and Facebook Marketplace activities, and content moderation with respect to the spread of misinformation.
Privacy and Data Protection Concerns
A significant part of operating Large Language Models (LLMs) is training them on a large repository of data from which they can source plausible answers. If a model cannot find relevant information, or a request falls outside the scope it is programmed to answer, it will still attempt a response, but with reduced accuracy. Meta's initial plan to train its AI models using publicly available content from adult users in the EU was set back by privacy regulators. The Irish Data Protection Commission (DPC), acting as Meta's lead privacy regulator in Europe, raised the issue and requested a delay in the rollout to assess its compliance with the GDPR. It has raised similar concerns about Grok, the AI tool of X, to assess whether EU users’ data was lawfully processed for training it.
In response, Meta stalled the release of the feature for around a year, agreed to exclude private messages and data from users under the age of 18, and implemented an opt-out mechanism for users who do not wish their public data to be used for AI training. This approach aligns with GDPR requirements, which mandate a clear legal basis for processing personal data, such as explicit consent or a demonstrated legitimate interest, along with the option to withdraw consent at a later stage. The service currently available is a text-based assistant that cannot generate images, but it can assist with brainstorming, planning, and answering queries using web-based information. Meta has assured users that it will expand and explore further AI features in the near future as it continues to cooperate with regulators.
Regulatory Environment and Strategic Decisions
The EU's regulatory landscape, characterised by the GDPR and the forthcoming AI Act, presents challenges for tech companies like Meta. Citing the "unpredictable nature" of EU regulations, Meta has decided not to release its multimodal Llama AI model—capable of processing text, images, audio, and video—in the EU. This decision underscores the tension between innovation and regulatory compliance, as companies navigate the complexities of deploying advanced AI technologies within strict legal frameworks.
Implications and Future Outlook
Meta's experience highlights the broader challenges faced by AI developers operating in jurisdictions with robust data protection laws. The most critical issue for now is to strike a balance between leveraging user data for AI advancement and respecting individual privacy rights. As the EU continues to refine its regulatory approach to AI, companies need to adapt their strategies to ensure compliance while fostering innovation. Stringent measures and regular assessments also help keep big tech companies accountable, ensuring they serve the public as well as their profits.
References:
- https://thehackernews.com/2025/04/meta-resumes-eu-ai-training-using.html
- https://www.thehindu.com/sci-tech/technology/meta-to-train-ai-models-on-european-users-public-data/article69451271.ece
- https://about.fb.com/news/2025/04/making-ai-work-harder-for-europeans/
- https://www.theregister.com/2025/04/15/meta_resume_ai_training_eu_user_posts/
- https://noyb.eu/en/twitters-ai-plans-hit-9-more-gdpr-complaints
- https://www.businesstoday.in/technology/news/story/meta-ai-finally-comes-to-europe-after-a-year-long-delay-but-with-some-limitations-468809-2025-03-21
- https://www.bloomberg.com/news/articles/2025-02-13/meta-opens-facebook-marketplace-to-rivals-in-eu-antitrust-clash
- https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_5801
- https://www.thehindu.com/sci-tech/technology/european-union-accuses-facebook-owner-meta-of-breaking-digital-rules-with-paid-ad-free-option/article68358039.ece
- https://www.theregister.com/2025/04/14/ireland_investigation_into_x/
- https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations
- https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies are capable of creating manipulative audio/video content, propagating political propaganda, defaming individuals, and inciting societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the election process, and even spark violence. Addressing misinformation includes expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat to the exploitation of content that already exists on the internet. One of the main examples is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track what is true and what is false.
The netizens in India are greatly influenced by viral content on social media. AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense of the word does not automatically guarantee one the ability to parse through the nuances of social media content authenticity and impact. Literacy, be it social media literacy or internet literacy, is under attack and one of the main contributors to this is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation that can be found are related to elections, public health, and communal issues. These issues have one common factor that connects them, which is that they evoke strong emotions in people and as such can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability and even violence. Such developments lead to public mistrust in the authorities and institutions, which is dangerous in any economy, but even more so in a country like India which is home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI's entry into autonomous content production and language creation is closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and has highlighted the need to revisit traditional journalistic principles.
We have seen a number of examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content at massive scale makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Therefore, developing strategies catering to a multilingual population is necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that is specific in its application to AI-generated content. It should include stricter penalties for the origination and dissemination of fake content, in proportion to its consequences. The framework should establish clear and concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Running digital literacy programs that empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling individuals to better discern real content from fake.
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks posed scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, and digital defence frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Assembly elections are due to be held in Assam later this year, with polling likely in April or May. Ahead of the elections, a video claiming to be an Aaj Tak news bulletin is being widely circulated on social media.
In the viral video, Aaj Tak anchor Rajiv Dhoundiyal is allegedly seen stating that a leaked intelligence report has issued a warning for the ruling Bharatiya Janata Party (BJP) in Assam. The clip claims that according to this purported report, the BJP may suffer significant losses in the upcoming Assembly elections. Several social media users sharing the video have also claimed that the alleged intelligence report signals the possible removal of Assam Chief Minister Himanta Biswa Sarma from office.
However, an investigation by the Cyber Peace Foundation found the viral claim to be false. Our probe clearly established that no leaked intelligence report related to the Assam Assembly elections exists.
Further, Aaj Tak has neither published nor broadcast any such report on its official television channel, website, or social media platforms. The investigation also revealed that the viral video itself is not authentic and has been created using deepfake technology.
Claim
On social media platform Facebook, a user shared the viral video claiming that the BJP has been pushed on the back foot following organisational changes in the Congress—appointing Priyanka Gandhi Vadra as chairperson of the election screening committee and Gaurav Gogoi as the Assam Pradesh Congress Committee president. The post further claims that an Intelligence Bureau report predicts that the current Assam government will not return to power.
(Link to the post, archive link, and screenshots are available.)
Fact Check:
To verify the claim, we first searched for reports related to any alleged leaked intelligence assessment concerning the Assam Assembly elections using relevant keywords. However, no credible or reliable reports supporting the claim were found. We then reviewed Aaj Tak’s official website, social media pages, and YouTube channel. Our examination confirmed that no such news bulletin has been published or broadcast by the network on any of its official platforms.
- https://www.facebook.com/aajtak/?locale=hi_IN
- https://www.instagram.com/aajtak/
- https://x.com/aajtak
- https://www.youtube.com/channel/UCt4t-jeY85JegMlZ-E5UWtA
To further verify the authenticity of the video, its audio was scanned using the deepfake voice detection tool Hive Moderation.
The analysis rated the voice heard in the video as 99 per cent likely to be AI-generated, clearly indicating that the audio is not genuine and was artificially created.

Additionally, the video was analysed using another AI detection tool, Aurigin AI, which also identified the viral clip as AI-generated.
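Requiring agreement between independent detectors before calling a clip synthetic, as done above with Hive Moderation and Aurigin AI, guards against any single model misfiring. The cross-check logic can be sketched as follows; note that the detector names, score format, and threshold below are illustrative assumptions, not the actual API responses of either tool.

```python
# Toy cross-check of independent AI-content detectors. Real services
# such as Hive Moderation or Aurigin AI return their own score formats;
# the dictionaries below are illustrative placeholders, not real output.

def verdict(scores, threshold=0.9, min_agreeing=2):
    """Label content 'likely AI-generated' only when enough
    independent detectors exceed the confidence threshold."""
    agreeing = sorted(name for name, s in scores.items() if s >= threshold)
    if len(agreeing) >= min_agreeing:
        return "likely AI-generated (flagged by: " + ", ".join(agreeing) + ")"
    return "inconclusive - needs further review"

# Scores mimicking the case above: both tools highly confident.
print(verdict({"detector_a": 0.99, "detector_b": 0.97}))

# A single confident detector alone is not treated as proof.
print(verdict({"detector_a": 0.99, "detector_b": 0.42}))
```

The design choice here mirrors standard fact-checking practice: automated scores trigger a verdict only when corroborated, and anything short of agreement is routed to human review rather than published as fact.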

Conclusion:
The investigation clearly establishes that there is no leaked intelligence report predicting BJP’s defeat in the Assam Assembly elections. Aaj Tak has not published or broadcast any such content on its official platforms. The video circulating on social media is not authentic and has been created using deepfake technology to mislead viewers.