# FactCheck: Delhi Metro Rail Corporation Price Hike
Executive Summary:
Recently, a viral social media post alleged that the Delhi Metro Rail Corporation Ltd. (DMRC) had increased ticket prices following the BJP’s victory in the Delhi Legislative Assembly elections. After thorough research and verification, we have found this claim to be misleading and entirely baseless. Authorities have asserted that no fare hike has been declared.
Claim:
Viral social media posts have claimed that the Delhi Metro Rail Corporation Ltd. (DMRC) increased metro fares following the BJP's victory in the Delhi Legislative Assembly elections.


Fact Check:
After thorough research, we conclude that the claims regarding a fare hike by the Delhi Metro Rail Corporation Ltd. (DMRC) following the BJP’s victory in the Delhi Legislative Assembly elections are misleading. Our review of DMRC’s official website and social media handles found no mention of any fare increase. Furthermore, the official X (formerly Twitter) handle of DMRC has clarified that no such price hike has been announced. We urge the public to rely on verified sources for accurate information and to refrain from spreading misinformation.

Conclusion:
Upon examining the alleged fare hike, it is evident that the increase pertains to Bengaluru, not Delhi. To verify this, we reviewed the official website of Bangalore Metro Rail Corporation Limited (BMRCL) and cross-checked the information with appropriate evidence, including relevant images. Our findings confirm that no fare hike has been announced by the Delhi Metro Rail Corporation Ltd. (DMRC).

- Claim: Delhi Metro price hike after the BJP’s victory in the Delhi Assembly elections
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Your iPhone isn’t just a device: it’s a central hub for almost everything in your life. From personal photos and videos to sensitive data, it holds it all. You rely on it for essential services, from personal to official communications, sharing of information, banking and financial transactions, and more. With so much critical information stored on your device, protecting it from cyber threats becomes essential. This is where the iOS Lockdown Mode feature comes in as a digital bouncer to keep cyber crooks at bay.
Apple introduced Lockdown Mode in 2022 as an optional security feature available on iPhone, iPad, and Mac devices. It works as an extreme, opt-in protection mechanism for the small segment of users who may be at higher risk of serious cyber threats and intrusions into their digital security. People such as journalists, activists, government officials, celebrities, cybersecurity professionals, law enforcement officers, and lawyers are among the intended beneficiaries of the feature. The data on their devices can be highly confidential, and a leak or compromise can cause significant disruption. Given how prevalent cyber attacks are in this day and age, the need for such a feature cannot be overstated. It provides an additional layer of defence by limiting certain functions of the device, thereby reducing the chances of the user being targeted in a digital attack.
How to Enable Lockdown Mode in Your iPhone
On an iPhone running iOS 16 or later, go to Settings > Privacy & Security > Lockdown Mode. Tap Turn On Lockdown Mode and read the information about the features that will become unavailable on your device. If you are satisfied, scroll down and tap Turn On Lockdown Mode again. Your iPhone will restart with Lockdown Mode enabled.
Easy steps to enable lockdown mode are as follows:
- Open the Settings app.
- Tap Privacy & Security.
- Scroll down, tap Lockdown Mode, then tap Turn On Lockdown Mode.
How Lockdown Mode Protects You
Lockdown Mode is a security feature that restricts certain apps and features so that they no longer function as usual. For example, your device will not automatically join unsecured Wi-Fi networks, and it will disconnect from a non-secure network when Lockdown Mode is activated. Many other features are affected because the system prioritises security over typical operational convenience. Since Lockdown Mode restricts certain features and activities, you can exclude a particular app or website in Safari from its restrictions; only exclude trusted apps or websites, and only if necessary.
References:
- https://support.apple.com/en-in/105120#:~:text=Tap%20Privacy%20%26%20Security.,then%20enter%20your%20device%20passcode
- https://www.business-standard.com/technology/tech-news/apple-lockdown-mode-what-is-it-and-how-it-prevents-spyware-attacks-124041200667_1.html
Introduction
Cyber slavery is a form of modern exploitation that begins with online deception and escalates into physical human trafficking. It has emerged as a serious threat in which individuals are exploited through digital means under coercive or deceptive conditions: offenders target innocent individuals and lure them with fake promises of employment or similar opportunities. Cyber slavery operates on a global scale, reaching vulnerable individuals worldwide through the internet, and represents a disturbing continuum of online manipulation leading to real-world abuse, in which victims are entrapped by false promises and subjected to severe human rights violations. It can take many different forms, such as coerced involvement in cybercrime, forced employment in online frauds, exploitation in the gig economy, or involuntary servitude. The issue has escalated to the point where Indians are being trafficked for jobs in countries like Laos and Cambodia. Recently, over 5,000 Indians were reported to be trapped in Southeast Asia, allegedly coerced into carrying out cyber fraud. Indian techies in particular were reportedly lured to Cambodia with offers of high-paying jobs, only to find themselves trapped in cyber fraud schemes, forced to work 16 hours a day under severe conditions. This is the harsh reality for thousands of Indian tech professionals lured under false pretences into employment in Southeast Asia, where they are forced to commit cyber crimes.
Over 5,000 Indians Held in Cyber Slavery and Human Trafficking Rings
India has rescued 250 of its citizens in Cambodia who were forced to run online scams, while more than 5,000 Indians remain stuck in Southeast Asia. The victims, mostly young and tech-savvy, are lured into illegal online work ranging from money laundering and crypto fraud to love scams, in which they pose as lovers online. Victims are often deceived about where they will be working, believing it to be Thailand or the Philippines. Instead, they are sent to Cambodia, where their travel documents are confiscated and they are forced to carry out a variety of cybercrimes, from stealing life savings to attacking international governmental and non-governmental organisations. The Indian embassy in Phnom Penh has also released an advisory warning Indian nationals about advertisements for fake jobs in the country, through which victims are coerced into undertaking online financial scams and other illegal activities.
Regulatory Landscape
Trafficking in Human Beings (THB) is prohibited under Article 23(1) of the Constitution of India. The Immoral Traffic (Prevention) Act, 1956 (ITPA) is the premier legislation for the prevention of trafficking for commercial sexual exploitation. Section 111 of the Bharatiya Nyaya Sanhita (BNS), 2023, is a comprehensive legal provision aimed at combating organised crime and will be useful in prosecuting people involved in such large-scale scams. India has also ratified bilateral agreements with several countries to facilitate intelligence sharing and coordinated efforts against transnational organised crime and human trafficking.
CyberPeace Policy Recommendations
● The misuse of technology has given rise to a new genre of cybercrimes in which criminals use social media platforms as a tool for targeting innocent individuals. Social media companies and regulatory authorities must work together to address newly emerging cybercrimes as they appear and to develop robust preventive measures against them.
● Despite the regulatory mechanisms in place, challenges remain, such as jurisdictional issues, difficulty of detection due to anonymity, and obstacles to investigation, which make cyber human trafficking a serious, evolving threat. International collaboration between countries is therefore encouraged, given the present situation in a technologically driven world. Robust legislation that addresses both national and international cases of human trafficking and imposes strict penalties on offenders must be enforced.
● Cybercriminals target innocent people by offering fake high-paying job opportunities, building trust and luring them in. All netizens should be aware of such tactics deployed by bad actors and learn to recognise their early signs. By staying vigilant and cross-verifying details against authentic sources, netizens can safeguard themselves from threats that can endanger their lives once they are trafficked and placed under restrictions. While the Indian government and its agencies are continuously working to rescue victims of cyber human trafficking and cyber slavery, they must further develop robust mechanisms enabling specialised government agencies to conduct rescue operations in a timely manner.
● Capacity building and support mechanisms must be encouraged by government entities, cybersecurity experts, and Non-Governmental Organisations (NGOs): empowering netizens to follow best practices while navigating the online landscape, providing helplines or help centres to report any suspicious activity or behaviour they encounter, and enabling them to feel safe on the Internet while building defences against cyber threats.
References:
- https://www.bbc.com/news/world-asia-india-68705913
- https://therecord.media/india-rescued-cambodia-scam-centers-citizens
- https://www.the420.in/rescue-indian-tech-workers-cambodia-cyber-fraud-awareness/
- https://www.dyami.services/post/intel-brief-250-indian-citizens-rescued-from-cyber-slavery
- https://www.mea.gov.in/human-trafficking.htm
- https://www.drishtiias.com/blog/the-vicious-cycle-of-human-trafficking-and-cybercrime

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work; their probability can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
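To make the "statistical next-word prediction" point concrete, here is a toy sketch using a hypothetical three-sentence corpus and simple bigram counts. Real LLMs use neural networks over vocabularies of tens of thousands of tokens, but the underlying principle is the same: the model emits a statistically likely continuation, with no notion of whether it is factually true.

```python
# Toy next-word predictor built from bigram counts, illustrating how
# LLMs pick likely continuations rather than verified facts.
# The corpus and words here are hypothetical examples.
from collections import Counter, defaultdict

corpus = ("the metro fare was hiked . "
          "the metro fare was unchanged . "
          "the claim was false").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

# "was" follows "fare" in both sentences, so it is the likeliest next word,
# regardless of whether the resulting claim ("fare was hiked") is true.
print(predict_next("fare"))  # prints "was"
```

The corpus contains both "fare was hiked" and "fare was unchanged"; the model happily continues either way, which is precisely why statistical likelihood is no guarantee of factual accuracy.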
Reports shared on TechCrunch note that when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions; this was not limited to that particular Large Language Model, but also affected similar ones like DeepSeek. Even more concerning are hallucinations in multimodal models like those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, developers are already aware of these failure modes and are actively exploring ways to reduce their probability. Some approaches include:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data.
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
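As a rough illustration of the RAG idea above, the sketch below uses a toy keyword-overlap retriever in place of real vector embeddings, and "generation" is reduced to assembling a grounded prompt. The documents, function names, and scoring are all hypothetical stand-ins for what a production system would do.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation.
# A real system would embed documents with a neural encoder and pass the
# grounded prompt to an LLM; here, word overlap stands in for the retriever.
import re

def tokenize(text: str) -> set:
    """Lowercase the text and return its set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, documents: list) -> str:
    """Anchor the answer in retrieved passages instead of the model's
    internal (and possibly hallucinated) knowledge."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using ONLY these sources:\n{context}\nQuestion: {query}"

docs = [
    "Lockdown Mode was introduced by Apple in 2022.",
    "RAG reduces hallucinations by grounding answers in retrieved text.",
    "Bananas are rich in potassium.",
]
prompt = build_grounded_prompt("How does RAG reduce hallucinations?", docs)
print(prompt)
```

The key design point is the final prompt: by instructing the model to answer only from the retrieved passages, its output can be checked against verifiable sources rather than taken on faith.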
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations