#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A post went viral on social media with misleading captions claiming that a National Highway with large bridges was being built along a mountainside in Jammu and Kashmir. However, our investigation shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A circulating video claims to show National Highway 14 being constructed along a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the image, a Reverse Image Search was carried out. The under-construction road, falsely linked to Jammu and Kashmir, was traced to a different location: the G6911 Ankang-Laifeng Expressway in China. This highlights the need to verify information before sharing it.


Conclusion:
The viral claim that the under-construction highway is in Jammu and Kashmir is false; the footage is actually from China. Misinformation like this can mislead the public, so take a brief moment to verify the facts before sharing viral posts and rely on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market to reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. But this development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses. These range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
A significant source of hallucination in machine-learning models is bias in the input data they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
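To make the adversarial-attack idea concrete, here is a toy sketch (not drawn from any real system discussed above) using a hypothetical logistic-regression classifier: perturbing each input feature slightly in the direction of the model's weights is enough to flip its prediction, mirroring how crafted inputs can steer a model's output.

```python
import numpy as np

# Toy "trained model": fixed logistic-regression weights, purely illustrative.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the model's probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# A benign input the model assigns to class 0 (probability below 0.5).
x = np.array([-0.5, 0.4, 0.2])

# FGSM-style perturbation: nudge every feature by epsilon in the direction
# that raises the class-1 score. For logistic regression that direction is
# simply the sign of each weight.
eps = 0.6
x_adv = x + eps * np.sign(w)

print(predict(x))      # low probability: classified as class 0
print(predict(x_adv))  # probability above 0.5: the prediction has flipped
```

Real attacks on deep models work the same way in spirit, but compute the perturbation direction from the gradient of the loss with respect to the input rather than from the weights directly.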
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises creating ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin of error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring that users are informed about AI's capabilities and limitations, with transparent communication as the key.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establishing robust liability mechanisms to address harms caused by AI-generated material, such as misinformation, thereby fostering trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI development offers immense opportunities, but it must proceed responsibly. Overregulation can stifle innovation; a lax approach, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21
Introduction
Against the dynamic backdrop of Mumbai, where the intersection of age-old markets and cutting-edge innovation is a daily reality, an initiative of paramount importance has begun to take shape within the hallowed walls of the Reserve Bank of India (RBI). This is not just a tweak, a nudge in policy, or a subtle refinement of protocols. What we're observing is nothing short of a paradigmatic shift, a recalibration of systemic magnitude, that aims to recalibrate the way India's financial monoliths oversee, manage, and secure their informational bedrock – their treasured IT systems.
On the 7th of November, 2023, the Reserve Bank of India, that bastion of monetary oversight and national fiscal stability, unfurled a new doctrine – the 'Master Direction on Information Technology Governance, Risk, Controls, and Assurance Practices.' A document comprehensive in its reach, it presents not merely an update but a consolidation of all previously issued guidelines, instructions, and circulars relevant to IT governance, plaited into a seamless narrative that extols virtues of structured control and unimpeachable assurance practices. Moreover, it grasps the future potential of Business Continuity and Disaster Recovery Management, testaments to RBI's forward-thinking vision.
This novel edict has been crafted with a target audience that spans the varied gamut of financial entities – from Scheduled Commercial Banks to Non-Banking Financial Companies, from Credit Information Companies to All India Financial Institutions. These are the juggernauts that keep the economic wheels of the nation churning, and RBI's precision-guided document is an unambiguous acknowledgment of the vital role IT holds in maintaining the heartbeat of these financial bodies. Here lies a riveting declaration that robust governance structures aren't merely preferred but essential to manage the landscape of IT-related risks that balloon in an era of ever-proliferating digital complexity.
Directive Structure
The directive's structure is a combination of informed precision and intuitive foresight. Its seven chapters are not simply a grouping of topics; they are the seven pillars upon which the temple of IT governance is to be erected. The introductory chapter does more than set the stage – it defines the very reality, the scope, and the applicability of the directive, binding the reader in an inextricable covenant of engagement and anticipation. It's followed by a deep dive into the cradle of IT governance in the second chapter, drawing back the curtain to reveal the nuanced roles and defined responsibilities bestowed upon the Board of Directors, the IT Strategy Committee, Senior Management, the IT Steering Committee, and the pivotal Head of IT Function.
As we move along to the third chapter, we encounter the nuts and bolts of IT Infrastructure & Services Management. This is not just a checklist; it is an orchestration of the management of IT services, third-party liaisons, the calculus of capacity management, and the nuances of project management. Here terms like change and patch management, cryptographic controls, and physical and environmental safeguards leap from the page – alive with earnest practicality, demanding not just attention but action.
Transparency deepens as we glide into the fourth chapter with its robust exploration of IT and Information Security Risk Management. Here, the demand for periodic dissection of IT-related perils is made clear, along with the edifice of an IT and Information Security Risk Management Framework, buttressed by the imperatives of Vulnerability Assessment and Penetration Testing.
The fifth chapter presents a tableau of circumspection and preparedness, as it waxes eloquent on the necessity and architecture of a well-honed Business Continuity Plan and a disaster-ready DR Policy. It is a paean to the anticipatory stance financial institutions must employ in a world fraught with uncertainty.
Continuing the narrative, the sixth chapter places the spotlight on Information Systems Audit, delineating the precise role played by the Audit Committee of the Board in ushering in accountability through an exhaustive IS Audit of the institution's virtual expanse.
And as we reach the final chapter, we're privy to the 'repeal and other provisions' of the directive, underscoring the interplay with other applicable laws and the interpretation a reader may draw from the directive's breadth.
Conclusion
To proclaim that this directive is a mere step forward in the RBI's exhaustive and assiduous efforts to propel India's financial institutions onto the digital frontier would be a grave understatement. What we are witnessing is the inception of a more adept, more secure, and more resilient financial sector. This directive is nothing less than a beacon, shepherding in an epoch of IT governance marked by impervious governance structures, proactive risk management, and an unyielding commitment to the pursuit of excellence and continuous improvement. This is no ephemeral shift - this is, indisputably, a revolutionary stride into a future where confidence and competence stand as the watchwords in navigating the digital terra incognita.
Introduction
Big Tech has been pushing back against regulatory measures, particularly those governing data handling practices, and X Corp (formerly Twitter) has taken a prominent stance in India. The platform has filed a petition against the Central and State governments, challenging content-blocking orders and opposing the Centre’s newly launched Sahyog portal. X Corp has further labelled the Sahyog portal a 'censorship portal' that enables government agencies to issue blocking orders using a standardized template.
The key regulations governing the tech space in India include the IT Act of 2000, IT Rules 2021 and 2023 (which stress platform accountability and content moderation), and the DPDP Act 2023, which intersects with personal data governance. This petition by the X Corp raises concerns for digital freedom, platform accountability, and the evolving regulatory frameworks in India.
Elon Musk vs Indian Government: Key Issues at Stake
The 2021 IT Rules, particularly Rule 3(1)(d) of Part II, outline intermediaries' obligations regarding ‘Content Takedowns’. Intermediaries must remove or disable access to unlawful content within 36 hours of receiving a court order or government notification. Notably, the rules do not require government takedown requests to be explicitly in writing, raising concerns about potential misuse.
X’s petition also focuses on the Sahyog Portal, a government-run platform that allows various agencies and state police to request content removal directly. X contends that failure to comply with such orders can expose intermediaries' officers to prosecution. This has sparked controversy, with the platform arguing that such provisions grant the government excessive control, potentially undermining free speech and fostering undue censorship.
The broader implications include geopolitical tensions, potential business risks for big tech companies, and significant effects on India's digital economy, user engagement, and platform governance. Balancing regulatory compliance with digital rights remains a crucial challenge in this evolving landscape.
The Global Context: Lessons from Other Jurisdictions
The EU's Digital Services Act (DSA) establishes a baseline 'notice and takedown' system. Under the Act, hosting providers, including online platforms, must enable third parties to notify them of illegal content, which they must promptly remove to retain their hosting defence. The DSA also mandates expedited removal processes for notifications from trusted flaggers, user suspension for those with frequent violations, and enhanced protections for minors. Additionally, hosting providers must adhere to specific content removal obligations, including the elimination of terrorist content within one hour and the deployment of technology to detect known or new CSAM and remove it.
In contrast to the EU, the US First Amendment protects speech from state interference but does not extend to private entities. Dominant digital platforms, however, significantly influence discourse by moderating content, shaping narratives, and controlling advertising markets. This dual role creates tension as these platforms balance free speech, platform safety, and profitability.
India has adopted a model closer to the EU's approach, emphasizing content moderation to curb misinformation, false narratives, and harmful content. Drawing from the EU's framework, India could establish third-party notification mechanisms, enforce clear content takedown guidelines, and implement detection measures for harmful content like terrorist material and CSAM within defined timelines. This would balance content regulation with platform accountability while aligning with global best practices.
Key Concerns and Policy Debates
As the issue stands, the main concerns that arise are:
- The need for transparency in government orders for takedowns, the reasons and a clear framework for why they are needed and the guidelines for doing so.
- The need for balancing digital freedom with national security and the concerns that arise out of it for tech companies. Essentially, the role platforms play in safeguarding the democratic values enshrined in the Constitution of India.
- The Karnataka HC's eventual ruling has the potential to redefine the principles upon which the intermediary guidelines function under Indian law.
Potential Outcomes and the Way Forward
While we await the Hon’ble Court’s directives and orders in response to the filed suit, and while the court's decision could favour either side or lead to a negotiated resolution, the broader takeaway is the necessity of collaborative policymaking that balances governmental oversight with platform accountability. This debate underscores the pressing need for a structured and transparent regulatory framework for content moderation. The case also highlights the importance of due process in content regulation and the need for legal clarity for tech companies operating in India. Ultimately, a consultative and principles-based approach will be key to ensuring a fair and open digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/elon-musks-x-sues-union-government-over-alleged-censorship-and-it-act-violations/article69352961.ece
- https://www.hindustantimes.com/india-news/elon-musk-s-x-sues-union-government-over-alleged-censorship-and-it-act-violations-101742463516588.html
- https://www.financialexpress.com/life/technology-explainer-why-has-x-accused-govt-of-censorship-3788648/
- https://thelawreporters.com/elon-musk-s-x-sues-indian-government-over-alleged-censorship-and-it-act-violations
- https://www.linklaters.com/en/insights/blogs/digilinks/2023/february/the-eu-digital-services-act---a-new-era-for-online-harms-and-intermediary-liability