#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A viral post on social media, shared with misleading captions, claims to show a National Highway with large bridges being built over a mountainside in Jammu and Kashmir. However, our investigation of the claim shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A circulating video claims to show the construction of National Highway 14 on a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the post, we carried out a Reverse Image Search. The image of an under-construction road, falsely linked to Jammu and Kashmir, is actually from a different location: the G6911 Ankang-Laifeng Expressway in China. This highlights the need to verify information before sharing it.
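Reverse image search engines typically match near-duplicate images by comparing compact perceptual fingerprints rather than raw pixels, which is why a re-encoded or slightly edited upload can still be traced back to its source. As a minimal, illustrative sketch (not the actual algorithm of any particular search engine), the following computes a simple "average hash" over toy grayscale pixel grids and compares two versions of the same image:

```python
def average_hash(pixels):
    """Compute a simple average hash: each bit is 1 if the pixel
    is brighter than the image's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two toy 4x4 grayscale "images": the second is a uniformly
# brightened copy of the first (e.g. a re-encoded upload).
img_a = [[10, 200, 30, 220],
         [15, 210, 25, 230],
         [12, 205, 28, 225],
         [11, 208, 26, 228]]
img_b = [[p + 5 for p in row] for row in img_a]

print(hamming_distance(average_hash(img_a), average_hash(img_b)))  # → 0
```

A uniform brightness shift moves every pixel and the mean by the same amount, so the hash is unchanged; real perceptual hashes (pHash, dHash) apply the same idea to downscaled real images.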


Conclusion:
The viral claim that the video shows an under-construction highway in Jammu and Kashmir is false. The footage is actually from China, not J&K. Misinformation like this can mislead the public; before sharing viral posts, take a brief moment to verify the facts. This highlights the importance of verifying information and relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Introduction
The spread of misinformation online has become a significant concern, with far-reaching social, political, economic and personal implications. The degree of vulnerability to misinformation differs from person to person, depending on psychological elements such as personality traits, familial background and digital literacy, combined with contextual factors like the information source, repetition, emotional content and topic. How to reduce misinformation susceptibility in real-world environments, where misinformation is regularly consumed on social media, remains an open question. Inoculation theory has been proposed as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed. Psychological inoculation campaigns on social media have proven effective at improving misinformation resilience at scale.
Prebunking has gained prominence as a means to preemptively build resilience against anticipated exposure to misinformation. This approach, grounded in Inoculation Theory, allows people to analyse and avoid manipulation without prior knowledge of specific misleading content by helping them build generalised resilience. We may draw a parallel here with broad spectrum antibiotics that can be used to fight infections and protect the body against symptoms before one is able to identify the particular pathogen at play.
Inoculation Theory and Prebunking
Inoculation theory is a promising approach to combat misinformation in the digital age. It involves exposing individuals to weakened forms of misinformation before encountering the actual false information. This helps develop resistance and critical thinking skills to identify and counter deceptive content.
Inoculation theory has been established as a robust framework for countering unwanted persuasion and can be applied within the modern context of online misinformation:
- Preemptive Inoculation: Preemptive inoculation entails exposing people to weakened forms of misinformation before they encounter actual false information. By being exposed to typical misinformation methods and strategies, individuals can build resistance and critical thinking abilities.
- Technique/Logic-Based Inoculation: Individuals can educate themselves about the manipulative strategies commonly used in online misinformation, such as emotionally manipulative language, conspiratorial reasoning, trolling and logical fallacies. Learning to recognise these tactics as indicators of misinformation is an important first step towards rejecting it. Through logical reasoning, individuals can see such tactics for what they are: attempts to distort the facts or spread misleading information. Those equipped to discern weak arguments and misleading methods can properly evaluate the reliability and validity of the information they encounter on the Internet.
- Educational Campaigns: Educational initiatives that increase awareness about misinformation, its consequences, and the tactics used to manipulate information can be useful inoculation tools. These programmes equip individuals with the knowledge and resources they need to distinguish between reputable and fraudulent sources, allowing them to navigate the online information landscape more successfully.
- Interactive Games and Simulations: Online games and simulations, such as ‘Bad News,’ have been created as interactive aids to protect people from misinformation methods. These games immerse users in a virtual world where they may learn about the creation and spread of misinformation, increasing their awareness and critical thinking abilities.
- Joint Efforts: Combining inoculation tactics with other anti-misinformation initiatives, such as accuracy primes, resilience-building features on social media platforms, and media literacy programmes, can improve the overall efficacy of efforts to combat misinformation. By deploying multiple measures simultaneously, expert organisations and individuals can build a stronger defence against the spread of misleading information.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
Implementation of the Inoculation Theory on social media platforms can be seen as an effective strategy point for building resilience among users and combating misinformation. Tech/social media platforms can develop interactive and engaging content in the form of educational prebunking videos, short animations, infographics, tip sheets, and misinformation simulations. These techniques can be deployed through online games, collaborations with influencers and trusted sources that help design and deploy targeted campaigns whilst also educating netizens about the usefulness of Inoculation Theory so that they can practice critical thinking.
The approach will inspire self-monitoring amongst netizens so that people consume information mindfully. It is a powerful tool in the battle against misinformation because it not only seeks to prevent harm before it occurs, but also actively empowers the target audience. In other words, Inoculation Theory helps build people up, and takes them on a journey of transformation from ‘potential victim’ to ‘warrior’ in the battle against misinformation. Through awareness-building, this approach makes people more aware of their own vulnerabilities and attempts to exploit them so that they can be on the lookout while they read, watch, share and believe the content they receive online.
Widespread adoption of Inoculation Theory may well inspire systemic and technological change that goes beyond individual empowerment: these interventions on social media platforms can be utilized to advance digital tools and algorithms so that such interventions and their impact are amplified. Additionally, social media platforms can explore personalized inoculation strategies, and customized inoculation approaches for different audiences so as to be able to better serve more people. One such elegant solution for social media platforms can be to develop a dedicated prebunking strategy that identifies and targets specific themes and topics that could be potential vectors for misinformation and disinformation. This will come in handy, especially during sensitive and special times such as the ongoing elections where tools and strategies for ‘Election Prebunks’ could be transformational.
Conclusion
Applying Inoculation Theory in the modern context of misinformation can be an effective method of establishing resilience against it, helping develop critical thinking and empowering individuals to discern fact from fiction in the digital information landscape. The need of the hour is to prioritize extensive awareness campaigns that encourage critical thinking, educate people about manipulation tactics, and pre-emptively counter false narratives. By learning in advance about the malicious content and malintent they may encounter in the future, people can build mental armour, or mental defences, against it. As they say, forewarned is forearmed.
References
- https://www.science.org/doi/10.1126/sciadv.abo6254
- https://stratcomcoe.org/publications/download/Inoculation-theory-and-Misinformation-FINAL-digital-ISBN-ebbe8.pdf

Introduction
Search Engine Optimisation (SEO) is a process through which one can improve website visibility on search engine platforms like Google, Microsoft Bing, etc. There is an implicit understanding that SEO suggestions or the links that are generated on top are the more popular information sources and, hence, are deemed to be more trustworthy. This trust, however, is being misused by threat actors through a process called SEO poisoning.
SEO poisoning is a method used by threat actors to attack users and obtain their information through manipulative techniques that push their desired link or web page to the top of search engine results. The end goal is to lure the user into clicking on and downloading their malware, presented in the garb of legitimate marketing or even a valid Google search result.
An active example of attempted SEO poisoning was discussed in a report by the Hindustan Times on 11th November 2024. It highlights that searching for certain keywords could make a user more susceptible to hacking, as hackers now target people who enter specific words or combinations into search engines. According to the report, users who looked up the query “Are Bengal cats legal in Australia?” and clicked on the top links had their personal information posted online soon after.
SEO Poisoning - Modus Operandi Of Attack
There are certain tactics used by attackers in SEO poisoning:
- Keyword stuffing- This method involves overloading a webpage with keywords, often irrelevant to its actual content, which helps the malicious website appear higher in search rankings.
- Typosquatting- This method involves creating domain names or links similar to the more popular and trusted websites. A lack of scrutiny before clicking would lead the user to download malware, from what they thought was a legitimate site.
- Cloaking- This method operates by showing different content to the search engine and the user. While the search engine sees what it assumes to be a legitimate website, the user is exposed to harmful content.
- Private Link Networks- Threat actors create a network of websites that link to one another in order to inflate the number of referral links, enabling their pages to rank higher on search engine platforms.
- Article Spinning- This method involves imitating content from other pre-existing, legitimate websites, while making a few minor changes, giving the impression to search engine crawlers of it being original content.
- Sneaky Redirects- This method redirects users, without their knowledge, to malicious websites instead of the ones they had intended to visit.
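To make the typosquatting tactic above concrete, here is a minimal sketch of how a defender might flag domains that are suspiciously similar to, but not exactly, a trusted name. The trusted-domain list and the similarity threshold are illustrative assumptions, not part of any specific product:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate domains to protect.
TRUSTED_DOMAINS = ["google.com", "microsoft.com", "github.com"]

def typosquat_score(domain, trusted):
    """Return the highest string similarity between `domain` and
    any trusted domain (1.0 means identical)."""
    return max(SequenceMatcher(None, domain, t).ratio() for t in trusted)

def looks_like_typosquat(domain, trusted=TRUSTED_DOMAINS, threshold=0.85):
    # Exact matches are legitimate; high-but-imperfect similarity
    # (e.g. one letter added or swapped) is a red flag.
    if domain in trusted:
        return False
    return typosquat_score(domain, trusted) >= threshold

print(looks_like_typosquat("gooogle.com"))  # True: one letter added
print(looks_like_typosquat("example.org"))  # False: not close to any trusted name
```

Production Digital Risk Monitoring tools use richer signals (homoglyphs, WHOIS age, certificate data), but the core idea of comparing against a protected-brand list is the same.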
CyberPeace Recommendations
- Employee Security Awareness Training: Security awareness training can help employees familiarise themselves with tactics of SEO poisoning, encouraging them to either spot such inconsistencies early on or even alert the security team at the earliest.
- Tool usage: Companies can use Digital Risk Monitoring tools to catch instances of typosquatting. Endpoint Detection and Response (EDR) tools also help monitor endpoint activity and assess user actions during security breaches to trace the source of an affected file.
- Internal Security Measures: Refer to lists of Indicators of Compromise (IOCs), which include URLs showing evidence of suspicious website behaviour, and use them to exercise caution. Deploying Web Application Firewalls (WAFs) to detect and mitigate malicious traffic is also helpful.
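As an illustration of the IOC-based check described above, the sketch below matches a URL's hostname against a small, hypothetical IOC domain list (real deployments would consume curated threat-intelligence feeds rather than a hard-coded set):

```python
from urllib.parse import urlparse

# Hypothetical IOC feed entries: domains flagged in threat reports.
IOC_DOMAINS = {"malicious-example.net", "fake-login-portal.com"}

def is_flagged(url, ioc_domains=IOC_DOMAINS):
    """Return True if the URL's host is a flagged domain or a
    subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ioc_domains)

print(is_flagged("https://login.fake-login-portal.com/signin"))  # → True
print(is_flagged("https://example.com/home"))                    # → False
```

A WAF or secure web gateway applies essentially this lookup inline, on every outbound request, with the IOC set refreshed continuously.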
Conclusion
The nature of SEO poisoning is such that it inherently promotes the spread of misinformation and facilitates cyberattacks. By misrepresenting the legitimacy of links and the content they display in order to lure users into clicking on them, it puts personal information under threat. Since people trust their favoured search engines and awareness of such tactics is low, one must exercise caution while clicking on links that appear popular, even when they are served by trusted search engines.
References
- https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cyber-attack/what-is-seo-poisoning/
- https://www.vectra.ai/topics/seo-poisoning
- https://www.techtarget.com/whatis/definition/search-poisoning
- https://www.blackberry.com/us/en/solutions/endpoint-security/ransomware-protection/seo-poisoning
- https://www.coalitioninc.com/blog/seo-poisoning-attacks
- https://www.sciencedirect.com/science/article/abs/pii/S0160791X24000186
- https://www.repindia.com/blog/secure-your-organisation-from-seo-poisoning-and-malvertising-threats/
- https://www.hindustantimes.com/technology/typing-these-6-words-on-google-could-make-you-a-target-for-hackers-101731286153415.html

INTRODUCTION:
The Ministry of Defence has recently designated the Additional Directorate General of Strategic Communication in the Indian Army as the nodal officer authorised to send removal requests and notices to social media intermediaries regarding posts containing unlawful content concerning the Army. Earlier, such requests were routed through the Ministry of Electronics and Information Technology (MeitY). The new designation allows the Army to bypass the old process and send notices directly (as deemed appropriate by the government and its agency). Let us look at the legal framework that allows this and its policy implications.
BACKGROUND AND LEGAL FRAMEWORK:
Section 69 of the IT Act, 2000 gives the government the power to issue directions for the interception, monitoring or decryption of any data/information through any computer resource. Such directions may be issued on six grounds:
- Upholding the sovereignty or integrity of India
- Security of the state
- Defence of India
- Friendly relations with foreign states
- Public order or for preventing incitement of any cognisable offence
- Investigations of offences related to the aforementioned reasons
Section 79(3)(b) of the Information Technology Act, 2000 is another aspect of the law relating to the removal of data upon notification. Section 79 grants intermediaries (including internet service providers and social media platforms) safe harbour from liability for content posted by third parties/users on their platforms. This protection, however, applies only so long as the intermediary, upon receiving actual knowledge or a notification from the appropriate government or its agency that data on its platform is being used for unlawful acts, promptly removes that data without tampering with evidence.
PLAUSIBLE REASONS FOR POLICY DECISION:
Cases related to the Indian Army are sensitive for a number of reasons, rooted in the fact that they directly pertain to the nation's security, integrity and sovereignty. The impact of misinformation and disinformation is almost instantaneous, and the stakes are high in any circumstance but exceptionally so when it comes to the Armed Forces and the nation's security. A mechanism to tackle cases of such sensitivity should allow for quick action by the authorities. With the ability to notify intermediaries directly rather than through another ministry, the Army can now deal with these concerns promptly as and when they arise. One immediate benefit of this change is that the forces can quickly respond when foreign states and actors with malicious intent put out information that can harm the nation's interests, image and integrity.
This step helps the forces counter misinformation, ensure national security and address online propaganda. An example of sensitive content about the Army leading to legal intervention is the case of the Delhi-based magazine The Caravan. The Defence Ministry, along with the Intelligence Bureau and the Jammu and Kashmir Police, ordered the publication to remove an article alleging the murder and torture of civilians by the Indian Army in Jammu and Kashmir, citing the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The magazine challenged the instruction in the courts.
CONCLUSION:
This move brings potential benefits along with risks, and the focus should always be on maintaining a balanced approach. Transparency and accountability are imperative, and checks on related guidelines to prevent misuse while simultaneously protecting national security should be at the centre of the policy approach. Misinformation in and about the armed forces must be dealt with immediately.
REFERENCES:
- https://www.hindustantimes.com/india-news/army-can-now-directly-issue-notices-to-remove-online-posts-101730313177838.html
- https://www.hindustantimes.com/india-news/inside-79-3-b-the-content-blocking-provision-with-many-legal-grey-areas-101706987924882.html
- https://www.thehindu.com/news/national/govt-orders-magazine-to-take-down-article-on-army-torture-and-murder-in-jammu/article67840790.ece
- https://myind.net/Home/viewArticle/army-gains-authority-to-directly-issue-notice-to-take-down-online-posts