#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A viral post on social media, shared with misleading captions, claims to show a National Highway with large bridges being built along a mountainside in Jammu and Kashmir. However, our investigation shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A video circulating online claims to show National Highway 14 being constructed on a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the image, we carried out a Reverse Image Search. The image of an under-construction road, falsely linked to Jammu and Kashmir, proved inaccurate: the road is at a different location, the G6911 Ankang-Laifeng Expressway in China. This highlights the need to verify information before sharing it.
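Reverse image search engines typically work by comparing compact "perceptual hashes" of images rather than raw pixels, so visually similar frames match even after resizing or recompression. As an illustration only (not the specific tooling used in this fact check), here is a minimal difference-hash (dHash) sketch in pure Python operating on small grayscale pixel grids; the toy pixel values are invented for the example:

```python
def dhash(pixels):
    """Compute a difference hash: for each row, compare each pixel
    to its right neighbour; brighter-than-neighbour -> bit 1."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits: a small distance means a likely match."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x5 grayscale grids: img_b is img_a with slight brightness noise,
# img_c is an unrelated image.
img_a = [[10, 20, 30, 25, 5], [40, 35, 50, 60, 55],
         [90, 80, 70, 75, 85], [15, 25, 20, 30, 10]]
img_b = [[11, 21, 29, 26, 6], [41, 34, 51, 61, 54],
         [91, 81, 69, 76, 86], [16, 24, 21, 31, 9]]
img_c = [[5, 90, 10, 80, 15], [70, 20, 60, 30, 50],
         [25, 75, 35, 65, 45], [95, 5, 85, 15, 55]]

print(hamming(dhash(img_a), dhash(img_b)))  # small: visually similar
print(hamming(dhash(img_a), dhash(img_c)))  # large: different images
```

Because the hash encodes only brightness gradients, small lighting changes leave it unchanged while a different scene produces a distant hash, which is the core idea behind matching a viral frame against indexed images.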


Conclusion:
The viral claim showing an under-construction highway in Jammu and Kashmir is false. The footage is actually from China, not J&K. Misinformation like this can mislead the public, so before sharing viral posts, take a brief moment to verify the facts. This highlights the importance of verifying information and relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Discussions took place focused on cybersecurity measures, specifically addressing cybercrime in the context of emerging technologies such as Non-Fungible Tokens (NFTs), Artificial Intelligence (AI), and the Metaverse. Session 5 of the conference focused on the interconnectedness of the darknet and cryptocurrency and the challenges this poses for law enforcement agencies and regulators. The panel observed that understanding AI is necessary for enterprises, that current AI models have shortcomings but trustworthy AI is a goal worth pursuing, and that AI technology must be transparent.
Darknet and Cryptocurrency
The darknet refers to the hidden part of the internet where illicit activities have proliferated in recent years. It was initially developed to provide anonymity, privacy, and protection to specific individuals such as journalists, activists, and whistleblowers. However, it has now become a playground for criminal activities. Cryptocurrency, particularly Bitcoin, has been widely adopted on the darknet due to its anonymous nature, enabling money laundering and unlawful transactions.
Three major points emerge from this relationship: the integrated nature of the darknet and cryptocurrency, the need for regulations to prevent darknet-based crimes, and the importance of striking a balance between privacy and security.
Key Challenges:
- Integrated Relations: The darknet and cryptocurrency have evolved independently, with different motives and purposes. It is crucial to understand the integrated relationship between them and how criminals exploit this connection.
- Regulatory Frameworks: There is a need for effective regulations to prevent crimes facilitated through the darknet and cryptocurrency while striking a balance between privacy and security.
- Privacy and Security: Privacy is a fundamental right, and any measures taken to enhance security should not infringe upon individual privacy. A multistakeholder approach involving tech companies and regulators is necessary to find this delicate balance.
Challenges Associated with Cryptocurrency Use:
The use of cryptocurrency on the darknet poses several challenges. The risks associated with darknet-based cryptocurrency crimes are a significant concern. Additionally, regulatory challenges arise due to the decentralised and borderless nature of cryptocurrencies. Mitigating these challenges requires innovative approaches utilising emerging technologies.
Preventing Misuse of Technologies:
The discussion emphasised the need to stay a step ahead of those who seek to misuse these technologies, which were designed and developed for very different purposes, and to prevent their use for crime.
Monitoring the Darknet:
The darknet, as explained, is an elusive part of the internet that necessitates the use of a special browser for access. Initially designed for secure communication by the US government, its purpose has drastically changed over time. The darknet’s evolution has given rise to significant challenges for law enforcement agencies striving to monitor its activities.
Around 95% of the activity carried out on the darknet is associated with criminal acts, and estimates suggest that over 50% of global cybercrime revenue originates there. This implies that approximately half of all cybercrime is facilitated through the darknet.
The exploitation of the darknet has raised concerns regarding the need for effective regulation. Monitoring the darknet is crucial for law enforcement, national agencies, and cybersecurity companies. The challenges associated with the darknet’s exploitation and the criminal activities facilitated by cryptocurrency emphasise the pressing need for regulations to ensure a secure digital landscape.
Use of Cryptocurrency on the Darknet
Cryptocurrency plays a central role in the activities taking place on the darknet. The discussion highlighted its involvement in various illicit practices, including ransomware attacks, terrorist financing, extortion, theft, and the operation of darknet marketplaces. These applications leverage cryptocurrency’s anonymous features to enable illegal transactions and maintain anonymity.
AI's Role in De-Anonymizing the Darknet and Monitoring Challenges:
- 1. AI’s Potential in De-Anonymizing the Darknet
During the discussion, it was highlighted how AI could be utilised to help in de-anonymizing the darknet. AI’s pattern recognition capabilities can aid in identifying and analysing patterns of behaviour within the darknet, enabling law enforcement agencies and cybersecurity experts to gain insights into its operations. However, there are limitations to what AI can accomplish in this context. AI cannot break encryption or directly associate patterns with specific users, but it can assist in identifying illegal marketplaces and facilitating their takedown. The dynamic nature of the darknet, with new marketplaces quickly emerging, adds further complexity to monitoring efforts.
- 2. Challenges in Darknet Monitoring
Monitoring the darknet poses various challenges due to its vast amount of data, anonymous and encrypted nature, dynamically evolving landscape, and the need for specialised access. These challenges make it difficult for law enforcement agencies and cybersecurity professionals to effectively track and prevent illicit activities.
- 3. Possible Ways Forward
To address the challenges, several potential avenues were discussed. Ethical considerations, striking a balance between privacy and security, must be taken into account. Cross-border collaboration, involving the development of relevant laws and policies, can enhance efforts to combat darknet-related crimes. Additionally, education and awareness initiatives, driven by collaboration among law enforcement, government entities, and academia, can play a crucial role in combating darknet activities.
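The pattern-recognition idea in point 1 can be illustrated, in a deliberately simplified way, with a keyword-weighting sketch that flags marketplace-style listing text. Operational systems use trained machine-learning classifiers over far richer features; the indicator phrases, weights, threshold, and sample texts below are all invented for this illustration:

```python
# Toy illustration of pattern recognition for darknet monitoring:
# score short listing texts against weighted indicator phrases and
# flag those above a threshold. All terms and weights are invented.
INDICATORS = {"escrow": 2, "vendor": 2, "stealth shipping": 3,
              "btc": 1, "monero": 1, "reship": 2}

def risk_score(text):
    """Sum the weights of indicator phrases found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in INDICATORS.items() if phrase in lowered)

def flag_listings(listings, threshold=3):
    """Return (score, text) pairs meeting the threshold, highest first."""
    scored = [(risk_score(t), t) for t in listings]
    return sorted([s for s in scored if s[0] >= threshold], reverse=True)

samples = [
    "Trusted vendor, escrow accepted, stealth shipping worldwide",
    "Community forum rules and moderation guidelines",
    "Pay in btc or monero, reship guaranteed",
]
for score, text in flag_listings(samples):
    print(score, text)
```

The sketch also hints at the limitation noted in the session: as soon as sellers change their vocabulary, a static rule set misses them, which is why monitoring must continuously adapt.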
The panel also addressed questions from the audience:
- How can law enforcement agencies and regulators use AI to detect and prevent crimes involving the darknet and cryptocurrency? The panel answered that law enforcement officers should be AI- and technology-ready, and that upskilling programmes should be in place for this purpose.
- How should lawyers and the judiciary understand the problem and regulate it? The panel answered that AI should be assessed by its outcomes, and that the law has to be clear about what is acceptable and what is not.
- Is it possible to align AI with human intention? Can we create an ethical AI, rather than merely talking about using AI ethically? The panel answered that we must first understand how to behave ethically ourselves, since AI can outperform humans at many tasks and we have to learn how it works. Step one is to focus on our own ethical behaviour; step two is to bring that ethical aspect into software and technologies. Aligning AI with human intention and creating ethical AI remains a challenge, and the focus should be on ethical behaviour both in humans and in the development of AI technologies.
Conclusion
The G20 Conference on Crime and Security shed light on the intertwined relationship between the darknet and cryptocurrency and the challenges it presents to cybersecurity. The discussions emphasised the need for effective regulations, privacy-security balance, AI integration, and cross-border collaboration to tackle the rising cybercrime activities associated with the darknet and cryptocurrency. Addressing these challenges will require the combined efforts of governments, law enforcement agencies, technology companies, and individuals committed to building a safer digital landscape.

Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates who say the legislation turns businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The bill was prompted by the death of a Kansas teenager who died after unknowingly taking a fentanyl-laced pill he had purchased on Snapchat. The bill requires social media companies and other web communication providers to give the DEA users’ names and other information when the companies have “actual knowledge” that illicit drugs are being distributed on their platforms.
There is an urgent need to look into this matter, as platforms like Snapchat and Instagram are applications netizens use constantly. If such apps facilitate the sale of drugs, they risk becoming major drug-selling vehicles.
Threat to end-to-end encryption
End-to-end encryption has long been criticised by law enforcement for creating a “lawless space” that criminals, terrorists, and other bad actors can exploit for their illicit purposes. It is important for privacy, but it has drawn criticism because criminals also use it to commit cyber fraud and other cybercrimes.
Cases of drug peddling on social media platforms
Getting drugs on social media is as easy as calling an Uber. A survey found that access to illegal drugs on social media applications is “staggering”, which has contributed to the rising number of fentanyl overdoses, alongside suicide, gun violence, and accidents.
According to another survey, drug dealers use slang, emoticons, QR codes, and disappearing messages to reach customers while avoiding content monitoring measures on social networking platforms. Drug dealers are frequently active on numerous social media platforms, advertising their products on Instagram while providing their WhatsApp or Snapchat handles for queries, making it difficult for law enforcement officials to crack down on the transactions.
There is a need for social media platforms to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug cases occurring on their platforms, such as the Snapchat case mentioned above. In many other cases, dealers sell drugs through Instagram, Snapchat, and similar apps; typically, if Instagram blocks one account, they simply create another. Blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
It is important to report the cybercrime of selling drugs on social media platforms. Companies would only detect activity related to drugs being sold on their platforms, which enables them to identify bad actors and cybercriminals, and the detection would be limited to the particular activities on the applications where the sales occur. Because social media platforms lack regulations to govern them, their convenience makes them a major vehicle for drug sales.
Conclusion
Social media companies should be required to report such activities on their platforms immediately to the Drug Enforcement Administration so that the DEA can take the required steps, rather than merely blocking accounts, because blocking alone does not stop these drug markets from operating online. Proper reporting mechanisms are needed, along with social media regulations, since social media platforms strongly influence people.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes. There has been an alarming increase in the use of Artificial Intelligence to create fake explicit images or videos for sextortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. And the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulations. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers and the extortion demands in AI sextortion cases are a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.