#FactCheck - Visual of Jharkhand Police catching a truckload of cash and gold coins is AI-generated
Executive Summary:
An image circulating on social media claims to show a truck carrying money and gold coins impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing of CyberPeace verified the image and found it to be generated using artificial intelligence. There are no credible news articles supporting the claim that the police made such a seizure in Jharkhand. The image was checked using AI image detection tools, which confirmed it was AI-made. Readers are advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the posts, we began with a keyword search for news articles related to the claim. An incident of this scale would have been covered by most major media houses, yet we found no such reports. We then closely analysed the image for the anomalies usually found in AI-generated images, and found several.

The texture of the tree in the image appears blended, and the shadows of the people look unnatural, a common flaw in AI-generated images. A close look at the right hand of the elderly man in white attire shows that his thumb is blended into his clothing.
We then ran the image through an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To validate the AI fabrication, we checked the image with another AI detection tool named ‘ContentAtScale AI detection’, which detected it as 82% AI-generated.
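The verification workflow above, running the image through multiple detectors and treating agreement between them as stronger evidence, can be sketched as a simple aggregation rule. The detector names, scores, and threshold below are illustrative assumptions, not actual tool output or any tool's real API.

```python
def verdict(scores: dict[str, float], threshold: float = 0.7) -> str:
    """Aggregate per-detector AI-likelihood scores (0.0-1.0) into a verdict.

    An image is called likely AI-generated only when every detector
    consulted scores it at or above the threshold.
    """
    flagged = [name for name, score in scores.items() if score >= threshold]
    if scores and len(flagged) == len(scores):
        return "likely AI-generated"      # all detectors agree
    if flagged:
        return "inconclusive - detectors disagree"
    return "no AI signal detected"

# Illustrative scores echoing the article's two checks (not real API output):
print(verdict({"Hive Detector": 0.99, "ContentAtScale": 0.82}))  # likely AI-generated
```

Requiring agreement across independent detectors, rather than trusting a single score, mirrors the article's practice of cross-validating one tool's result with another.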

After validation of the viral post using AI detection tools, it is apparent that the claim is misleading and fake.
Conclusion:
The viral image of a truck allegedly impounded by the Jharkhand Police is AI-generated, and no credible source supports the claim made. Hence, the claim is false and misleading. The Research Wing of CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify news circulating on social media with such bogus claims before sharing it further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (formerly Twitter)
- Fact Check: Fake & Misleading
Related Blogs

According to Statista, the number of users in India's digital assets market is expected to reach 107.30 million by 2025 (Impacts of Inflation on Financial Markets, August 2023). India's digital asset market has been experiencing exponential growth, fueled by the increased adoption of cryptocurrencies and blockchain technology; this furthers the need for regulation. Digital assets include cryptocurrencies, NFTs, asset-backed tokens, and tokenised real estate.
India has defined digital assets under Section 2(47A) of the Income Tax Act, 1961. The Finance Act 2022-23 added the word 'virtual', making the term “Virtual Digital Assets” (VDAs). A “virtual digital asset” is any information, code, number, or token, created through cryptographic means or otherwise, by whatever name called, that provides a digital representation of value exchanged with or without consideration. A VDA should carry inherent value and represent a store of value or unit of account, usable in financial transactions or investments. VDAs can be stored, transferred, or traded in electronic form.
Digital Asset Governance: Update and Future Outlook
Indian regulators have been conservative in their approach towards digital assets: the Reserve Bank of India first issued directions against cryptocurrency transactions in 2018, a ban the Supreme Court set aside in 2020. The Cryptocurrency and Regulation of Official Digital Currency Bill, 2021 is an important milestone in the attempt to lay down a framework for an official digital currency issued by the Reserve Bank of India. While some digital assets show promise, such as Central Bank Digital Currencies (CBDCs) and blockchain-based financial applications, the Bill proposed a blanket prohibition on private cryptocurrencies.
However, in more recent trends, the landscape is changing as the RBI's CBDC is to provide a state-backed digital alternative to cash under a more structured regulatory framework. This move seeks to balance state control with innovation on investor safety and compliance, expecting to reduce risk and enhance security for investors by enacting strict anti-money laundering and know-your-customer laws. Highlighting these developments is important to examine how global regulatory trends influence India's digital asset policies.
Impact of Global Development on India’s Approach
Global regulatory developments influence Indian policies on digital assets. The European Union's Markets in Crypto-Assets (MiCA) regulation introduces a comprehensive framework for cryptocurrencies that could serve as an inspiration for India. MiCA covers crypto-assets not currently regulated by existing financial services legislation, and its focus on consumer protection and market integrity resonates with India's own concerns around digital assets, including fraud and price volatility. Additionally, evolving policies in the US, such as regulating crypto exchanges and classifying certain tokens as securities, could also inform India's regulatory posture.
Collaboration at the international level is also a chief contributing factor. India's regular participation in global forums like the G20 provides an opportunity to align its digital asset regulations with those of other countries, moving toward a more standardised and predictable framework for cross-border transactions. This can significantly help India, given that its large diaspora provides a critical inflow of remittances.
CyberPeace Outlook
Though digital assets offer many opportunities to India, challenges also exist. Cryptocurrency volatility affects investors, posing concerns over fraud and illicit dealings. A balance between the need for innovation and investor protection is paramount to avoid killing the growth of India's digital asset ecosystem with overly restrictive regulations.
Financial inclusion, efficient cross-border payments with low transaction costs, and the opening of investment opportunities are a few opportunities offered by digital assets. For example, the tokenisation of real estate throws open real estate investment to smaller investors. To strengthen the opportunities while addressing challenges, some policy reforms and new frameworks might prove beneficial.
CyberPeace Policy Recommendations
- Establish a regulatory sandbox for startups working in the area of blockchain and digital assets. This would allow them to test innovative solutions in a controlled environment with regulatory oversight minimising risks.
- Clear guidelines for the taxation of digital assets should be provided as they will ensure transparency, reduce ambiguity for investors, and promote compliance with tax regulations. Specific guidelines can be drawn from the EU's MiCA regulation.
- Initiatives aimed at improving consumer awareness about digital assets, their benefits, and their associated risks, such as workshops, online resources, and campaigns, should be implemented. Partnerships with global fintech firms would provide a great opportunity to learn best practices.
Conclusion
India is positioned at a critical juncture with respect to the debate on digital assets. The challenge which lies ahead is one of balancing innovation with effective regulation. The introduction of the Central Bank Digital Currency (CBDC) and the development of new policies signal a willingness on the part of the regulators to embrace the digital future. In contrast, issues like volatility, fraud, and regulatory compliance continue to pose hurdles. By drawing insights from global frameworks and strengthening ties through international forums, India can pave the way for a secure and dynamic digital asset ecosystem. Embracing strategic measures such as regulatory sandboxes and transparent tax guidelines will not only protect investors but also unlock the immense potential of digital assets, propelling India into a new era of financial innovation and inclusivity.
References
- https://www.weforum.org/agenda/2024/10/different-countries-navigating-uncertainty-digital-asset-regulation-election-year/
- https://www.acfcs.org/eu-passes-landmark-crypto-regulation
- https://www.indiabudget.gov.in/budget2022-23/doc/Finance_Bill.pdf
- https://www3.weforum.org/docs/WEF_Digital_Assets_Regulation_2024.pdf

Introduction
On May 21st, 2025, the Department of Telecommunications (DoT) launched the Financial Risk Indicator (FRI) feature, marking an important step towards safeguarding mobile phone users from the risks of financial fraud. This was developed as a part of the Digital Intelligence Platform (DIP), which facilitates coordination between stakeholders to curb the misuse of telecom services for conducting cyber crimes.
What is the Financial Risk Indicator (FRI)?
The FRI is a risk-based metric that categorises phone numbers as medium, high, or very high risk based on their past association with financial fraud. The data pool enabling this intelligence sharing includes the Digital Intelligence Unit (DIU) of the DoT, which circulates a list of disconnected mobile numbers (the Mobile Number Revocation List, or MNRL) to stakeholders, creating a network of checks and balances. Other inputs include:
- Intelligence from Non-Banking Finance Companies, and UPI (Unified Payment Interface) gateways.
- The Chakshu facility, a feature on the Sanchar Saathi portal that enables users to report suspected fraudulent communication (calls, SMS, WhatsApp messages).
- Complaints from the National Cybercrime Reporting Portal (NCRP) through the I4C (Indian Cybercrime Coordination Centre).
Some other initiatives taken up concerning securing against digital financial fraud are the Citizen Financial Cyber Fraud Reporting and Management System, the International Incoming Spoofed Calls Prevention System, among others.
A United Stance
The ease of payment brought by increasing digitisation has driven the growing usage of UPI platforms. With adoption, however, comes the responsibility of securing the digital payments infrastructure. As per a report by CNBC TV18, UPI fraud cases surged by 85% in FY24, rising from 7.25 lakh incidents in FY23 to 13.42 lakh in FY24. These cases involved a total value of ₹1,087 crore, compared to ₹573 crore in the previous year, and the number continues to rise.
Nevertheless, UPI platforms are taking their own initiative to combat such crimes. PhonePe, one of the most used digital payment interfaces as of January 2025 (Statista), has already incorporated the FRI into its PhonePe Protect feature; this blocks transactions with high-risk numbers and issues a warning before the user engages with numbers categorised as medium risk.
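The gating logic described above, blocking payments to high-risk numbers and warning before medium-risk ones, can be sketched as a simple lookup. The phone numbers, lookup table, and function below are illustrative assumptions, not the actual FRI dataset or PhonePe's implementation.

```python
# Hypothetical FRI-style risk gating, as described for PhonePe Protect.
# The numbers and risk labels here are invented for illustration only.
FRI_LOOKUP = {
    "+911111111111": "high",
    "+912222222222": "medium",
}

def gate_transaction(payee_number: str) -> str:
    """Return the action a payment app might take for a payee's number."""
    risk = FRI_LOOKUP.get(payee_number, "none")
    if risk == "high":
        return "block"   # refuse the transaction outright
    if risk == "medium":
        return "warn"    # show a warning and ask the user to confirm
    return "allow"       # no known risk signal for this number

print(gate_transaction("+911111111111"))  # block
print(gate_transaction("+912222222222"))  # warn
print(gate_transaction("+913333333333"))  # allow
```

The design point is that the risk signal is advisory and graded: only the highest tier hard-blocks a transaction, while the medium tier keeps the user in control with a confirmation step.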
CyberPeace Insights
The launch of a feature addressing the growing threat of financial fraud is crucial for creating a network of stakeholders to coordinate with law enforcement to better track and prevent crimes. Publicity of these measures will raise public awareness and keep end-users informed. A secure infrastructure for digital payments is necessary in this age, with a robust base mechanism that can adapt to both current and future threats.
References
- https://www.thehawk.in/news/economy-and-business/centre-launches-financial-fraud-risk-indicator-to-safeguard-mobile-users
- https://telanganatoday.com/government-launches-financial-fraud-risk-indicator-to-safeguard-mobile-users
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2130249#:~:text=What%20is%20the%20%E2%80%9CFinancial%20Fraud,High%20risk%20of%20financial%20fraud
- https://www.business-standard.com/industry/news/dot-launches-financial-fraud-risk-indicator-to-aid-cybercrime-detection-125052101912_1.html
- https://www.cnbctv18.com/business/finance/upi-fraud-cases-rise-85-pc-in-fy24-increase-parliament-reply-data-19514295.htm
- https://www.statista.com/statistics/1034443/india-upi-usage-by-platform/#:~:text=In%20January%202025%2C%20PhonePe%20held%20the%20highest,key%20drivers%20of%20UPI%20adoption%20in%20India
- https://telecom.economictimes.indiatimes.com/amp/news/policy/centre-notifies-draft-rules-for-delicensing-lower-6-ghz-band/121260887?nt

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit videos for sextortion, and the use of artificial intelligence to create fake explicit images or videos for this purpose has increased alarmingly.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have become sophisticated enough to produce seamless, realistic manipulations. The accessibility of AI tools and resources has also grown, with open-source software and cloud-based services readily available to anyone. This has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, in collaboration with technology companies.
Collaboration with social media platforms is also needed. Platforms and technology companies can strengthen and enforce community guidelines and policies against disseminating AI-generated explicit content, and foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
- Prevention and Awareness: raising awareness about AI sextortion helps individuals recognise risks and take precautions.
- Early Detection and Reporting: advanced detection tools can identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
- Legal Frameworks and Regulations: stronger legal frameworks criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
- Technological Solutions: developing tools and algorithms to detect and remove AI-generated explicit content makes it harder for perpetrators to carry out their schemes.
- International Cooperation: collaboration among law enforcement agencies, governments, and technology companies helps combat AI sextortion globally.
- Support for Victims: comprehensive support services, including counselling and legal assistance, help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
- Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
- Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
- Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, increasing the humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to reputation and well-being.
- Targeting Teenagers: Teenagers are particularly vulnerable to AI sextortion due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this content to manipulate and coerce them.
- Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
- Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.