Appeals under the Digital Personal Data Protection Bill, 2023 will be heard by the Telecom Disputes Settlement and Appellate Tribunal (TDSAT). The 2023 version makes several changes to the 2022 draft: the removal of deemed consent, a change in the appellate mechanism, no change in delegated legislation, and changes relating to data breaches. Among other changes, the Digital Personal Data Protection Bill, 2023 now provides for a negative list of countries to which personal data cannot be transferred.
New Version of the DPDP Bill
The Digital Personal Data Protection Bill has a new version, with three major changes from the 2022 draft. The first is the removal of deemed consent: under deemed consent, providing personal data once amounted to consent to its processing for any purpose, so the new version proposes that there shall be no deemed consent and that personal data may be processed without explicit consent only for a limited set of purposes:
In the interest of the sovereignty and integrity of India, and national security
For the issuance of subsidies, benefits, services, certificates, licences, permits, etc.
To comply with any judgment or order under the law
To protect, assist, or provide service in a medical or health emergency, a disaster situation, or to maintain public order
In relation to an employee and his/her rights
The 2023 version now includes an appeals mechanism
It states that the Board will have the authority to issue directives for data breach remediation or mitigation, investigate data breaches and complaints, and levy financial penalties. It would be authorised to submit complaints to alternative dispute resolution, accept voluntary undertakings from data fiduciaries, and advise the government to prohibit a data fiduciary’s website, app, or other online presence if the terms of the law were regularly violated. The Telecom Disputes Settlement and Appellate Tribunal will hear any appeals.
The other change concerns delegated legislation: one criticism of the 2022 draft was that it gave the government extensive rule-making powers, a concern the committee also raised with the ministry. The committee wants delegated legislation to be confined to provisions that cannot be fully defined within the scope of the Bill itself.
The other major change in the new version concerns data breaches: there will be no compensation for a data breach. This is a significant concern for victims; a victim who suffers a data breach and approaches the relevant court or authority will not be awarded compensation for the loss suffered.
Need for Changes under the DPDP
There is a need for these changes in digital personal data protection. Take deemed consent: by ‘deeming’ consent for subsequent uses, your data may be used for purposes other than those for which it was provided, and since there is no provision for a mandatory notice informing you of this, you may never even come to know about it.
Conclusion
The 2022 draft of the Digital Personal Data Protection Bill required changes to meet the needs of an evolving digital landscape. The removal of deemed consent will ultimately protect the data of the data principal, which will be used or processed only for the purpose for which consent was given. The change in the appellate mechanism is also crucial, as it provides a clear route for addressing appeals. However, the absence of compensation for data breaches is detrimental to the interests of victims.
Artificial Intelligence (AI) provides a varied range of services and continues to attract interest and experimentation. It has altered how we create and consume content: specific prompts can now be used to create desired images, enhancing storytelling and even education. However, as this content can influence public perception, its potential to cause misinformation must be noted as well. The realistic nature of the images can make them hard for the untrained eye to discern as artificially generated. Because AI operates by analysing the data it was previously trained on, a lack of contextual knowledge and human biases (introduced while framing prompts) also come into play. The stakes are higher when dabbling with subjects such as history, as there is a fine line between creating content intended as mere entertainment and spreading misinformation when biases and lapses in veracity are left unchecked. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, generated images and videos of various historical moments, along with the points of view of the people present, have been floating all over the internet. Some of them include the streets of London during the Black Death in the 1300s and the eruption of Mount Vesuvius at Pompeii. Hogne and Dan, two creators who operate the TikTok accounts POV Lab and Time Traveller POV, say they create such videos because seeing the past through a first-person perspective is an interesting way to bring history back to life while highlighting its most striking parts, helping the audience learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period-specific details. For their part, the artists admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to "present-ist" bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models primarily rely on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The idea is to spark interest rather than replace genuine historical facts, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Apart from this, chatbots like Hello History and Character.ai, which enable simulated interactions with historical figures, have also piqued curiosity.
Although it makes for an interesting perspective, one cannot ignore that our inherent biases play a role in how we perceive the information presented. Dangerous consequences include feeding into conspiracy theories and the erasure of facts, as the information is geared particularly toward garnering attention and providing entertainment. Furthermore, the exposure of such content to an impressionable audience with shorter attention spans increases the gravity of the matter. In such cases, information about the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to creating misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill that mandates the labelling of AI-generated images; failure to do so would warrant massive fines (up to $38 million or 7% of a company's turnover). The idea is to ensure that creators label their content, which would help distinguish artificially created images from those that are not.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
Media Literacy and Awareness – In this day and age critical thinking and media literacy among consumers of content is imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove to be helpful.
AI Transparency and Labeling – Implementing regulations similar to Spain's labelling bill could guide people who have yet to learn to tell AI-generated content apart from the rest.
Ethical AI Development – AI developers must prioritize ethical considerations in training using diverse and historically accurate datasets and sources which would minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures in the early stages, we can harness AI's potential while safeguarding the integrity of, and trust in, generated images and their sources.
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of Artificial Intelligence to create fake explicit images and videos for this end.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.
The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
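Production detection systems typically rely on trained forensic classifiers that spot generation artefacts, which is beyond a short example. One much simpler building block such pipelines often include, though, is matching new uploads against a blocklist of already-identified manipulated images. The sketch below illustrates that pattern only, using a perceptual "average hash" on toy grayscale pixel grids; all names and data here are hypothetical, not any platform's real API.

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel, set when that pixel is
    brighter than the image's mean brightness. Near-duplicate images
    produce hashes that differ in only a few bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of bit positions at which two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_flagged(pixels, known_hashes, max_distance=2):
    """Flag an upload whose hash is close to any known manipulated image."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= max_distance for known in known_hashes)

# Toy 3x3 grayscale "images": a known manipulated image and a slightly
# re-encoded copy of it, as might be re-uploaded to another platform.
known_image = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]
upload = [[12, 198, 10], [200, 12, 200], [10, 200, 12]]
blocklist = [average_hash(known_image)]
print(is_flagged(upload, blocklist))  # True: near-duplicate of known content
```

Hash matching like this only catches re-uploads of content already known to be manipulated; detecting freshly generated deepfakes still requires the trained classifiers mentioned above.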
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
This report is the collaborative outcome of insights derived from the CyberPeace Helpline's operational statistics and the CyberPeace Research Team. Covering helpline case trends for May 2025, the report identifies recurring trends, operational challenges, and strategic opportunities. The objective is to foster research-driven solutions that enhance the overall efficacy of the helpline.
Executive Summary:
This report summarizes the cybercrime cases reported in May, offering insights into case types, gender distribution, resolution status, and geographic trends.
As per our analysis, Financial Fraud was the most reported cyber fraud, making up 43% of cases, followed by Cyberbullying (26%) and Impersonation (14%). Less frequent but serious issues included Sexual Harassment, Sextortion, Hacking, Data Tampering, and Cyber Defamation, each accounting for 3–6%, highlighting a mix of financial and behavioural threats. The gender distribution was fairly balanced, with 51% male and 49% female respondents. While both genders were affected by major crimes like financial fraud and cyberbullying, some categories, such as sexual harassment, reflected more gender-specific risks, indicating the need for gender-responsive policies and support.
Regarding case status, 60% remain under follow-up while 40% have been resolved, reflecting strong case-handling efforts by the team.
The location-wise data shows higher case concentrations in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal, with significant reports also from Delhi, Telangana, Maharashtra, and Odisha. Reports from the northeastern and eastern states confirm the nationwide spread of cyber incidents. In conclusion, the findings point to a growing need for enhanced cybersecurity awareness, preventive strategies, and robust digital safeguards to address the evolving cyber threat landscape across India.
Cases Received in May:
As per the given dataset, the following types of cases were reported to our team during the month of May:
💰 Financial Fraud – 43%
💬 Cyber Bullying – 26%
🕵️♂️ Impersonation – 14%
🚫 Sexual Harassment – 6%
📸 Sextortion – 3%
💻 Hacking – 3%
📝 Data Tampering – 3%
🗣️ Cyber Defamation – 3%
The chart illustrates various cybercrime categories and their occurrence rates. Financial Fraud emerges as the most common, accounting for 43% of cases, highlighting the critical need for stronger digital financial security. This is followed by Cyber Bullying at 26%, reflecting growing concerns around online harassment, especially among youth. Impersonation ranks third with 14%, involving identity misuse for deceitful purposes. Less frequent but still serious crimes such as Sexual Harassment (6%), Sextortion, Hacking, Data Tampering, and Cyber Defamation (each 3%) also pose significant risks to users’ privacy and safety. Overall, the data underscores the need for improved cybersecurity awareness, legal safeguards, and preventive measures to address both financial and behavioral threats in the digital space.
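A breakdown like the one above is straightforward to compute from a list of raw case records. As a minimal sketch (the sample counts below are hypothetical, not the helpline's actual data), the whole-percent shares per category can be derived like this:

```python
from collections import Counter

def percentage_breakdown(case_types):
    """Share of each category among reported cases, rounded to whole
    percents and listed in descending order of frequency."""
    counts = Counter(case_types)
    total = sum(counts.values())
    return {category: round(100 * n / total)
            for category, n in counts.most_common()}

# Hypothetical mini-sample of ten case records (not real helpline data)
sample = ["Financial Fraud"] * 6 + ["Cyber Bullying"] * 3 + ["Impersonation"]
print(percentage_breakdown(sample))
# {'Financial Fraud': 60, 'Cyber Bullying': 30, 'Impersonation': 10}
```

Note that independent rounding of each share is why published percentages, like those above, may not sum to exactly 100.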
Gender-Wise Distribution:
👨 Male – 51%
👩 Female – 49%
The chart illustrates the distribution of respondents by gender. The data shows that Male participants make up 51% of the total, while Female participants account for 49%. This indicates a fairly balanced representation of both genders, with a slight majority of male respondents.
Gender-Wise Case Distribution:
The chart presents a gender-wise distribution of various cybercrime cases, offering a comparative view of how different types of cyber incidents affect males and females.
It highlights that both genders are significantly impacted by cybercrimes such as financial fraud and cyber bullying, indicating a widespread risk across the board.
Certain categories, including sexual harassment, cyber defamation, and hacking, show more gender-specific patterns of victimization, pointing to differing vulnerabilities.
The data suggests the need for gender-sensitive policies and preventive measures to effectively address the unique risks faced by males and females in the digital space.
These insights can inform the design of tailored awareness programs, support services, and intervention strategies aimed at improving cybersecurity for all individuals.
Major Location Wise Distribution:
The map visualization displays the location-wise distribution of reported cases across India, mapping cyber-related incidents geographically.
The map highlights the regional distribution of cybercrime cases across Indian states, with a higher concentration in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal. States like Delhi, Telangana, Maharashtra, and Odisha also show notable activity, indicating widespread cyber threats. Regions including Assam, Tripura, Bihar, Jharkhand, and Jammu & Kashmir further reflect the pan-India spread of such incidents. This distribution stresses the need for targeted cybersecurity awareness and stronger digital safeguards nationwide.
CyberPeace Advisory:
Use Strong and Unique Passwords: Create complex passwords using a mix of letters, numbers, and symbols. Avoid reusing the same password across multiple platforms.
Enable Multi-Factor Authentication (MFA): Add an extra layer of security by using a second verification step like an OTP or authentication app.
Keep Software Updated: Regularly update your operating system, apps, and security tools to protect against known vulnerabilities.
Install Trusted Security Software: Use reliable antivirus and anti-malware programs to detect and block threats.
Limit Information Sharing: Be cautious about sharing personal or sensitive details, especially on social media or public platforms.
Secure Your Network: Protect your Wi-Fi with a strong password and encryption. Avoid accessing confidential information on public networks.
Back Up Important Data: Regularly save copies of important files in secure storage to prevent data loss in case of an attack.
Stay Informed with Cybersecurity Training: Learn how to identify scams, phishing attempts, and other online threats through regular awareness sessions.
Control Access to Data: Give access to sensitive information only to those who need it, based on their job roles.
Monitor and Respond to Threats: Continuously monitor systems for unusual activity and have a clear response plan for handling security incidents.
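One item in the advisory above, multi-factor authentication, usually works through time-based one-time passwords generated by an authenticator app. As a rough illustration of the mechanism, here is a minimal sketch of RFC 6238 TOTP (built on RFC 4226 HOTP) using only the Python standard library; the secret shown is the RFCs' published test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HMAC-based one-time password (SHA-1, dynamic truncation)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, timestamp=None, step=30):
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    t = int(time.time()) if timestamp is None else timestamp
    return hotp(key, t // step)

# The RFCs' reference test secret; at t = 59 s the 6-digit code is 287082.
print(totp(b"12345678901234567890", timestamp=59))
```

Because the code depends only on the shared secret and the current 30-second window, a stolen password alone is not enough to log in, which is what makes MFA such an effective extra layer.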
CyberPeace Helpline mail ID: helpline@cyberpeace.net
The cybercrime cases reported in May highlight a diverse and evolving threat landscape across India. Financial fraud, cyber bullying, and impersonation are the most prevalent, affecting both genders almost equally, though some crimes like sexual harassment call for targeted gender-sensitive measures. With 60% of cases still under follow-up, the team’s efforts in investigation and resolution remain strong. Geographically, cyber incidents are widespread, with higher concentrations in several key states, demonstrating that no region is immune. These findings underscore the urgent need to enhance cybersecurity awareness, strengthen preventive strategies, and build robust digital safeguards. Proactive and inclusive approaches are essential to protect individuals and communities and to address the growing challenges posed by cybercrime nationwide.