MeitY’s Efforts in Combatting Deepfakes
As the Ministry of Electronics and Information Technology (MeitY) continues to invite proposals from academicians, institutions, and industry experts to develop frameworks and tools for AI-related issues through the IndiaAI Mission, it has also funded two AI projects dealing with deepfake-related matters, as per a status report submitted on 21st November 2024. The Delhi court also ordered the nomination of members to a nine-member committee constituted by MeitY on 20th November 2024 to address deepfake issues, and asked for a report within three months.
Funded AI projects:
The two projects funded by MeitY are:
- Fake Speech Detection Using Deep Learning Framework- Initiated in December 2021, this project focuses on detecting fake speech by creating a web interface for the detection software. It also includes building a speech verification software platform specifically designed for testing fake speech detection systems. It is set to end in December 2024.
- Design and Development of Software for Detecting Deepfake Videos and Images- This project was funded by MeitY from January 2022 to March 2024. It also involved the Centre for Development of Advanced Computing (C-DAC) centres at Kolkata and Hyderabad, which have developed a prototype tool capable of detecting deepfakes. Named FakeCheck, it is designed as a desktop application and a web portal and aims to detect deepfakes without requiring internet access. Reports suggest that it is currently in the testing phase, awaiting feedback.
Apart from these projects, MeitY has released an Expression of Interest inviting proposals in four other areas:
- Tools that detect AI-generated content along with traceable markers,
- Tools that develop an ethical AI framework for AI systems to be transparent and respect human values,
- An AI risk management and assessment tool that analyses threats and AI-specific risks in public AI use cases; and
- Tools that can assess the resilience of AI systems in stressful situations such as cyberattacks, natural disasters, operational failures, etc.
CyberPeace Outlook
Deepfakes pose significant challenges to critical sectors in India, such as healthcare and education, where manipulated content can lead to crimes like digital impersonation, misinformation, and fraud. The rapid advancement of AI, with which regulation has struggled to keep pace, continues to fuel such threats. Recognising these risks, MeitY's IndiaAI Mission, which promotes investment and encourages educational institutions to undertake AI projects that strengthen the country's digital infrastructure, serves as a guiding light. A part of the mission focuses on developing indigenous solutions, including tools for assessment and regulation, to address AI-related threats effectively. While India is making strides in this direction, the global AI landscape is evolving rapidly, with many nations advancing regulations to mitigate AI-driven challenges. Consistent steps, including inviting proposals and funding projects, provide the much-needed impetus for the mission to be realised.
References
- https://economictimes.indiatimes.com/tech/technology/meity-dot-at-work-on-projects-for-fair-ai-development/articleshow/115777713.cms?from=mdr
- https://www.hindustantimes.com/india-news/meity-seeks-tools-to-detect-deepfakes-label-ai-generated-content-101734410291642.html
- https://www.msn.com/en-in/news/India/meity-funds-two-ai-projects-to-detect-fake-media-forms-committee-on-deepfakes/ar-AA1vMAlJ
- https://indiaai.gov.in/

Introduction
In 2022, Oxfam’s India Inequality report revealed the worsening digital divide, highlighting that only 38% of households in the country are digitally literate. Further, only 31% of the rural population uses the internet, as compared to 67% of the urban population. Over time, with increasing awareness globally about the importance of digital privacy, the digital divide has expanded into a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This further entrenches social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved into a multi-level issue from its earlier interpretations: level I implies the lack of physical access to technologies; level II refers to the lack of digital literacy and skills; and, more recently, level III relates to the impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic patterns. It forms a subset of the digital divide, which involves the uneven distribution, access and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection; as such, it forms part of the conversation on digital inequality.
Contrary to popular perception, DPD is not always explained by notions of individualism and collectivism, and may be shaped by both internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh and Germany highlighted that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that despite the U.S. having a strong tradition of individualistic rights, reflected in domestic regulatory frameworks such as the Fourth Amendment, the topic of data privacy has not garnered sufficient interest from the population. Most individuals consider forgoing the right to privacy a necessary evil in order to access services and schemes and to stay abreast of technological advances. Research shows that 62–63% of Americans believe that data collection by companies and the government has become an inescapable part of modern life. Additionally, 81% believe they have very little control over what data companies collect, and about 81% of Americans believe that the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is thought to be an adopted concept emerging from international pressure to regulate, rather than an ascribed right: collectivism and collective decision-making are more valued there, positioning privacy as a subjective, opportunistic idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices targeted at specific groups show that marginalised communities are likely to enjoy less data privacy. For example, migrants, labourers, persons with a conviction history and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats, and are thus forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy but extends to those who have (either by design or by force) excess privacy. While at one extreme, excessive surveillance by both governments and private entities forces immigrants to wait in deportation centres during the pendency of their cases, the other extreme hosts a vast number of undocumented individuals who avoid government contact for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also noted among groups with differing knowledge and skills in cybersecurity. In India, for example, data privacy laws mandate that information be provided on the order of a court or an enforcement agency. However, individuals with knowledge of advanced encryption are adopting communication channels whose encryption the provider cannot control (and are thus able to exercise their right to privacy more effectively), in contrast with individuals who have little knowledge of encryption, implying a security as well as an intellectual divide. While several options for secure communication exist, such as Pretty Good Privacy (PGP), which enables encrypted emailing, they are complex and difficult to use, or carry negative reputations, as with the Tor Browser. Cost is also a major factor propelling DPD, since users who cannot afford devices such as Apple's, which offer privacy by default, must opt for devices with relatively poor built-in encryption.
Children remain the most vulnerable group. During the pandemic, it was noted that only 24% of Indian households had internet facilities to access e-education, and several reported needing to access free internet outside their homes. These public networks are known for their lack of security and privacy, as traffic can be monitored by the hotspot operator or others on the network if proper encryption measures are not in place. Elsewhere, students without access to devices for remote learning have limited alternatives and are often forced to rely on Chromebooks and associated Google services. In response, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic, aiming to address the digital divide. However, in 2024, New Mexico was reported to be suing Google for allegedly collecting children’s data through its educational products provided to the state's schools, claiming that it tracks students' activities on their personal devices outside the classroom. The case highlights the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is a critical component in bridging the DPD: it equips individuals with the skills needed to recognise and address privacy violations. Studies show that low-income users remain less confident in their ability to manage their privacy settings than high-income individuals. Emphasis should therefore be placed not only on educating people in technology usage but also in privacy practices, improving their Internet skills and enabling them to take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to secure internet freedoms are available to all. The Project channels the core values of the library profession: intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy and democracy. It has successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security’s attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations (IFLA) adopted a Statement on Privacy in the Library Environment in 2015 that specified, “when libraries and information services provide access to resources, services or technologies that may compromise users’ privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy.” The above should be used as an indicative case study for setting up similar protocols in inclusive public institutions like Anganwadis, local libraries, skill development centres and non-government/non-profit organisations in India, where free education is disseminated. The workshops conducted must inculcate two critical aspects: first, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology); and second, reframing privacy as a right an individual holds, not something that they own.
However, digital literacy should not be wholly relied upon, since it shifts the responsibility of privacy protection to individuals, who may lack awareness or the capacity to act. Data literacy also does not address the larger issues of data brokers, consumer profiling, surveillance, etc. Consequently, companies should be obligated to provide simplified privacy summaries, in addition to creating accessible, easy-to-use technical products and privacy tools. Most notable data protection laws address this problem by mandating notice and consent for collecting users' personal data, despite slow enforcement. The Digital Personal Data Protection Act, 2023 in India further aims to address DPD by not only mandating valid consent but also requiring that privacy notices remain accessible in local languages, given the diversity of the population.
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936#:~:text=The%20DPD%20index%20is%20a,(33%20years%20and%20over).
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/

Introduction
In an age where the lines between truth and fiction blur with alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deepfakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of the digital duplicitous tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but with no less damage, can be likened to a digital 'slip of the tongue' — an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the principle that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening the societal schisms, a significant portion of this digital deceit traffics in hate speech. Its contents are laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence vastly disproportionate to that seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deep fakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deep fakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.

Introduction:
A new Android malware called NGate can steal money from payment cards by relaying the data read by the Near Field Communication (NFC) chip to an attacker’s device. NFC is a technology that allows devices such as smartphones to communicate wirelessly over short distances. In particular, NGate allows attackers to clone victims’ cards and, therefore, to make fraudulent purchases or withdraw money from ATMs.
About NGate Malware:
NGate targets victims’ payment cards by relaying NFC data to the attacker’s device. The malware combines phishing tactics with the NFC functionality of Android-based devices.
Modus Operandi:
- Phishing Campaigns: The attack begins with spoofed emails or SMS messages that lure users into installing Progressive Web Apps (“PWAs”) or WebAPKs presented as genuine banking applications. These apps usually copy the layout and logo of the targeted bank’s authentic app, which makes them believable.
- Installation of NGate: Once the victim installs the app, they are asked to input personal details, including account numbers and PINs. Victims are also prompted to enable NFC on their device and to hold their payment card against the back of the phone so it can be scanned.
- NFCGate Component: A core component of NGate is NFCGate, a research tool originally created by students at the Technical University of Darmstadt. This tool allows the malware to:
- Collect NFC traffic from payment cards in the vicinity.
- Transmit, or relay this data to the attacker’s device through a server.
- Replay data that has previously been intercepted or copied.
It is important to note that some features of NFCGate require a rooted device; however, relaying NFC traffic also works on non-rooted devices, which potentially widens the pool of victims.
Technical Mechanism of Data Theft:
- Data Capture: The malware exploits the NFC communication feature on Android devices and reads information from a payment card held near the infected device, intercepting and capturing sensitive card details.
- Data Relay: The stolen information is transmitted through a server to the attacker’s device, allowing the attacker to emulate the victim’s card.
- Unauthorized Transactions: Attackers can then make contactless payments at merchants or withdraw money from NFC-enabled ATMs. This capability marks a new level for Android malware, in that attackers can directly steal money without ever holding the physical card.
Social Engineering Tactics:
In most cases, attackers use social engineering techniques to obtain more information from the target before executing the attack. In a second phase, attackers may pose as bank representatives, claim there is a problem with the victim’s account, and offer the NGate app for download: a Trojan disguised as an application for confirming the security of the account. This method allows the attackers to obtain the victim’s card PIN, enabling them to withdraw money from the victim’s account without authorization.
Technical Analysis:
An analysis of the malicious file hashes and phishing links is given below:
Malicious File Hashes:
csob_smart_klic.apk:
- MD5: 7225ED2CBA9CB6C038D8
- Classification: Android/Spy.NGate.B
csob_smart_klic.apk:
- MD5: 66DE1E0A2E9A421DD16B
- Classification: Android/Spy.NGate.C
george_klic.apk:
- MD5: DA84BC78FF2117DDBFDC
- Classification: Android/Spy.NGate.C
george_klic-0304.apk:
- MD5: E7AE59CD44204461EDBD
- Classification: Android/Spy.NGate.C
rb_klic.apk:
- MD5: 103D78A180EB973B9FFC
- Classification: Android/Spy.NGate.A
rb_klic.apk:
- MD5: 11BE9715BE9B41B1C852
- Classification: Android/Spy.NGate.C
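For defenders, published hashes like these can be folded into a simple file-screening step. The Python sketch below computes a file's MD5 and checks it against a blocklist. Note that the hash values reproduced above appear truncated (a full MD5 digest is 32 hex characters), so the blocklist contents and the prefix comparison here are purely illustrative, not a vetted detection rule.

```python
import hashlib

# Illustrative blocklist built from the (apparently truncated) MD5 values
# reported for NGate samples; a production feed would carry full 32-char digests.
NGATE_MD5_BLOCKLIST = {
    "7225ed2cba9cb6c038d8",
    "66de1e0a2e9a421dd16b",
    "da84bc78ff2117ddbfdc",
    "e7ae59cd44204461edbd",
    "103d78a180eb973b9ffc",
    "11be9715be9b41b1c852",
}


def md5_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_ngate_sample(path: str) -> bool:
    """True if the file's MD5, or its 20-char prefix (matching the
    truncated published values), appears in the blocklist."""
    h = md5_of_file(path)
    return h in NGATE_MD5_BLOCKLIST or h[:20] in NGATE_MD5_BLOCKLIST
```

A scanner would typically apply such a check to every `.apk` sideloaded onto a device, alongside the signature-based detection that security software already performs.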
Phishing URL:
- https://client.nfcpay.workers[.]dev/?key=8e9a1c7b0d4e8f2c5d3f6b2
Additionally, several distinct phishing websites have been identified, including:
- rb.2f1c0b7d.tbc-app[.]life
- geo-4bfa49b2.tbc-app[.]life
- rb-62d3a.tbc-app[.]life
- csob-93ef49e7a.tbc-app[.]life
- george.tbc-app[.]life
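These indicators are published in defanged form, with `[.]` in place of dots, so they cannot be followed accidentally. A minimal Python sketch of how such a list might be operationalised is shown below: it refangs the indicators and flags hosts that match, or are subdomains of, a blocklisted domain. The helper names are illustrative and not part of any published tooling.

```python
# Blocklist built from the phishing domains reported above (refanged).
PHISHING_DOMAINS = {
    "rb.2f1c0b7d.tbc-app.life",
    "geo-4bfa49b2.tbc-app.life",
    "rb-62d3a.tbc-app.life",
    "csob-93ef49e7a.tbc-app.life",
    "george.tbc-app.life",
}


def refang(indicator: str) -> str:
    """Convert a defanged indicator back to its usable form."""
    return indicator.replace("[.]", ".").replace("hxxp", "http").rstrip(".")


def is_blocklisted(host: str) -> bool:
    """True if the host exactly matches, or is a subdomain of,
    a blocklisted phishing domain."""
    host = refang(host).lower()
    return any(host == d or host.endswith("." + d) for d in PHISHING_DOMAINS)
```

In practice a check like this would sit in a DNS filter or a mail gateway; the suffix comparison catches new subdomains carved out of the same `tbc-app.life` infrastructure.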

Broader Implications of NGate:
NGate’s capabilities extend beyond financial fraud. An attacker could also clone NFC access cards and gain entry to restricted areas, such as corporate offices or secure facilities. Moreover, the capacity to capture and analyse NFC traffic poses threats of identity theft and other forms of cybercrime.
Precautionary measures to be taken:
To protect against NGate and similar threats, users should consider the following strategies:
- Disable NFC: If NFC is not regularly needed, turn it off on Android devices and enable it only when required. This can be done from the device’s settings.
- Scrutinize App Permissions: Review the permissions requested by installed apps, particularly those with broad access to the device. Download applications only from official stores such as the Google Play Store.
- Use Security Software: Install reputable mobile security software capable of detecting and blocking such malware.
- Stay Informed: Keep up to date with the risks associated with NFC use and with current phishing tactics in order to safeguard one’s identity.
Conclusion:
Malware such as NGate is proof of how dynamic the threats around mobile payments have become. By abusing NFC functionality, NGate marks a step up in Android malware: attackers can directly drain victims’ funds without ever possessing the physical payment card. This underscores the need for care when downloading applications and vigilance about the permissions granted to them. Turning NFC off when not in use, running good security software and staying aware of the latest scams are measures that help fight this level of financial fraud. As attackers continue to refine their methods, individuals and companies alike must take the right steps to prevent privacy breaches and identity theft.
References:
- https://www.welivesecurity.com/en/eset-research/ngate-android-malware-relays-nfc-traffic-to-steal-cash/
- https://therecord.media/android-malware-atm-stealing-czech-banks
- https://www.darkreading.com/mobile-security/nfc-traffic-stealer-targets-android-users-and-their-banking-info
- https://cybersecuritynews.com/new-ngate-android-malware/