Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website or liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Network (VPN) used as a protected network connection do not ensure total privacy, as third parties could still snoop on the websites being visited by the users. He also cautioned them against tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal our passwords and gain access to our accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

Introduction
On March 12, the Ministry of Corporate Affairs (MCA) proposed the Bill to curb anti-competitive practices of tech giants through ex-ante regulation. The Draft Digital Competition Bill is to apply to ‘Core Digital Services,’ with the Central Government having the authority to update the list periodically. The proposed list in the Bill encompasses online search engines, online social networking services, video-sharing platforms, interpersonal communications services, operating systems, web browsers, cloud services, advertising services, and online intermediation services.
The primary highlight of the Digital Competition Law Report, presented to Parliament by the Committee on Digital Competition Law in the second week of March 2024, is a recommendation to introduce new legislation called the ‘Digital Competition Act,’ intended to strike a balance between certainty and flexibility. The report identified ten anti-competitive practices relevant to digital enterprises in India, including anti-steering, platform neutrality/self-preferencing, bundling and tying, use of non-public data, pricing/deep discounting, exclusive tie-ups, search and ranking preferencing, restricting third-party applications, and advertising policies.
Key Take-Aways: Digital Competition Bill, 2024
- The Bill lays down qualitative and quantitative criteria for designating an enterprise as a Systemically Significant Digital Enterprise (SSDE) if it meets any of the specified thresholds.
- Financial thresholds apply to each of the immediately preceding three financial years, such as turnover in India, global turnover, gross merchandise value in India, or global market capitalisation.
- User thresholds likewise apply to each of the immediately preceding three financial years: the core digital service provided by the enterprise must have at least 1 crore end users in India, or at least 10,000 business users.
- The Commission may make the designation based on other factors such as the size and resources of an enterprise, number of business or end users, market structure and size, scale and scope of activities of an enterprise and any other relevant factor.
- Enterprises have 90 days to notify the CCI of their qualification as an SSDE. They must also notify the Commission of other enterprises within the group that are directly or indirectly involved in providing Core Digital Services, as Associate Digital Enterprises (ADEs); the designation is valid for three years.
- It prescribes obligations for SSDEs and their ADEs upon designation. The enterprise must comply with certain obligations regarding Core Digital Services, and non-compliance with the same shall result in penalties. Enterprises must not directly or indirectly prevent or restrict business users or end users from raising any issue of non-compliance with the enterprise’s obligations under the Act.
- SSDEs must not favour, in any manner, their own product offerings or those of related parties or third parties, for the manufacture and sale of products or provision of services, over those offered by third-party business users on the Core Digital Service.
- When trying a suit, the Commission will have the same powers as are vested in a civil court under the Code of Civil Procedure, 1908.
- Non-compliance without reasonable cause may attract a penalty of up to Rs 1 lakh for each day the non-compliance continues, subject to a maximum of Rs 10 crore. Continued failure to comply may be punished with imprisonment for a term that may extend to three years, with a fine that may extend to Rs 25 crore, or with both. The Commission may also impose a penalty on an enterprise, not exceeding 1% of its global turnover, for providing incorrect, incomplete, or misleading information or for failing to provide information.
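The SSDE threshold test described in the bullets above can be sketched as a short check. Only the user-count figures (1 crore end users, 10,000 business users) come from the text summarised here; the financial figures below are placeholders, and how the financial and user prongs combine should be verified against the Bill itself.

```python
# Illustrative sketch of the SSDE designation test described above.
# User thresholds come from the Bill's text; the financial figures are
# PLACEHOLDERS, not the amounts specified in the draft Bill.

CRORE = 10_000_000  # 1 crore = 10 million

def qualifies_as_ssde(years: list[dict]) -> bool:
    """years: metrics for each of the three immediately preceding
    financial years, oldest to newest."""
    assert len(years) == 3

    def in_all_years(key: str, threshold: float) -> bool:
        return all(y.get(key, 0) >= threshold for y in years)

    financial = (
        in_all_years("india_turnover_inr", 4_000 * CRORE)   # placeholder figure
        or in_all_years("global_turnover_usd", 30e9)        # placeholder figure
    )
    spread = (
        in_all_years("end_users_india", 1 * CRORE)          # >= 1 crore end users
        or in_all_years("business_users_india", 10_000)     # >= 10,000 business users
    )
    # The draft pairs financial strength with user spread; check the Bill's
    # text for the exact combination of these prongs.
    return financial and spread
```

An enterprise crossing both prongs in all three years would then have the 90-day window described above to notify the CCI.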
Suggestions and Recommendations
- The ex-ante model of regulation needs to be examined for the Indian scenario, and studies need to be conducted on how it has worked in other jurisdictions, such as the EU.
- The Bill should prioritise fostering fair competition by preventing monopolistic practices in digital markets. A clear functional distinction from the existing Competition Act, 2002 needs to be drawn so that regulations do not overlap and enterprises do not face double jeopardy.
- Restrictions on tying and bundling and data usage have been shown to negatively impact MSMEs that rely significantly on big tech to reduce operational costs and enhance customer outreach.
- Clear definitions of "dominant position" and "anti-competitive behaviour" in the digital context are essential for effective enforcement.
- Encouraging innovation while safeguarding consumer data privacy in consonance with the DPDP Act should be the aim. Promoting interoperability and transparency in algorithms can prevent discriminatory practices.
- Regular reviews and stakeholder consultations will ensure the law adapts to rapidly evolving technologies.
- Collaboration with global antitrust bodies should be pursued to enhance cross-border regulatory coherence and effectiveness.
Conclusion
A competition law focused exclusively on digital enterprises is the need of the hour, and hence the Committee recommended enacting the Digital Competition Act to enable the CCI to selectively regulate large digital enterprises. The proposed legislation should be restricted to enterprises that have a significant presence in, and ability to influence, the Indian digital market; its impact should remain confined to digital enterprises and should not encroach upon matters outside the digital arena. India's proposed Digital Competition Bill aims to promote competition and fairness in the digital market by addressing the anti-competitive practices and abuses of dominant position prevalent in the digital business space. The Ministry of Corporate Affairs has received 41 pages of public feedback on the draft, which is expected to be tabled before Parliament next year.
References
- https://www.medianama.com/wp-content/uploads/2024/03/DRAFT-DIGITAL-COMPETITION-BILL-2024.pdf
- https://prsindia.org/files/policy/policy_committee_reports/Report_Summary-Digital_Competition_Law.pdf
- https://economictimes.indiatimes.com/tech/startups/meity-meets-india-inc-to-hear-out-digital-competition-law-concerns/articleshow/111091837.cms?from=mdr
- https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open
- https://www.barandbench.com/law-firms/view-point/digital-competition-laws-beginning-of-a-new-era
- https://www.linkedin.com/pulse/policy-explainer-digital-competition-bill-nimisha-srivastava-lhltc/
- https://www.lexology.com/library/detail.aspx?g=5722a078-1839-4ece-aec9-49336ff53b6c

Modern international trade heavily relies on data transfers for the exchange of digital goods and services. User data travels across multiple jurisdictions and legal regimes, each with different rules for processing it. Since international treaties and standards for data protection are inadequate, states, in an effort to protect their citizens' data, have begun extending their domestic privacy laws beyond their borders. However, this opens a Pandora's box of legal and administrative complexities for both the data protection authorities and data processors. The former must balance the harmonization of domestic data protection laws with their extraterritorial enforcement, without overreaching into the sovereignty of other states. The latter must comply with the data privacy laws in all states where they collect, store, and process data. While the international legal community continues to grapple with these challenges, India can draw valuable lessons to refine the Digital Personal Data Protection Act, 2023 (DPDP) in a way that effectively addresses these complexities.
Why Extraterritorial Application?
Since data moves freely across borders and entities collecting such data from users in multiple states can misuse it or use it to gain an unfair competitive advantage in local markets, data privacy laws carry a clause on their extraterritorial application. States use this principle to frame laws that can ensure comprehensive data protection for their citizens, irrespective of the data’s location. The foremost example is the European Union’s (EU) General Data Protection Regulation (GDPR), 2016, which applies to any entity that processes the personal data of individuals in the EU, regardless of the entity's location. Recently, India has enacted the DPDP Act of 2023, which includes a clause on extraterritorial application.
The Extraterritorial Approach: GDPR and DPDP Act
The GDPR is considered the toughest data privacy law in the world and sets a global standard in data protection. According to Article 3, its provisions apply not only to data processors within the EU but also to those established outside its territory, if they offer goods and services to and conduct behavioural monitoring of data subjects within the EU. The enforcement of this regulation relies on heavy penalties for non-compliance in the form of fines up to €20 million or 4% of the company’s global turnover, whichever is higher, in case of severe violations. As a result, corporations based in the USA, like Meta and Clearview AI, have been fined over €1.5 billion and €5.5 million respectively, under the GDPR.
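The Article 83 ceiling quoted above ("up to €20 million or 4% of the company's global turnover, whichever is higher") reduces to a one-line formula; the turnover values in the example are hypothetical.

```python
def gdpr_max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound for severe GDPR violations: the higher of
    EUR 20 million or 4% of worldwide annual turnover (Art. 83(5))."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# Hypothetical turnovers: the flat floor binds for smaller firms,
# the 4% rule for larger ones.
small_cap = gdpr_max_fine_eur(100_000_000)      # EUR 20 million floor applies
large_cap = gdpr_max_fine_eur(100_000_000_000)  # 4% rule -> EUR 4 billion
```

Because the cap scales with turnover, the largest multinationals face materially larger exposure than the flat floor suggests, which is why fines against the biggest platforms run into the billions.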
Like the GDPR, the DPDP Act extends its jurisdiction under section 3(b) to foreign companies dealing with the personal data of data principals within Indian territory. It has a similar extraterritorial reach and prescribes a penalty of up to Rs 250 crore in case of breaches. However, neither the Act nor the DPDP Rules, 2025, which are currently under deliberation, elaborate on an enforcement mechanism through which foreign companies can be held accountable.
Lessons for India’s DPDP on Managing Extraterritorial Application
- Clarity in Definitions: The GDPR clearly defines ‘personal data’, covering direct information such as name and identification number, indirect identifiers like location data, and online identifiers that can be used to establish the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person. It also prohibits revealing special categories of personal data, like religious beliefs and biometric data, to protect the fundamental rights and freedoms of data subjects. On the other hand, the DPDP Act/Rules define ‘personal data’ vaguely, leaving a broad scope for Big Tech and ad-tech firms to bypass obligations.
- International Cooperation: Compliance is complex for companies due to varying data protection laws in different countries. The success of regulatory measures in such a scenario depends on international cooperation for governing cross-border data flows and enforcement. For DPDP to be effective, India will have to foster cooperation frameworks with other nations.
- Adequate Safeguards for Data Transfers: The GDPR regulates data transfers outside the EU via pre-approved legal mechanisms such as standard contractual clauses or binding corporate rules to ensure that the same level of protection applies to EU citizens’ data even when it is processed outside the EU. The DPDP should adopt similar safeguards to ensure that Indian citizens’ data is protected when processed abroad.
- Revised Penalty Structure: The GDPR mandates that penalties be effective, proportionate, and dissuasive. The supervisory authority in each member state has the power to impose administrative fines as per these principles, up to an upper limit set by the GDPR. The DPDP’s penalty structure, by contrast, is simplistic and will disproportionately impact smaller businesses. It should take into account factors such as the nature, gravity, and duration of the infringement, its consequences, and the compliance measures taken.
- Governance Structure: The GDPR envisages a multi-tiered governance structure comprising:
- National-level Data Protection Authorities (DPAs) for enforcing national data protection laws and the GDPR,
- European Data Protection Supervisor (EDPS) for monitoring the processing of personal data by EU institutions and bodies,
- European Commission (EC) for developing GDPR legislation, and
- European Data Protection Board (EDPB) for enabling coordination between the EC, EDPS, and DPAs
In contrast, the Data Protection Board (DPB) under DPDP will be a single, centralized body overseeing compliance and enforcement. Since its members are to be appointed by the Central Government, it raises questions about the Board’s autonomy and ability to apply regulations consistently. Further, its investigative and enforcement capabilities are not well defined.
Conclusion
The protection of the human right to privacy (under the International Covenant on Civil and Political Rights and the Universal Declaration of Human Rights) in today’s increasingly interconnected digital economy warrants international standard-setting on cross-border data protection. In the meantime, states’ reliance on the extraterritorial application of domestic laws is unavoidable. While India’s DPDP takes steps in this direction, they must be refined to ensure clarity regarding implementation mechanisms. India should push for alignment with the data protection laws of other states and account for the complexity of enforcement in cases involving extraterritorial jurisdiction. As India sets out to position itself as a global digital leader, a well-crafted extraterritorial framework under the DPDP Act will be essential to promote international trust in India’s data governance regime.
Sources
- https://gdpr-info.eu/art-83-gdpr/
- https://gdpr-info.eu/recitals/no-150/
- https://gdpr-info.eu/recitals/no-51/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://www.eqs.com/compliance-blog/biggest-gdpr-fines/#:~:text=ease%20the%20burden.-,At%20a%20glance,In%20summary
- https://gdpr-info.eu/art-3-gdpr/
- https://www.legal500.com/developments/thought-leadership/gdpr-v-indias-dpdpa-key-differences-and-compliance-implications/#:~:text=Both%20laws%20cover%20'personal%20data,of%20personal%20data%20as%20sensitive.

Introduction
In January 2026, the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness came into effect in South Korea, establishing one of the first national AI laws in the world. The Act, passed by the National Assembly of Korea in December 2024 and implemented from January 22, 2026, aims to balance the rapid advancement of technology with clear safeguards against risks, as well as transparency, accountability, and responsible AI use. It places Seoul, alongside the European Union, at the forefront of developing legal systems for artificial intelligence and signals a long-term ambition to become a global AI power.
What the AI Basic Act Covers
The AI Basic Act merges 19 separate AI bills into a single piece of legislation covering the AI lifecycle, including research and development, deployment, and utilisation. Its coverage is broad: it applies to any AI system that affects the Korean market or users inside the country, irrespective of the country in which it was created. The law does not apply to national defence and security applications.
The law defines key concepts like artificial intelligence, generative AI, and high-impact AI and establishes the principles of ethical AI, safety, user rights, industry support, and national policy coordination. It also offers a legal foundation for the activities of the government to promote AI innovation without jeopardising the common good.
Fundamentally, the AI Basic Act is designed to establish a culture of trust between businesses, the government, and citizens. It does not prohibit AI technologies and does not excessively limit innovation. Instead, it creates a framework for responsible development and economic growth.
Guardrails for Safety and Accountability
One of the defining features of the AI Basic Act is its risk-based approach. Rather than treating all AI systems alike, it distinguishes between ordinary and high-impact AI systems, the latter being systems applied in sectors where a wrong or unsafe decision can have a major impact on people's safety, rights, or critical infrastructure. Examples include healthcare, transportation, financial services, education, and public services.
High-impact AI operators must put in place risk management plans, human controls, and monitoring systems. In critical decision-making situations, human control must be available at all times: machines may assist, but must not override human judgment where safety or other human rights are involved.
The law enables regulators to perform on-site checks, demand documentation, and conduct compliance investigations. Fines for breaches may reach 30 million Korean won (approximately 21,000 US dollars). The law provides a one-year transition period based on guidance rather than enforcement, giving companies time to implement compliance measures before fines are imposed.
These requirements enhance accountability by defining who is responsible for safety outcomes. South Korea's approach embeds these duties in law, in contrast to approaches that rely on industry self-governance alone.
Transparency and Labelling Requirements
Transparency is central to the AI Basic Act. The legislation requires that users be notified when an AI system is operating, particularly when it generates outputs that could be confused with human-created material. For example, AI-generated text, images, video, or audio that may be difficult to distinguish from authentic content must carry clear labels or watermarks so that users understand its source.
The labelling requirement is intended to combat misinformation, deceptive practices, and unintended influence on public perception. It reflects international concern over AI-generated content such as deepfakes, manipulated media, and misleading online advertisements, which South Korea has already addressed separately in policy, alongside discussions of data governance.
Transparency also extends to the decision-making processes of AI systems. Developers and operators should be able to explain how high-impact systems reach their conclusions, so that those affected by automated decisions can seek meaningful explanations. Although specific explainability criteria are still being developed, the law establishes the principle that AI cannot operate behind the scenes where crucial decisions are being made.
Data Privacy and User Protection
South Korea's AI governance complements its existing data protection law, the Personal Information Protection Act (PIPA), which is broadly regarded as comparable to major international data protection regimes such as the GDPR. The AI Basic Act clarifies how data may be gathered, processed, and used within AI systems while respecting privacy rights, particularly in high-impact areas.
The law does not supersede personal data protection rules, but it sets conditions on how AI developers must handle data used for training, testing, and operating AI systems. Operators will be required to document their data workflows and demonstrate how they protect users' privacy, including through transparency and consent mechanisms where necessary. This helps ensure that the data used in AI systems is governed by clear norms and makes it harder to sidestep privacy requirements in the name of innovation.
Accountability and Governance Infrastructure
The AI Basic Act establishes a national policy framework for AI governance. At the top is the National Artificial Intelligence Strategy Committee, chaired by the President, which proposes overall AI policy and aligns it with national objectives. It is supported by specialised bodies for safety, risk assessment, and research, as well as a policy centre that analyses AI's effects on society and assists its adoption by industry.
This institutional structure provides strategic guidance as well as operational oversight. By embedding AI governance in public administration rather than leaving it to market forces, South Korea aims to make ethical and societal concerns integral to every sector and agency.
Promoting Innovation and Industrial Support
Although the AI Basic Act imposes regulation, it is not purely restrictive. It also provides a legal basis for research and development, human capital, and the growth of the AI industry, with special consideration for startups and small and medium-sized businesses. The legislation promotes AI clusters, long-term funding programmes, and policies to attract foreign talent to the Korean AI ecosystem.
This two-pronged approach of compliance and support reflects Korea's broader ambition to become one of the world's leading AI powers, alongside the US and China. The government has stressed that clear and predictable rules will build trust, attract investment, and sustain innovation rather than stifle it.
What This Means Globally
South Korea's AI Basic Act is notable not only for its contents but also for its timing. It is among the first comprehensive AI laws to come into force anywhere in the world, ahead of the phased regulatory rollouts elsewhere, such as in the European Union. Its system combines a principle-based framework, transparency requirements, accountability rules, and industrial support, a model that contrasts with both purely prescriptive risk regulation and lax self-regulation found elsewhere.
Critics, including industry groups and civil society organisations, have suggested that some protections could be more explicit, particularly for those harmed by AI systems, or that the high-impact categories could be defined more precisely. Nonetheless, the framework sets a benchmark that many nations will watch closely as they establish their own AI regimes.
Conclusion
The AI Basic Act puts South Korea at the forefront of national AI regulation, with well-developed guardrails enforcing transparency, ethical control, accountability, and data protection while fostering innovation. It recognises that AI can bring economic and social benefits but also real risks, particularly when systems are opaque, autonomous, or widely deployed. By integrating human oversight, labelling requirements, risk management planning, and governance infrastructure into law, South Korea has taken a holistic approach to responsible AI governance that other countries may emulate in the years to come.
Sources
- https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws
- https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/artificial-intelligence-and-the-labour-market-in-korea_af668423/68ab1a5a-en.pdf
- https://asianintelligence.ai/south-korea
- https://aibasicact.kr/
- https://aibusinessweekly.net/p/south-korea-ai-basic-act-takes-effect-jan22-2026
- https://asiadaily.org/news/12112/