How TCP/IP Became the Backbone of the Internet
Introduction
Have you ever wondered how the internet works? Yes, there are screens and wires, but what’s going on beneath the surface? Every time you open a website, send an email, chat on messaging apps, or stream movies, you’re relying on something you probably don’t think about: the TCP/IP protocol suite. Without it, the internet as we know it wouldn’t exist. Let’s take a look at why this unassuming set of rules allows us to connect to anyone anywhere in the world.
The Problem: Networks That Couldn't Talk to Each Other
The internet is widely called a network of networks. A network is a group of devices that are connected and can share data with each other.
Researchers and governments began building early computer networks in the 1960s and 70s. As the Cold War intensified, the U.S. military felt the need to establish a robust data-sharing infrastructure through interconnected networks that could withstand attacks. At the time, however, each network had its own standards and protocols, so getting networks to communicate was neither easy nor efficient: one network would have to be subsumed into another. This created major problems for the reliability of data relay, the flexibility of adding new nodes, the scalability of the interconnected network, and innovation.
The Breakthrough: Open Architecture Networking
This changed in the 1970s, when Bob Kahn proposed the concept of open architecture networking. It was a simple but revolutionary idea. He envisioned a system where all networks could talk to each other as equals. In this conceptualisation, all networks, even though unique in design and interface, could connect as peers to facilitate end-to-end communication. End-to-end communication helps deliver data between the source and destination without relying on intermediate nodes to control or modify it. This helps to make data relay more reliable and less prone to errors.
Along with Vint Cerf, he developed a network protocol suite, TCP/IP, that would go on to enable different networks across satellite, wired, and wireless domains to communicate with one another.
What Is TCP/IP?
TCP/IP stands for Transmission Control Protocol / Internet Protocol. It’s a set of communication rules that allow computers and devices to exchange information across different networks.
It’s powerful because:
- Layered and open architecture: Each function (like data delivery or routing) is handled by a specific layer. This modular design makes it easy to build new technologies like the World Wide Web or streaming services on top of it.
- Decentralisation: There's no single point of control. Any device can connect to another across the internet, making it scalable and resilient.
- Standardisation: TCP/IP works across all kinds of hardware and operating systems, making it truly universal.
The Core Components
- TCP (Transmission Control Protocol): Ensures that data is delivered accurately and in order. If any piece is lost or duplicated, TCP handles it.
- IP (Internet Protocol): Handles addressing and routing. It decides where each packet of data should go and how it gets there.
- UDP (User Datagram Protocol): A lightweight alternative to TCP, used when speed matters more than guaranteed delivery, such as for video calls or online gaming.
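The reliability TCP provides is easy to see in code. The Python sketch below (a minimal illustration, not tied to any particular application) opens a TCP connection over the loopback interface and echoes a message back; TCP quietly handles ordering and retransmission behind the scenes, which is exactly the guarantee UDP trades away for speed:

```python
import socket
import threading

# Server socket: bind to the loopback interface; port 0 lets the OS pick a free port
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    # Accept one connection and echo whatever arrives back to the sender
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Client side: TCP delivers the bytes reliably and in order
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello, TCP/IP")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()

print(reply.decode())  # hello, TCP/IP
```

Swapping `SOCK_STREAM` for `SOCK_DGRAM` would give the UDP equivalent: faster to set up, but with no guarantee the message arrives at all.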
Why It Matters
The TCP/IP protocol suite introduced a set of standardised guidelines that enable networks to communicate, laying the foundation of the Internet. It has made the Internet global, open, reliable, interoperable, scalable, and resilient: the qualities that have made it the backbone of modern communication systems. So the next time you open a browser or send a message, remember: it’s TCP/IP quietly making it all possible.
References
- https://www.techtarget.com/searchnetworking/definition/ARPANET
- https://www.internetsociety.org/internet/history-internet/brief-history-internet/
- https://www.geeksforgeeks.org/tcp-ip-model/
- https://www.oreilly.com/library/view/tcpip-network-administration/0596002971/ch01.html

I. Introduction: Why These Amendments Have Been Proposed
The proposed changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid rise of synthetic information and deepfakes. These reforms stem from the pressing need to govern risks within the digital ecosystem, rather than from routine revision.
The Emergence of the Digital Menace
Generative AI tools have, in recent years, made it possible to produce highly realistic images, videos, audio, and text. Such synthetic media have been abused to portray people in situations they were never in, or making statements they never made. The generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400.00 bn by 2031. Tight regulatory controls are therefore necessary to curb the prevalence of harm in India's digital sphere.
The Gap in Law and Institution
The IT Rules, 2021, did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation, and violation of privacy, it imposed no explicit obligations on intermediaries with respect to synthetic media. This left a loophole in enforcement, particularly since AI-generated content can evade older moderation systems. The amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India's constitutional and digital-ecosystem needs.
II. Explanation of the Amendments
The 2025 amendments introduce five key changes to the existing IT Rules framework, addressing different areas of synthetic media regulation.
A. Definitional Clarification: Introducing "Synthetically Generated Information"
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of "synthetically generated information": information that is created, generated, modified, or altered using a computer resource in a way that it can reasonably be perceived as genuine. This definition is intentionally broad, covering not just deepfakes in the strict sense but any synthetic media that has undergone algorithmic manipulation in order to appear authentic.
Expansion of Legal Scope:
Rule 2(1A) also makes it clear that any mention of information in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretative safeguard: intermediaries cannot claim that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of what actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Rule 3(1)(b) Amendment: Safe Harbour Clarification
The amendments add a proviso to Rule 3(1)(b) clarifying that where intermediaries remove, or disable access to, synthetically generated information (or any information falling within the specified categories) in good faith, as part of reasonable efforts or upon receipt of a complaint, this shall not be treated as a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is especially significant because it shields intermediaries from liability when they moderate synthetic content ahead of a court ruling or governmental notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries That Enable Synthetic Content Creation
The amendments establish a new due-diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must be:
- Be visibly displayed or made audible in a prominent manner on or within that synthetically generated information.
- Cover at least 10% of the surface of the visual display or, in the case of audio content, play during the initial 10% of its duration.
- Make it possible to immediately identify that the information is synthetically generated information created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary in clause (a) shall not enable modification, suppression or removal of such label, permanent unique metadata or identifier, by whatever name called.
D. Significant Social Media Intermediaries: Pre-Publication Verification Obligations
For Significant Social Media Intermediaries (SSMIs), the amendments introduce a three-step verification mechanism under Rule 4(1A): before content is displayed, uploaded, or published on an SSMI's computer resource, three steps must be followed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically created. This places the initial burden on users.
Step 2 - Technical Verification: To check the accuracy of user declarations, SSMIs must deploy reasonable technical measures, such as automated tools or other applications. This duty is contextual, depending on the nature, format, and source of the content; it recognises that not every type of content can be verified by the same standard, without letting intermediaries escape the obligation altogether.
Step 3 - Prominent Labelling: Where the synthetic origin is established by user declaration or technical verification, SSMIs must display a prominent notice or label visible to users before publication.
The amendments also create a sharper accountability standard: intermediaries are deemed to have failed their due-diligence obligations where it is established that they knowingly permitted, encouraged, or otherwise failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element, so liability attaches to deliberate tolerance of violations rather than to every inadvertent error.
An explanation clause makes it clear that SSMIs must also deploy reasonable and proportionate technical measures to check user declarations and must ensure that no synthetic content is published without an adequate declaration or label. This removes ambiguity about the intermediaries' role in verifying declarations.
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability.
The amendments commendably steer between two regulatory extremes, neither prohibiting synthetic media outright nor allowing it to go unregulated. They recognise legitimate uses of synthetic media in entertainment, education, research, and artistic expression, adopting a transparency-and-traceability mandate that preserves innovation while ensuring accountability.
- Explicit Recognition of Intermediary Liability and a Knowledge-Based Standard
Rule 4(1A) introduces a highly significant deeming provision: where an intermediary knowingly permits, or knowingly fails to act on, synthetic content that violates the rules, it is deemed to have failed its due-diligence obligations. This closes the wilful-blindness loophole under which intermediaries could claim ignorance of violations. A scienter standard also encourages material investment in detection and moderation mechanisms, since platforms with sound systems remain protected even when those tools occasionally miss violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of "synthetically generated information" and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the ambiguity of the previous regulatory framework. Instead of forcing parties to navigate conflicting case law or regulatory direction, the amendments set out specific definitional limits. The deliberately broad formulation ("artificially or algorithmically created, generated, modified or altered") ensures the framework cannot be evaded through semantic games over what counts as truly synthetic content versus a minor algorithmic alteration.
- Liability Protection That Encourages Proactive Moderation
The safe-harbour clarification in the Rule 3(1)(b) amendment expressly protects intermediaries that voluntarily remove synthetic content without awaiting a court order or government notification. This is an important incentive structure that prompts platforms to adopt sound self-regulation. Without such protection, platforms might rationally adopt a passive compliance posture, removing content only under pressure from an external authority; the protection thus makes them more effective at keeping users safe from harmful synthetic media.
IV. Conclusion
The 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules offer a structured, transparent, and accountable approach to the rising challenges of synthetic media and deepfakes. They address long-standing regulatory and interpretative gaps around what counts as synthetically generated information, intermediary liability, and mandatory labelling and metadata requirements. Safe-harbour protection will encourage proactive moderation, and a scienter-based liability rule prevents intermediaries from escaping liability when they knowingly tolerate non-compliance. Pre-publication verification for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, clarify the law through precise definitions, promote responsible platform conduct, and position India at the forefront of synthetic media regulation. Together, they strengthen the trustworthiness, user protection, and transparency of India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading their ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts over privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created with Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the prompt. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, fuelling the viral nature of the trend.
Law Enforcement Agency Warnings
- Several Indian police agencies have issued strong advisories against participating in such trends. IPS officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that selfies uploaded can lead to data misuse, deepfake creation, and the generation of fake profiles, which are punishable under Sections 66C and 66D of the IT Act 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by attackers to forge identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A 2023 Mozilla Foundation study discovered that 80% of popular AI apps had either non-transparent data policies or obscured users' ability to opt out of data gathering. This opens up opportunities for personal photographs to be shared with anonymous entities for commercial use. Exploitation of training data occurs when uploaded personal photos are used to improve AI models without notifying or compensating users. Although Google provides users with options to turn off data sharing within privacy settings, most users are unaware of these capabilities. Cross-platform data integration increases privacy threats when AI applications draw on data from interlinked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Inadequate informed consent remains a major problem, with users joining trends without fully understanding how their information will be shared. Studies show that 68% of individuals express concern about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures that satisfy the tool's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit access to one's data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies for employees' use of AI tools, particularly concerning the upload of personal data. Companies should carry out due diligence before adopting any commercially available AI product, to ensure its privacy and security levels meet the company's requirements. Training should also educate employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used; tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI allow real-time detection with accuracy above 95%. Blockchain-based content verification can create tamper-evident records of original digital assets, making it far more difficult to pass off deepfake content as the original.
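The tamper-evidence idea behind such verification schemes can be illustrated without any blockchain machinery: a cryptographic hash recorded at publication time acts as a fingerprint that any later edit will break. A minimal Python sketch, using invented placeholder content:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # SHA-256 digest: any change to the content yields a different fingerprint
    return hashlib.sha256(content).hexdigest()

original = b"original photo bytes"   # hypothetical digital asset
record = fingerprint(original)       # stored when the asset is published

# Later, verification recomputes the hash and compares it to the stored record
assert fingerprint(b"original photo bytes") == record          # untouched content verifies
assert fingerprint(b"original photo bytes, edited") != record  # any edit is detectable
```

A blockchain adds an append-only, shared ledger for these fingerprint records so that no single party can rewrite them after the fact, but the detection step itself is just this hash comparison.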
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity-verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Digital literacy programmes should be implemented to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI-enabled crimes and assisting in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparent data policies in AI applications and give users rights and choices over biometric and other data. Indigenous AI development addressing India-specific privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of today's generative AI. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers should have been weighing long before now. By establishing comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

Introduction
The digital realm is evolving at a rapid pace, revolutionising cyberspace at breakneck speed. This dynamic growth, however, has left several operational and regulatory lacunae in the fabric of cyberspace, which cybercriminals exploit for their own ends. One threat that emerged rapidly in 2024 is proxyjacking, in which cybercriminals compromise vulnerable systems and sell their bandwidth to third-party proxy services. It poses a significant risk to organisations and individual servers alike.
Proxyjacking is a kind of cyber attack that abuses legitimate bandwidth-sharing services such as Peer2Profit and HoneyGain. These platforms are legitimate; proxyjacking occurs when their services are exploited without user consent. They let participants monetise surplus internet bandwidth by sharing it with other users: participants install bandwidth-sharing software, adding their system to a proxy network through which other users can route traffic. The model itself is harmless, intended to enhance privacy and provide access to geo-locked content, but it also opens an avenue for numerous cyber hostilities.
The Modus Operandi
Cybercriminals hijack vulnerable systems and sell the bandwidth of the infected devices. This is typically achieved by establishing Secure Shell (SSH) connections to poorly secured servers. Much of what is known about the technique comes from honeypots: Cowrie honeypots, for instance, are engineered to emulate UNIX systems and record the tactics attackers use to gain unauthorised access. Once inside a system, attackers use legitimate tools such as public Docker images to deploy proxy monetisation services. Because these tools are genuine software in and of themselves, anti-malware software does not detect them, and endpoint detection and response (EDR) tools struggle with the same problem.
The Major Challenges
Limitations of Current Safeguards – Current malware-detection software cannot distinguish between malicious and genuine use of bandwidth-sharing services, because the activity is not inherently malicious.
A Bigger Threat Than Cryptojacking – Proxyjacking poses a bigger threat than cryptojacking, in which systems are compromised to mine cryptocurrency. Proxyjacking uses minimal system resources, making it more challenging to identify: it offers perpetrators a higher degree of stealth because it is resource-light, whereas cryptojacking leaves CPU and GPU usage footprints.
Role of Technology in the Fight Against Proxyjacking
Advanced Safety Measures- Implementing advanced safety measures is crucial in combating proxyjacking. Network monitoring tools can help detect unusual traffic patterns indicative of proxyjacking. Key-based authentication for SSH can significantly reduce the risk of unauthorized access, ensuring that only trusted devices can establish connections. Intrusion Detection Systems and Intrusion Prevention Systems can go a long way towards monitoring unusual outbound traffic.
Robust Verification Processes – Bandwidth-sharing services must adopt robust verification processes to ensure that only legitimate users share bandwidth. This could include stricter identity verification and continuous monitoring of user activity to identify and block suspicious behaviour.
Policy Recommendations
Verification for Bandwidth-Sharing Services – Mandatory verification standards should be enforced for bandwidth-sharing services, including stringent Know Your Customer (KYC) protocols to verify user identities. A strong regulatory body should ensure compliance with these standards and impose penalties for violations. Mandatory transparency reports should document the user base, verification processes, and incidents.
Robust SSH Security Protocols – Key-based authentication for SSH should be mandated across organisations to neutralise the risk of brute-force attacks. Mandatory security audits of SSH configurations will help ensure that best practices are followed and vulnerabilities are identified. Detailed logging of SSH attempts will streamline the identification and investigation of suspicious behaviour.
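As a rough sketch of what such log analysis might look like, the snippet below counts failed-password entries per source address in OpenSSH-style log lines. The sample lines and the threshold of three attempts are illustrative assumptions, not drawn from any real incident:

```python
import re
from collections import Counter

# Illustrative auth-log excerpts in OpenSSH's usual phrasing
log_lines = [
    "sshd[101]: Failed password for root from 203.0.113.7 port 52211 ssh2",
    "sshd[102]: Failed password for admin from 203.0.113.7 port 52212 ssh2",
    "sshd[103]: Accepted publickey for deploy from 198.51.100.4 port 40022 ssh2",
    "sshd[104]: Failed password for root from 203.0.113.7 port 52213 ssh2",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

# Count failed attempts per source IP
failures = Counter()
for line in log_lines:
    m = FAILED.search(line)
    if m:
        failures[m.group(1)] += 1

# Flag sources above a brute-force threshold (3 here, purely illustrative)
suspects = [ip for ip, n in failures.items() if n >= 3]
print(suspects)  # ['203.0.113.7']
```

A production setup would read the live auth log and feed flagged addresses into a blocker such as a firewall rule, but the core pattern-count-threshold loop is the same.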
Effective Anomaly Detection Systems – A standard anomaly detection system should be designed to monitor networks. An industry-wide system should focus on detecting inconsistencies in traffic patterns that indicate proxyjacking. Mandatory protocols for incident reporting to a centralised authority should be established. The system should incorporate machine learning to keep pace with evolving attack methodologies.
Framework for Incident Response – A national framework should include guidelines for the investigation, response, and remediation procedures organisations must follow. A centralised database can log and track all proxyjacking incidents, allowing real-time information sharing. This mechanism will aid in identifying emerging trends and common attack vectors.
Whistleblower Incentives – Enacting whistleblower protection laws will ensure the proper safety of individuals reporting proxyjacking activities. Monetary rewards provide extra incentives and motivate individuals to join whistleblowing programs. To provide further protection to whistleblowers, secure communication channels can be established which will ensure full anonymity to individuals.
Conclusion
Proxyjacking represents an insidious and complicated threat in cyberspace. By exploiting legitimate bandwidth-sharing services, cybercriminals can profit while remaining largely anonymous. Addressing this issue requires a multifaceted approach, including advanced anomaly detection, effective verification systems, and comprehensive incident response frameworks. These measures, combined with strong cyber awareness among netizens, will help ensure a healthy and robust cyberspace.
References
- https://gridinsoft.com/blogs/what-is-proxyjacking/
- https://www.darkreading.com/cyber-risk/ssh-servers-hit-in-proxyjacking-cyberattacks
- https://therecord.media/hackers-use-log4j-in-proxyjacking-scheme