The advancement of technology has brought about remarkable changes in the aviation industry, including the introduction of inflight internet access systems. While these systems provide passengers with connectivity during their flights, they also introduce potential vulnerabilities that can compromise the security of aircraft systems.
Inflight Internet Access Systems
Inflight internet access systems have become integral to the modern air travel experience, allowing passengers to stay connected even at 30,000 feet. However, these systems can also be attractive targets for hackers, raising concerns about the safety and security of aircraft operations.
The Vulnerabilities of Inflight Internet Access Systems
Gateway for Hackers: Inflight internet access systems are gateways between the aircraft’s internal network and the outside world. This connectivity allows hackers to exploit potential vulnerabilities and gain unauthorised access to critical systems.
Potential Entry Points: Passengers’ Wi-Fi devices, such as smartphones, tablets, and laptops, can serve as entry points for cybercriminals. If these devices are compromised, they can be used as a launching pad for attacks targeting the aircraft’s systems, including networked avionics.
Implications for Aircraft Security: Successful attacks on inflight internet access systems can have severe consequences for aircraft operations and passenger safety. Hackers gaining unauthorised access to critical systems, such as avionics, can manipulate data, compromise flight controls, or even cause emergency situations.
Securing Networked Avionics
Avionics, the electronic systems that support aircraft operation, play a crucial role in flight safety and navigation. While networked avionics are designed with robust security measures, they are not invulnerable to cyber threats. Therefore, it is essential to implement comprehensive security measures to protect these critical systems.
Ensuring Robust Architecture: Networked avionics should be designed with a strong focus on security. Implementing secure network architectures, such as segmentation and isolation, can minimise the risk of unauthorised access and limit the potential impact of a breach (see the illustrative sketch after this list).
Rigorous Security Testing: Avionics systems should undergo rigorous security testing to identify vulnerabilities and weaknesses. Regular assessments, penetration testing, and vulnerability scanning are essential to proactively address any security flaws.
Collaborative Industry Efforts: Collaboration between manufacturers, airlines, regulatory bodies, and security researchers is crucial in strengthening the security of networked avionics. Sharing information, best practices, and lessons learned can help identify and address emerging threats effectively.
Continuous Monitoring and Updates: Networked avionics should be continuously monitored for any potential security breaches. Prompt updates and patches should be applied to address newly discovered vulnerabilities and protect against known attack vectors.
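To make the segmentation and isolation principle under “Ensuring Robust Architecture” more concrete, the following is a minimal illustrative sketch, not a description of any real avionics implementation. The domain names, allowlisted flows, and gateway logic are assumptions invented for illustration; real aircraft network architectures rely on dedicated standards and certified hardware gateways.

```python
# Minimal illustrative sketch of network domain segmentation.
# Domain names and rules are hypothetical, for illustration only;
# real avionics networks follow standards such as ARINC 664.

from dataclasses import dataclass

# Allowlist of permitted (source, destination) flows between domains.
# Note: no rule permits any flow *into* the avionics domain
# from the passenger-facing side.
ALLOWED_FLOWS = {
    ("passenger", "cabin_services"),   # e.g. inflight entertainment requests
    ("cabin_services", "passenger"),   # e.g. content delivery back to seats
    ("avionics", "cabin_services"),    # e.g. one-way flight data for displays
}

@dataclass
class Packet:
    src_domain: str
    dst_domain: str
    payload: bytes

def gateway_filter(packet: Packet) -> bool:
    """Return True only if the flow is explicitly allowlisted."""
    return (packet.src_domain, packet.dst_domain) in ALLOWED_FLOWS

# Usage: traffic from a compromised passenger device toward avionics
# is dropped by default, because it is never allowlisted.
attack = Packet("passenger", "avionics", b"exploit-attempt")
assert gateway_filter(attack) is False
```

The key design choice illustrated here is default-deny: any flow not explicitly allowlisted is dropped, which limits the blast radius of a compromised passenger device.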
Best Practices for the Security of Aircraft Systems
Holistic Security Approach: Recognising the interconnectedness of inflight internet access systems and networked avionics is essential. A holistic security approach should be adopted to address vulnerabilities in both systems and protect the overall aircraft infrastructure.
Comprehensive Security Measures: The security of inflight internet access systems should be on par with any other internet-connected device. Strong authentication, encryption, intrusion detection, and prevention systems should be implemented to mitigate risks and ensure the integrity of data transmissions.
Responsible Practices and Industry Collaboration: Encouraging responsible practices and fostering collaboration between security researchers and industry stakeholders can accelerate the identification and remediation of vulnerabilities. Open communication channels and a cooperative mindset are vital in addressing emerging threats effectively.
Robust Access Controls: Strong access controls, such as multi-factor authentication and role-based access, should be implemented to limit unauthorised access to avionics systems. Only authorised personnel should have the necessary privileges to interact with these critical systems.
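As a rough illustration of the access-control pattern recommended above, here is a minimal sketch combining role-based permissions with a mandatory multi-factor check. The roles, permissions, and MFA flag are hypothetical placeholders, not a depiction of any real avionics access-control system.

```python
# Minimal sketch of role-based access control with an MFA gate.
# Roles, permissions, and the MFA check are hypothetical placeholders.

ROLE_PERMISSIONS = {
    "maintenance_engineer": {"read_logs", "apply_patch"},
    "flight_crew": {"read_logs"},
    "passenger_services": set(),  # no access to avionics functions
}

def is_authorised(role: str, action: str, mfa_verified: bool) -> bool:
    """Allow an action only if the role grants it AND a second
    authentication factor has been verified for this session."""
    if not mfa_verified:
        return False  # MFA is mandatory regardless of role
    return action in ROLE_PERMISSIONS.get(role, set())

# Usage examples:
assert is_authorised("maintenance_engineer", "apply_patch", mfa_verified=True)
assert not is_authorised("flight_crew", "apply_patch", mfa_verified=True)
assert not is_authorised("maintenance_engineer", "apply_patch", mfa_verified=False)
```

Pairing the role check with a session-level MFA requirement means a stolen credential alone is never sufficient to reach a privileged action.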
Conclusion
Inflight internet access systems bring convenience and connectivity to air travel but also introduce potential risks to the security of aircraft systems. Understanding and addressing the vulnerabilities associated with these systems is crucial to protecting networked avionics and ensuring passenger safety. By implementing robust security measures, conducting regular assessments, fostering collaboration, and adopting a comprehensive approach to aircraft cybersecurity, the aviation industry can mitigate these risks and navigate the sky with enhanced safety and confidence. Inflight internet access systems and networked avionics are both vital components of modern aircraft, and balancing connectivity with cybersecurity is essential to preserving the safety and integrity of flight operations.
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can invoke “actual knowledge” to initiate takedowns, and they require senior-level scrutiny of those directives. By doing this, they maintain genuine security requirements while guiding India’s content governance system towards more transparent due process.
When Regulation Learns Restraint
To better understand the jurisprudence of revision, one must understand that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to take action within 36 hours of receiving “actual knowledge” that a piece of content is illegal (e.g., content that poses a threat to public order, sovereignty, decency, or morality). However, “actual knowledge” now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
The authorised authority in matters involving the police “must not be below the rank of Deputy Inspector General of Police (DIG)”. This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
There are two further structural guardrails. First, the Rules establish a monthly assessment of all takedown notifications by a Secretary-level officer of the relevant government to test necessity, proportionality, and compliance with India’s safe harbour provision under Section 79(3) of the IT Act. Second, so that platforms act precisely rather than expansively, takedown requests must be accompanied by legal justification, a description of the illegal act, and precise URLs or identifiers. The cumulative result of these guardrails is that each removal carries a proportionality check and a paper trail.
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) connected an intermediary’s removal obligation to court orders or notifications from official channels rather than vague notice. Over time, however, that line became hazy due to enforcement practices and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even as more recent decisions questioned the “reasonable efforts” expected of intermediaries, the 2025 amendment pays institutional homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal, reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, criteria that Indian courts increasingly demand for speech restrictions. For intermediaries, the result is clearer triggers, better logs, and fewer of the vague “please remove” communications that previously left compliance teams in legal limbo.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court’s ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. The High Court rejected X’s (formerly Twitter’s) petition contesting the legitimacy of the portal in September. The company had claimed that by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers’ acts were in accordance with Section 79(3)(b) and that the orders were “not dropping from the air but emanating from statutes.” The amendment turns compliance into conscience by conforming to the Sahyog Portal verdict, reiterating that due process is the moral grammar of governance rather than just a formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination; it is a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India’s path to content governance has been combative and iterative. The next rule-making cycle has been sharpened by the stays, split judgments, and strike-downs that have resulted from strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders. The lessons learnt are reflected in the 2025 amendment: review triumphs over opacity; specificity triumphs over vagueness; and due process triumphs over discretion. A digital republic balances freedom and force in this way.
Today, on the International Day of UN Peacekeepers, we honour the brave individuals who risk their lives to uphold peace in the world’s most fragile and conflict-ridden regions. These peacekeepers are symbols of hope, diplomacy, and resilience. But as the world changes, so do the arenas of conflict. In today’s interconnected age, peace and safety are no longer confined to physical spaces—they extend to the digital realm. As we commemorate their service, we must also reflect on the new frontlines of peacekeeping: the internet, where misinformation, cyberattacks, and digital hate threaten stability every day.
The Legacy of UN Peacekeepers
Since 1948, UN Peacekeepers have served in over 70 missions, protecting civilians, facilitating political processes, and rebuilding societies. From conflict zones in Africa to the Balkans, they’ve worked in the toughest terrains to keep the peace. Their role is built on neutrality, integrity, and international cooperation. But as hybrid warfare becomes more prominent and digital threats increasingly influence real-world violence, the peacekeeping mandate must evolve. Traditional missions are now accompanied by the need to understand and respond to digital disruptions that can escalate local tensions or undermine democratic institutions.
The Digital Battlefield
In recent years, we’ve seen how misinformation, deepfakes, online radicalisation, and coordinated cyberattacks can destabilise peace processes. Disinformation campaigns can polarise communities, hinder humanitarian efforts, and provoke violence. Peacekeepers now face the added challenge of navigating conflict zones where digital tools are weaponised. The line between physical and virtual conflict is blurring. Cybersecurity has gone beyond being just a technical issue and is now a peace and security issue as well. From securing communication systems to monitoring digital hate speech that could incite violence, peacekeeping must now include digital vigilance and strategic digital diplomacy.
Building a Culture of Peace Online
Safeguarding peace today also means protecting people from harm in the digital space. Governments, tech companies, civil society, and international organisations must come together to build digital resilience. This includes investing in digital literacy, combating online misinformation, and protecting human rights in cyberspace. Peacekeepers may not wear blue helmets online, but their spirit lives on in every effort to make the internet a safer, kinder, and more truthful place. The role of youth, educators, and responsible digital citizens has never been more crucial. A culture of peace must be cultivated both offline and online.
Conclusion: A Renewed Pledge for Peace
On this UN Peacekeepers’ Day, let us not only honour those who have served and sacrificed but also renew our commitment to peace in all its dimensions. The world’s conflicts are evolving, and so must our response. As we support peacekeepers on the ground, let’s also become peacebuilders in the digital world, amplifying truth, rejecting hate, and building safer, inclusive communities online. Peace today is not just about silencing guns but also silencing disinformation. The call for peace is louder than ever. Let’s answer it, both offline and online.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective counter-strategies has become urgent. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information, acting as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, and fact-checking reports by expert organisations or journalists. An integrated approach involving both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse and avoid manipulative and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the modus operandi is to help the mind develop resistance in the present to influence it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics such as emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase the motivation to stay vigilant against misinformation while others increase the ability to exercise that vigilance successfully.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means that not everyone can be protected at all times; at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive in nature, addressing misinformation after it has already spread extensively. From the perspective of total harm done, this reactionary method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may require repeated exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may cause certain misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, creating a situation in which misinformation spreads like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across their services, empowering users to recognise manipulative messaging through Prebunking and to verify the accuracy of questionable content through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Gamified interventions of this kind can be effective in immunising users against subsequent exposure to misinformation and in empowering them to build the competencies needed to detect it.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver timely corrections. Together, these mechanisms can help both Prebunking and Debunking interventions reach a larger or more targeted audience (see the illustrative sketch after this list).
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
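As a rough sketch of the algorithmic prioritisation recommended in the second point of this list, the snippet below boosts the ranking score of items labelled as Prebunking or Debunking content so they surface earlier in a feed. The labels, weights, and scoring model are invented for illustration; production feed-ranking systems are far more complex.

```python
# Illustrative sketch of a feed-ranking boost for Prebunking/Debunking
# content. Labels and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class FeedItem:
    item_id: str
    engagement_score: float          # platform's base relevance score
    is_prebunking: bool = False      # e.g. media-literacy material
    is_debunking: bool = False       # e.g. a verified fact-check

# Hypothetical multiplicative boosts for corrective content.
PREBUNK_BOOST = 1.5
DEBUNK_BOOST = 2.0

def ranked_feed(items: list[FeedItem]) -> list[FeedItem]:
    """Sort feed items so corrective content surfaces earlier."""
    def score(item: FeedItem) -> float:
        s = item.engagement_score
        if item.is_prebunking:
            s *= PREBUNK_BOOST
        if item.is_debunking:
            s *= DEBUNK_BOOST
        return s
    return sorted(items, key=score, reverse=True)

# Usage: a fact-check with modest engagement can outrank a viral claim.
feed = [
    FeedItem("viral_claim", engagement_score=10.0),
    FeedItem("fact_check", engagement_score=6.0, is_debunking=True),
]
assert ranked_feed(feed)[0].item_id == "fact_check"
```

Multiplicative boosts of this kind let corrective content outrank a more engaging false claim without removing anything from the feed.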
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, together with joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.