Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website or liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Network (VPN) used as a protected network connection do not ensure total privacy, as third parties could still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal passwords and gain access to accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force just 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that has been in development for two years, enters into force across all 27 EU Member States on 1 August 2024, with a series of future deadlines attached; enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, such as biometric categorization, untargeted scraping of faces, and social scoring systems, and bans systems that try to read emotions in the workplace and schools. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning different legal provisions start to apply at various deadlines between entry into force and full applicability.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law, but developers of such applications face obligations in areas like data quality and anti-bias considerations. A third risk tier imposes lighter transparency requirements on makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
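To illustrate how the "whichever is higher" rule plays out in practice, here is a minimal Python sketch; the turnover figure used in the example is invented purely for demonstration.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    fixed_cap = 35_000_000
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return max(fixed_cap, turnover_cap)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # -> EUR 140,000,000
```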
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits AI posing unacceptable risks, such as social scoring systems and manipulative AI, while the bulk of the regulation addresses high-risk AI systems (a simplified mapping of the tiers is sketched after this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, although this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, as well as on third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though fewer than providers (developers). This applies to users located in the EU and to third-country users where the AI system's output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models need only comply with the copyright requirement and publish the training-data summary, unless their models present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the GPAI obligations: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, taking into account the specific challenges of addressing risks that may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
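To make the tiered structure described above easier to picture, the following sketch maps a few illustrative use cases onto the four risk tiers. The example use cases and the one-line obligation summaries are simplifications for demonstration, not wording from the Act itself.

```python
# Simplified summary of the four risk tiers discussed above (illustrative, not legal text).
RISK_TIERS = {
    "unacceptable": "Prohibited (e.g. social scoring, manipulative AI)",
    "high": "Allowed, subject to strict obligations (data quality, documentation, oversight)",
    "limited": "Transparency obligations (users must know they are interacting with AI)",
    "minimal": "No specific obligations (e.g. spam filters, AI-enabled video games)",
}

# Assumed classification of example use cases, for demonstration only.
EXAMPLE_USE_CASES = {
    "social scoring system": "unacceptable",
    "CV-screening tool used in hiring": "high",
    "customer-service chatbot": "limited",
    "email spam filter": "minimal",
}

def describe(use_case: str) -> str:
    tier = EXAMPLE_USE_CASES.get(use_case, "unclassified")
    return f"{use_case}: {tier} -> {RISK_TIERS.get(tier, 'needs case-by-case assessment')}"

for use_case in EXAMPLE_USE_CASES:
    print(describe(use_case))
```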
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline (also derived programmatically in the sketch after this list) is:
- 1 August 2024: The AI Act enters into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for GPAI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) and its corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
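Taking the 1 August 2024 entry-into-force date stated above as the anchor, the milestones in this timeline follow from simple month arithmetic. The short sketch below reproduces them; it is a convenience calculation, not an authoritative interpretation of the Act.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # as stated above

def months_after(start: date, months: int) -> date:
    """Return the same day-of-month `months` later (the day is the 1st here, so no clamping is needed)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

MILESTONES = [
    ("Prohibitions on unacceptable-risk AI (Chapters I & II)", 6),
    ("GPAI model, governance and penalty provisions", 12),
    ("General applicability of the Act", 24),
    ("Article 6(1) and corresponding high-risk obligations", 36),
]

for label, months in MILESTONES:
    print(f"{months_after(ENTRY_INTO_FORCE, months):%B %Y}: {label}")
```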
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application and can apply to companies not established in the EU, such as providers outside the EU that make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark piece of legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. It applies to AI systems and GPAI models, and it adopts a risk-based approach to AI governance, categorizing potential risks into four tiers: unacceptable, high, limited, and minimal, backed by stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: €35 million or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with its requirements, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging: implementing regulatory checks and securing compliance from the companies concerned pose a genuine conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to strike the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Introduction
The advancement of technology has brought about remarkable changes in the aviation industry, including the introduction of inflight internet access systems. While these systems provide passengers with connectivity during their flights, they also introduce potential vulnerabilities that can compromise the security of aircraft systems.
Inflight Internet Access Systems
Inflight internet access systems have become integral to the modern air travel experience, allowing passengers to stay connected even at 30,000 feet. However, these systems can also be attractive targets for hackers, raising concerns about the safety and security of aircraft operations.
The Vulnerabilities of Inflight Internet Access Systems
Securing Networked Avionics
Avionics, the electronic systems that support aircraft operation, play a crucial role in flight safety and navigation. While networked avionics are designed with robust security measures, they are not invulnerable to cyber threats. Therefore, it is essential to implement comprehensive security measures to protect these critical systems.
- Ensuring Robust Architecture: Networked avionics should be designed with a strong focus on security. Implementing secure network architectures, such as segmentation and isolation, can minimise the risk of unauthorised access and limit the potential impact of a breach.
- Rigorous Security Testing: Avionics systems should undergo rigorous security testing to identify vulnerabilities and weaknesses. Regular assessments, penetration testing, and vulnerability scanning are essential to proactively address any security flaws.
- Collaborative Industry Efforts: Collaboration between manufacturers, airlines, regulatory bodies, and security researchers is crucial in strengthening the security of networked avionics. Sharing information, best practices, and lessons learned can help identify and address emerging threats effectively.
- Continuous Monitoring and Updates: Networked avionics should be continuously monitored for any potential security breaches. Prompt updates and patches should be applied to address newly discovered vulnerabilities and protect against known attack vectors. A minimal sketch of a segmentation and traffic-monitoring check follows this list.
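As a concrete (and deliberately simplified) illustration of the segmentation and monitoring points above, the sketch below checks observed traffic flows between onboard network domains against a declared allow-list and flags anything that would bridge the passenger-facing network into the avionics domain. The domain names and flows are invented for demonstration and do not describe any real aircraft network.

```python
# Illustrative onboard network domains and the flows the segmentation policy permits.
ALLOWED_FLOWS = {
    ("passenger_wifi", "internet_gateway"),
    ("inflight_entertainment", "internet_gateway"),
    ("avionics", "avionics_maintenance"),  # avionics talks only to its own maintenance segment
}

def segmentation_violations(observed_flows):
    """Return every observed (source, destination) flow not permitted by the policy."""
    return [flow for flow in observed_flows if flow not in ALLOWED_FLOWS]

# Example monitoring pass: the second flow bridges the passenger network into avionics.
observed = [
    ("passenger_wifi", "internet_gateway"),
    ("passenger_wifi", "avionics"),
]

for source, destination in segmentation_violations(observed):
    print(f"ALERT: unauthorised flow {source} -> {destination}")
```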
Best Practices for the Security of Aircraft Systems
- Holistic Security Approach: Recognizing the interconnectedness of inflight internet access systems and networked avionics is essential. A holistic security approach should be adopted to address vulnerabilities in both systems and protect the overall aircraft infrastructure.
- Comprehensive Security Measures: The security of inflight internet access systems should be on par with any other internet-connected device. Strong authentication, encryption, intrusion detection, and prevention systems should be implemented to mitigate risks and ensure the integrity of data transmissions.
- Responsible Practices and Industry Collaboration: Encouraging responsible practices and fostering collaboration between security researchers and industry stakeholders can accelerate the identification and remediation of vulnerabilities. Open communication channels and a cooperative mindset are vital in addressing emerging threats effectively.
- Robust Access Controls: Strong access controls, such as multi-factor authentication and role-based access, should be implemented to limit unauthorised access to avionics systems. Only authorised personnel should have the necessary privileges to interact with these critical systems (a minimal access-control sketch follows this list).
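The sketch below illustrates the access-control point in the simplest possible terms: a role-based permission check in which sensitive avionics actions additionally require a verified second factor. The roles, permissions, and MFA step are assumptions made for demonstration, not a description of any real avionics access-control system.

```python
# Simplified role-to-permission mapping (illustrative assumptions only).
ROLE_PERMISSIONS = {
    "cabin_crew": {"view_ife_status"},
    "line_maintenance": {"view_ife_status", "read_avionics_logs"},
    "avionics_engineer": {"view_ife_status", "read_avionics_logs", "update_avionics_config"},
}

SENSITIVE_PERMISSIONS = {"read_avionics_logs", "update_avionics_config"}

def is_authorised(role: str, permission: str, mfa_verified: bool) -> bool:
    """Allow an action only if the role grants it; sensitive actions also require MFA."""
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        return False
    if permission in SENSITIVE_PERMISSIONS and not mfa_verified:
        return False
    return True

print(is_authorised("cabin_crew", "update_avionics_config", mfa_verified=True))          # False
print(is_authorised("avionics_engineer", "update_avionics_config", mfa_verified=False))  # False
print(is_authorised("avionics_engineer", "update_avionics_config", mfa_verified=True))   # True
```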
Conclusion
Inflight internet access systems bring convenience and connectivity to air travel but also introduce potential risks to the security of aircraft systems. It is crucial to understand and address the vulnerabilities associated with these systems in order to protect networked avionics and ensure passenger safety. By implementing robust security measures, conducting regular assessments, fostering collaboration, and adopting a comprehensive approach to aircraft cybersecurity, the aviation industry can mitigate the risks and navigate the skies with enhanced safety and confidence. Inflight connectivity and networked avionics are vital components of modern aircraft, providing connectivity and supporting critical flight operations; balancing connectivity with cybersecurity is essential to ensure the safety and integrity of aircraft systems.
Introduction
In an era where organisations are increasingly interdependent through global supply chains, outsourcing and digital ecosystems, third-party risk has become one of the most vital aspects of enterprise risk management. The SolarWinds hack, the MOVEit vulnerabilities and recent software vendor attacks all serve as a reminder of the necessity to enhance Third-Party Risk Management (TPRM). As cyber risks evolve and become more sophisticated and as regulatory oversight sharpens globally, 2025 is a transformative year for the development of TPRM practices. This blog explores the top trends redefining TPRM in 2025, encompassing real-time risk scoring, AI-driven due diligence, harmonisation of regulations, integration of ESG, and a shift towards continuous monitoring. All of these trends signal a larger movement towards resilience, openness and anticipatory defence in an increasingly interdependent world.
Real-Time and Continuous Monitoring becomes the Norm
Older TPRM methods relied on point-in-time assessments, typically performed annually or at onboarding. By 2025, organisations are shifting towards continuous, real-time monitoring of their third-party ecosystems. Advanced tools now make it possible for companies to take a real-time pulse of the security of their vendors by monitoring threat indicators, patching practices and digital footprint variations. This change has been further spurred by the growth in cyber supply chain attacks, where attackers target vendors to gain access to bigger organisations. Real-time monitoring software enables the timely detection of malicious activity, equipping organisations with a faster defence response. It also enables dynamic risk rating instead of reliance on outdated questionnaire-based scoring.
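A minimal sketch of the continuous-monitoring idea follows: periodically refreshed vendor risk scores are compared against a baseline, and any significant drop raises an alert. The score source is stubbed out with random noise because real deployments would pull from a commercial security-rating service; the vendor names, scores, and threshold are illustrative assumptions.

```python
import random  # stands in for a real security-ratings feed

# Baseline security scores per vendor (0-100), e.g. captured at onboarding.
baseline_scores = {"vendor_a": 92, "vendor_b": 78, "vendor_c": 85}

ALERT_THRESHOLD = 10  # alert if a score drops by more than this many points

def fetch_current_score(vendor: str) -> int:
    """Stub: in practice this would query an external rating service's API."""
    return baseline_scores[vendor] - random.randint(0, 20)

def monitoring_pass() -> None:
    for vendor, baseline in baseline_scores.items():
        current = fetch_current_score(vendor)
        if baseline - current > ALERT_THRESHOLD:
            print(f"ALERT: {vendor} score dropped from {baseline} to {current}")
        else:
            print(f"OK: {vendor} score {current}")

monitoring_pass()  # in production this would run on a schedule, not once
```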
AI and Automation in Risk Assessment and Due Diligence
Manual TPRM processes aren't sustainable anymore. In 2025, AI and machine learning are reshaping the TPRM lifecycle from onboarding and risk classification to contract review and incident handling. AI technology can now analyse massive amounts of vendor documentation and automatically raise red flags on potential issues. Natural language processing (NLP) is becoming more common for automated contract intelligence, which assists in detecting risky clauses, liability gaps, or missing data protection obligations. In addition, automation is increasing scalability for large organisations that have hundreds or thousands of third-party relationships, eliminating human errors and compliance fatigue. However, all of this must be implemented with a strong focus on security, transparency, and ethical AI use to ensure that sensitive vendor and organisational data remains protected throughout the process.
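To show the spirit of automated contract review in its simplest form, the sketch below flags clauses by keyword patterns. Production systems would rely on trained language models rather than regular expressions, and the clause patterns and sample text here are illustrative assumptions.

```python
import re

# Crude patterns standing in for NLP-based clause detection (illustrative only).
RISK_PATTERNS = {
    "liability cap / exclusion": r"(limit(ation)? of liability|in no event .* liable)",
    "broad data sharing": r"share .* data with (affiliates|third parties)",
    "slow breach notification": r"notify .* within (sixty|90|ninety) days",
}

def flag_clauses(contract_text: str) -> list[str]:
    """Return the labels of all risky-clause patterns found in the contract text."""
    text = contract_text.lower()
    return [label for label, pattern in RISK_PATTERNS.items() if re.search(pattern, text)]

sample = (
    "The Supplier may share Customer data with third parties for service improvement. "
    "In no event shall the Supplier be liable for indirect damages. "
    "The Supplier will notify the Customer of any breach within ninety days."
)
print(flag_clauses(sample))
# ['liability cap / exclusion', 'broad data sharing', 'slow breach notification']
```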
Risk Quantification and Business Impact Mapping
Risk scoring in isolation is no longer adequate. One of the major trends for 2025 is the merging of third-party risk with business impact analysis (BIA). Organisations are using tools that map vendors to particular business processes and assets, giving a clearer picture of how a vendor compromise would affect operations, customer information or financial position. This movement has resulted in increased use of risk quantification models, such as FAIR (Factor Analysis of Information Risk), which puts monetary values on vendor-related risks. By using the language of business value, CISOs and risk officers are more effective at prioritising risks and allocating resources.
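In the FAIR spirit, annualised loss exposure for a vendor relationship can be approximated by simulating loss event frequency and per-event magnitude, as in the back-of-the-envelope sketch below. The uniform ranges are a deliberate simplification of FAIR's calibrated distributions, and the figures are invented for illustration.

```python
import random

def estimated_annual_loss(freq_min: int, freq_max: int,
                          loss_min: float, loss_max: float,
                          runs: int = 10_000) -> float:
    """Monte Carlo estimate of annualised loss exposure for one vendor relationship."""
    totals = []
    for _ in range(runs):
        events = random.randint(freq_min, freq_max)            # loss events in a simulated year
        totals.append(sum(random.uniform(loss_min, loss_max)   # magnitude of each event
                          for _ in range(events)))
    return sum(totals) / runs

# Hypothetical vendor: 0-2 loss events per year, each costing EUR 50k-400k.
ale = estimated_annual_loss(freq_min=0, freq_max=2, loss_min=50_000, loss_max=400_000)
print(f"Estimated annualised loss exposure: EUR {ale:,.0f}")  # roughly EUR 225,000 on average
```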
Environmental, Social, and Governance (ESG) enters into TPRM
As ESG continues to rise on the corporate agenda, organisations are extending TPRM beyond cybersecurity and legal risks to incorporate ESG-related factors. In 2025, organisations evaluate whether their suppliers have ethical labour practices, sustainable supply chains, DEI (Diversity, Equity, Inclusion) metrics and climate impact disclosures. This expansion is not only a reputational concern; a third party's ESG non-compliance can now trigger regulatory or shareholder action. ESG risk scoring software and vendor ESG audits are becoming components of onboarding and performance evaluations.
Shared Assessments and Third-Party Exchanges
To avoid the duplication of effort created when vendors respond to the same security questionnaires for multiple clients, the trend is moving toward shared assessments. Systems such as the SIG Questionnaire (Standardised Information Gathering) and the Global Vendor Exchange allow vendors to upload once and share with many clients. This change not only simplifies the due diligence process but also enhances data accuracy, standardisation and vendor experience. In 2025, organisations are relying more and more on industry-wide vendor assurance platforms to minimise duplication, decrease costs and maximise trust.
Incident Response and Resilience Partnerships
Another trend on the rise is bringing vendors into incident response planning. In 2025, proactive organisations treat major vendors not merely as suppliers but as resilience partners. This encompasses shared tabletop exercises, communication procedures and breach notification SLAs. With ransomware attacks increasing and cloud reliance growing, organisations now require vendor-side recovery plans along with recovery time objective (RTO) and recovery point objective (RPO) metrics. TPRM is transforming into a comprehensive resilience management function where readiness, not mere compliance, takes centre stage.
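As a small illustration of how vendor-declared resilience metrics can be checked against an organisation's own tolerances, consider the sketch below. The service tiers, RTO/RPO thresholds, and vendor figures are hypothetical values chosen for demonstration.

```python
# Organisation's maximum tolerances per service tier, in hours (illustrative assumptions).
REQUIRED = {
    "critical": {"rto_hours": 4, "rpo_hours": 1},
    "standard": {"rto_hours": 24, "rpo_hours": 12},
}

# Vendor-declared figures gathered from contracts or questionnaires (hypothetical values).
vendors = [
    {"name": "payroll_saas", "tier": "critical", "rto_hours": 8, "rpo_hours": 1},
    {"name": "marketing_crm", "tier": "standard", "rto_hours": 12, "rpo_hours": 6},
]

for vendor in vendors:
    limits = REQUIRED[vendor["tier"]]
    gaps = [metric for metric in ("rto_hours", "rpo_hours") if vendor[metric] > limits[metric]]
    status = "GAP in " + ", ".join(gaps) if gaps else "meets requirements"
    print(f"{vendor['name']} ({vendor['tier']}): {status}")
```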
Conclusion
Third-Party Risk Management in 2025 is no longer about checklists and compliance audits; it's a dynamic, intelligence-driven and continuous process. With regulatory alignment, AI automation, real-time monitoring, ESG integration and resilience partnerships leading the way, organisations are transforming their TPRM programs to address contemporary threat landscapes. As digital ecosystems grow increasingly complex and interdependent, managing third-party risk is now essential. Early adopters who invest in tools, talent and governance will be more likely to create secure and resilient businesses for the AI era.
References
- https://finance.ec.europa.eu/publications/digital-operational-resilience-act-dora_en
- https://digital-strategy.ec.europa.eu/en/policies/nis2-directive
- https://www.meity.gov.in/data-protection-framework
- https://securityscorecard.com
- https://sharedassessments.org/sig/
- https://www.fairinstitute.org/fair-model