Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website or liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Network (VPN) connections, though used as protected network connections, do not ensure total privacy, as third parties can still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ used by cybercriminals to steal passwords and gain access to accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

Executive Summary:
A viral post currently circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link for individuals to claim the offer. This post has gained significant traction, with many users engaging in it in good faith, believing it to be a legitimate promotional offer. However, after careful investigation, it has been confirmed that this post is, in fact, a phishing scam designed to steal personal and financial information from unsuspecting users. This report seeks to examine the facts surrounding the viral claim, confirm its fraudulent nature, and provide recommendations to minimize the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, this claim has been verified as misleading. Reliance Jio is not running any such Holi promotion. The link being forwarded is a phishing scam designed to steal users’ personal and financial details. There are no reports of this offer on Jio’s official website or verified social media accounts. The URL included in the message does not end in the official Jio domain, indicating a fake website. The website requests individuals’ personal information, which can then be used for cybercrime. Additionally, we checked the link with the ScamAdviser website, which flagged it as suspicious and unsafe.
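The domain check described above can be automated. The sketch below is a minimal illustration, assuming jio.com as the official domain and using a made-up scam-style URL; it flags any link whose hostname is not the official domain or a subdomain of it:

```python
from urllib.parse import urlparse

def is_official_link(url: str) -> bool:
    """Return True only if the URL's host is jio.com or a subdomain of it.

    Checks the parsed hostname rather than doing a substring match, since
    scam URLs often embed the brand name in a subdomain or in the path.
    """
    host = (urlparse(url).hostname or "").lower()
    return host == "jio.com" or host.endswith(".jio.com")

# The scam-style URL below is a hypothetical placeholder, not the actual link.
print(is_official_link("https://www.jio.com/offers"))              # True
print(is_official_link("https://jio.holi-gift700.example/claim"))  # False
```

Note that the check inspects the parsed hostname, not the raw string: a naive `"jio.com" in url` test would wrongly accept a URL like `https://evil.example/jio.com`.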


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Though formerly confined to conflict zones, “GPS spoofing” has lately become a growing hazard for pilots and aircraft operators across the world, and several countries have been facing such incidents. The term’s definition stems from the US Radio Technical Commission for Aeronautics, which provides specialised advice to government regulatory authorities. The Global Positioning System (GPS) has become an essential part of aviation infrastructure, superseding the traditional radio beams once used to guide planes towards landing. GPS spoofing occurs when a deceptive radio signal overrides a legitimate GPS satellite signal, so that the receiver gets false location information. Although GPS signal interference of this character has existed for over a decade, this is the first time civilian passenger flights have faced such a significant danger. According to Agence France-Presse (AFP), false GPS signals mislead onboard aircraft systems and complicate the job of airline pilots flying near conflict areas. GPS spoofing may also be the by-product of military electronic warfare systems deployed in zones of regional tension. It can cause significant upheaval in commercial aviation, affecting arrivals, departures and, above all, safety.
Spoofing might likewise involve one country’s military sending false GPS signals to an enemy plane or drone to impede its ability to operate, with collateral impact on airliners operating nearby. Such collateral damage to commercial aircraft becomes more likely as confrontations escalate and militaries send faulty GPS signals to thwart drones and other aircraft. It could even trigger a global crisis if a civilian aircraft were lost in a high-risk area close to an active battle zone. Furthermore, GPS jamming is different from GPS spoofing: jamming merely blocks or obstructs GPS signals, whereas spoofing substitutes false ones, making it far more threatening.
Global Reporting
An International Civil Aviation Organization (ICAO) assessment released in 2019 indicated that there were 65 spoofing incidents across the Middle East in the preceding two years, according to the C4ADS report. At the beginning of 2018, Eurocontrol received more than 800 reports of Global Navigation Satellite System (GNSS) interference in Europe. GPS spoofing in Eastern Europe and the Middle East has resulted in divergences of up to 80 nautical miles from the planned flight route, and affected aircraft have had to depend on radar vectors from Air Traffic Control (ATC). According to Forbes, the flight data intelligence website OPSGROUP, whose 8,000 members include pilots and controllers, has been reporting spoofing incidents since September 2023. Similarly, over 20 airliners and corporate jets flying over Iran diverted from their planned paths after being led off course by misleading GPS signals transmitted from the ground, which overrode the aircraft’s navigation systems.
In this context, malicious hackers, still at large, have lately worked out how to override an airplane’s critical Inertial Reference System (IRS), an essential piece of technology that manufacturers describe as the “brains” of an aircraft. Current IRS designs are not prepared to counter this kind of attack. The IRS uses accelerometers, gyroscopes and electronics to deliver accurate attitude, speed and navigation data so that the aircraft can determine how it is moving through the airspace. GPS spoofing incidents can render the IRS ineffective, and in numerous cases all navigation capability is lost.
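One simple sanity check that illustrates how a spoofed position can be caught is to test whether successive GPS fixes imply a physically possible groundspeed. This is a toy sketch, not how certified avionics cross-check GPS against the IRS; the 600-knot ceiling is an assumed figure for an airliner:

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in nautical miles."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def fix_is_plausible(prev_fix, new_fix, dt_s, max_gs_kt=600.0):
    """Flag a GPS fix whose implied groundspeed exceeds what the aircraft
    could physically fly; a sudden large position jump is one classic
    symptom of spoofing. Fixes are (lat, lon) tuples, dt_s in seconds."""
    dist_nm = haversine_nm(prev_fix[0], prev_fix[1], new_fix[0], new_fix[1])
    implied_gs_kt = dist_nm / (dt_s / 3600.0)
    return implied_gs_kt <= max_gs_kt

# A fix that jumps one degree of latitude (~60 NM) in 60 s implies ~3,600 kt.
print(fix_is_plausible((28.0, 77.0), (28.01, 77.0), 60))  # True
print(fix_is_plausible((28.0, 77.0), (29.0, 77.0), 60))   # False
```

A slow, gradual spoof would evade this particular check, which is why real systems also compare GPS output against independent inertial data.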
Red Flag from Agencies
The European Union Aviation Safety Agency (EASA) and the International Air Transport Association (IATA) jointly hosted a workshop on incidents in which satellite navigation systems have been spoofed or obstructed, and concluded that these pose a considerable safety challenge. IATA and EASA have also taken measures to share information about GPS tampering so that crews and pilots can determine when it is occurring. EASA had earlier warned of an upsurge in reports of GPS spoofing and jamming in the Baltic Sea area, around the Black Sea, and in regions near Russia and Finland during 2022 and 2023. According to industry officials, fitting the latest technologies to civil aircraft can take several years, and with GPS spoofing incidents increasing, there is no time to dawdle. Experts have noted critical navigation failures on airplanes, with several recent reports of alarming cyber attacks altering planes’ in-flight GPS. GPS spoofing could affect commercial airlines and cause further disarray: pilots may be diverted from their flight route, or even fly into a no-fly or otherwise unauthorised zone, putting the aircraft at risk.
OpsGroup, a global group of pilots and technicians, first raised awareness of the issue, after which the Federal Aviation Administration (FAA) issued a warning on the flight-safety risk the spate of attacks poses to civil aviation operations. In addition, India’s civil aviation regulator, the Directorate General of Civil Aviation (DGCA), issued an advisory circular on spoofing threats to planes’ GPS signals when flying over parts of the Middle East. The DGCA advisory further notes that the aviation industry is grappling with uncertainty over the contemporary dangers of GNSS jamming and spoofing.
Conclusion
As the aviation industry continues to grapple with GPS spoofing, it remains largely unprepared to combat it, and should therefore explore attainable technologies to prevent such attacks. As international conflicts grow more convoluted, technological solutions can be pricey, intricate and not always effective, depending on the sort of spoofing used.
As GPS interference attacks become more complex, specialised countermeasures must be continually updated. Measures that can help avert GPS spoofing include:
- Education and training, to increase awareness among pilots, air traffic controllers and other aviation experts.
- Receiver technology: creating and deploying more state-of-the-art GPS receiver technology.
- Monitoring and reporting: installing robust monitoring systems.
- Cooperation among stakeholders such as government bodies and aviation organisations.
- Data and information sharing.
- Regulatory measures: regulations and guidelines from regulatory and government bodies.
References
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/false-gps-signal-surge-makes-life-hard-for-pilots/articleshow/108363076.cms?from=mdr
- https://nypost.com/2023/11/20/lifestyle/hackers-are-taking-over-planes-gps-experts-are-lost-on-how-to-fix-it/
- https://www.timesnownews.com/india/planes-losing-gps-signal-over-middle-east-dgca-flags-spoofing-threat-article-105475388
- https://www.firstpost.com/world/gps-spoofing-deceptive-gps-lead-over-20-planes-astray-in-iran-13190902.html
- https://www.forbes.com/sites/erictegler/2024/01/31/gps-spoofing-is-now-affecting-airplanes-in-parts-of-europe/?sh=48fbe725c550
- https://www.insurancejournal.com/news/international/2024/01/30/758635.htm
- https://airwaysmag.com/gps-spoofing-commercial-aviation/
- https://www.wsj.com/articles/aviation-industry-to-tackle-gps-security-concerns-c11a917f
- https://www.deccanherald.com/world/explained-what-is-gps-spoofing-that-has-misguided-around-20-planes-near-iran-iraq-border-and-how-dangerous-is-this-2708342

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot Gemini to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI’s GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4 and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes and improve through trial and error. However, despite their efficiency, these AI models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not based on the training data, are incorrectly decoded by the model’s transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information or contextually inappropriate responses.
A significant source of hallucination in machine learning algorithms is bias in the input the model receives. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
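One common mitigation for hallucinated claims is to check generated text against trusted source material. The sketch below is a deliberately crude keyword-overlap filter, standing in for the entailment models production systems would actually use; the example texts and the 0.5 threshold are illustrative assumptions:

```python
import re

def ungrounded_sentences(answer: str, source: str, min_overlap: float = 0.5):
    """Return answer sentences whose content-word overlap with the source
    text falls below min_overlap. A rough stand-in for the grounding /
    entailment checks real systems use to catch hallucinated claims."""
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        # keep only content-bearing words (longer than 3 characters)
        words = [w for w in re.findall(r"[a-z']+", sent.lower()) if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < min_overlap:
            flagged.append(sent)
    return flagged

source = "The Eiffel Tower is located in Paris and was completed in 1889."
answer = "The Eiffel Tower stands in Paris. It was designed by aliens from Mars."
print(ungrounded_sentences(answer, source))
# ['It was designed by aliens from Mars.']
```

Keyword overlap misses paraphrases and negations, which is precisely why this is only an illustration of the idea rather than a usable safeguard.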
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring that users are informed about AI’s capabilities and limitations; transparent communication is key.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establishing robust liability mechanisms to address harms caused by AI-generated misinformation, fostering trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; on the other hand, a lax approach could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
By fostering ethical AI development and enabling innovation, we can create an AI-driven future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21