What’s Your New Year's Resolution?
2025 is knocking firmly at our door and we have promises to make and resolutions to keep. Time to make your list for the New Year and check it twice.
- Lifestyle targets 🡪 Check
- Family targets 🡪 Check
- Social targets 🡪 Check
Umm, so far so good, but what about your cybersecurity targets for the year? Hey, you look confused and concerned. Wait a minute, you do not have one, do you?
I get it. Though the digital world still puzzles, and sometimes outright scares us, we are not yet in 'Take-Charge-Of-Your-Digital-Safety' mode. We prefer to depend on whatever security software we are using and keep our fingers crossed that the bad guys (read: threat actors) do not find us.
Let me illustrate why cybersecurity should be one of your top priorities. You know that stress is a major threat to our continued good health, right? Well, if your devices, social media accounts, office e-mail or network, or God forbid, bank accounts were compromised, would that not cause stress? Think about the probable repercussions and you will see why I am harping on prioritising security.
Fret not. We will keep it brief as we well know you have 101 things to do in the next few days leading up to 01/01/2025. Just add cyber health to the list and put in motion the following:
- Install and activate comprehensive security software on ALL internet-enabled devices you have at home. Yes, including your smartphones.
- Set yourself a date to create separate, unique passwords for all your accounts. Or use the password manager that comes with all reputed security software to make life simpler.
- Keep home Wi-Fi turned off at night.
- Do not set social media accounts to auto-download photos/documents.
- Activate parental controls on all the devices used by your children to monitor and mentor them. But keep them apprised.
- Do not blindly trust anyone or anything online – this includes videos, speeches, emails, voice calls, and video calls. Be aware of fakes.
- Be aware of the latest threats and talk about unsafe cyber practices and behaviour often at home.
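For the technically inclined, the idea behind "separate, unique passwords" can be sketched in a few lines of Python using the standard-library `secrets` module. This is just an illustration of the principle (the function name and length are arbitrary choices for the example, not a recommendation of any particular tool):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation.

    secrets uses a cryptographically strong random source,
    unlike the general-purpose random module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# The key habit: a distinct password per account, never one reused everywhere.
passwords = {site: generate_password() for site in ("email", "bank", "social")}
```

A password manager does essentially this at scale, plus secure storage and autofill, which is why the tip above suggests using one.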
Short and sweet, as promised.
We will be back, with more tips, and answers to your queries. Drop us a line anytime, and we will be happy to resolve your doubts.
Ciao!

Introduction
The digital expanse of the metaverse has recently come under scrutiny following a gruesome incident: in a realm crafted for connection and exploration, the avatar of a 16-year-old girl was subjected to a sexual assault by a group of users, igniting ethical, legal and societal discourse. The incident is a stark reminder that the cyberverse, while offering endless possibilities and experiences, also presents glaring challenges that require serious consideration.
This incident has sparked a critical question about the genuine psychological trauma inflicted by virtual experiences, highlighting the strong emotional repercussions of illicit virtual actions. While the physical body remains unharmed, a digital assault can leave lasting scars on the victim's psyche. This raises pressing questions about the ethical implications of virtual interactions and the responsibility of service providers to protect users' well-being on their platforms.
The Judicial Quagmire
The digital nature of these assaults raises the complex jurisdictional questions that pervade cyber offences. We are still novices at navigating a digital labyrinth where avatars can transcend borders with a click of a mouse. The current legal structure is not equipped to tackle virtual crimes, calling for urgent reform. Policymakers and legal professionals must first define virtual offences, with clear jurisdictional boundaries, so that justice is not hampered by geographical restrictions.
Meta’s Accountability
Meta, the platform on which this gruesome incident occurred, finds itself at the crossroads of an ethical dilemma. The company had implemented numerous safeguards, yet they proved futile in preventing such a harrowing act. The incident raises broader questions about the role and responsibilities of tech juggernauts, chief among them: how can a company strike a balance between innovation and the protection of its users?
The Tightrope of Ethics
The metaverse is the epitome of innovation, yet this harrowing incident highlights a fundamental ethical contention. The real challenge is to harness the power of virtual reality while addressing the risks of digital hostilities. As society grapples with this conundrum, stakeholders must work in tandem to formulate robust and effective legal structures that protect the rights and well-being of users. Balancing technological development with ethical obligations will require collective effort.
Reflections of Society
Beyond legal and ethical considerations, this act calls for wider societal reflection. It emphasises the pressing need for a cultural shift that fosters empathy, digital civility and respect. As we tread deeper into the virtual realm, we must strive to cultivate an ethos that upholds dignity in both the digital and real worlds. This shift is only possible through awareness campaigns, educational initiatives and strong community engagement that foster a culture of respect and responsibility.
Safer and Ethical Way Forward
A multidimensional approach is essential to address the complicated challenges cyber violence poses. Several measures can pave the way for safer cyberspace for netizens.
- Legislative Reforms - There is an urgent need to revamp legislative frameworks to effectively address the complexities of new and emerging virtual offences. Tech companies must collaborate with governments to formulate best practices and help develop standard security measures that prioritise user protection.
- Public Awareness and Engagement - Public awareness campaigns that educate users on crucial issues such as cyber resilience, ethics, digital detox and responsible online behaviour play a critical role in making netizens vigilant against cyber hostilities and able to help fellow netizens in distress. Civil society organisations and think tanks such as the CyberPeace Foundation are pioneers of cyber safety campaigns in the country, working in tandem with governments across the globe to curb cyber hostilities.
- Interdisciplinary Research - Policymakers should delve deeper into the ethical, psychological and societal ramifications of digital interactions. A multidisciplinary research approach is crucial for formulating evidence-based policy.
Conclusion
This digital gang rape is a wake-up call demanding bold measures to confront the intricate legal, societal and ethical pitfalls of the metaverse. As we navigate the digital labyrinth, our collective decisions will shape the metaverse's future. By nurturing a culture of empathy, responsibility and innovation, we can forge a path that honours the dignity of netizens, upholds ethical principles and fosters a vibrant and safe cyberverse. In this significant moment, ethical vigilance, diligence and active collaboration are indispensable.
References:
- https://www.thehindu.com/sci-tech/technology/virtual-gang-rape-reported-in-the-metaverse-probe-underway/article67705164.ece
- https://thesouthfirst.com/news/teen-uk-girl-virtually-gang-raped-in-metaverse-are-indian-laws-equipped-to-handle-similar-cases/
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern, technology-driven warfare and has simultaneously impacted many other spheres. Nations often earmark significant investments for defence, supporting its growth and modernisation, and AI has become a prime area of investment and development in the pursuit of technological superiority for defence forces. India's focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapon Systems (AWS) is 'autonomy': the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns have been flagged as the most prominent issue by many states, international organisations, civil society groups and distinguished figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
However advanced a computer becomes at simulating human emotions such as compassion, empathy or altruism, the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which would act as a compromise between autonomy and accountability.
A 'human-on-the-loop' system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a 'human-in-the-loop' system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
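The difference between the two oversight models can be sketched in a few lines of Python. This is purely an illustrative simulation of the control-flow idea, not a real weapons API: `Engagement`, `threat_score` and the veto callback are hypothetical names invented for the example. In the 'human-on-the-loop' pattern, the system proposes engagements on its own, and the operator's only power is to abort each one before it proceeds:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Engagement:
    """A hypothetical engagement proposed by the autonomous system."""
    target_id: str
    threat_score: float  # the system's confidence, 0.0 to 1.0

def human_on_the_loop(proposals: list[Engagement],
                      operator_veto: Callable[[Engagement], bool]) -> list[str]:
    """System selects targets autonomously; a human may abort each one."""
    log = []
    for engagement in proposals:
        if operator_veto(engagement):          # operator intervenes in time
            log.append(f"aborted:{engagement.target_id}")
        else:                                  # no override: system proceeds
            log.append(f"engaged:{engagement.target_id}")
    return log

# Example: the operator vetoes anything the system is less than 70% sure about.
log = human_on_the_loop(
    [Engagement("t1", 0.9), Engagement("t2", 0.4)],
    operator_veto=lambda e: e.threat_score < 0.7,
)
# log == ["engaged:t1", "aborted:t2"]
```

A 'human-in-the-loop' variant would invert the default: no engagement happens at all unless the operator has explicitly pre-selected the target, making human authorisation the precondition rather than the exception.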
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Several efforts to regulate such weapons are under discussion at the international level. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international law, such as the Geneva Conventions, offers some protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law, and setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/
Introduction
Privacy has become a major concern for netizens, as social media companies have access to users' data and the ability to use that data as they see fit. Meta's business model, which relies heavily on collecting and processing user data to deliver targeted advertising, has long been under scrutiny. The conflict between Meta and the EU traces back to the enactment of the GDPR in 2018: Meta has faced numerous fines for failing to comply with the regulation, chiefly for failing to obtain explicit consent for data processing under Chapter 2, Article 7 of the GDPR. The ePrivacy Regulation, which focuses on digital communication and data privacy, is the next step in the EU's arsenal to protect user privacy and will target the cookie policies and tracking technologies crucial to Meta's ad-targeting mechanism. Meta's core revenue stream is targeted advertising, which requires vast amounts of data to create a personalised experience, and it is precisely this that the EU is scrutinising.
Pay for Privacy Model and its Implications with Critical Analysis
Meta came up with a solution to deal with the privacy issue - ‘Pay or Consent,’ a model that allows users to opt out of data-driven advertising by paying a subscription fee. The platform would offer users a choice between free, ad-supported services and a paid privacy-enhanced experience which aligns with the GDPR and potentially reduces regulatory pressure on Meta.
Meta now needs to assess the economic feasibility of this model: how much would a user be willing to pay for the privacy offered, and how far can Meta shift its monetisation from ad-driven profits to subscription revenues? The model would directly affect the advertisers who rely on Meta's detailed user data for targeted advertising, potentially decreasing ad revenue and pushing Meta to innovate other monetisation strategies.
For users, the potential outcome is increased privacy and greater control over their data, in line with global privacy concerns. While users will undoubtedly appreciate the option to avoid tracking, the need to pay might become a barrier, possibly dividing users into cost-conscious and privacy-conscious segments. Setting a reasonable price point is necessary for widespread adoption of the model.
For the regulators and the industry, a new precedent would be set in the tech industry and could influence other companies’ approaches to data privacy. Regulators might welcome this move and encourage further innovation in privacy-respecting business models.
The affordability and fairness of the 'pay or consent' model could create digital inequality if privacy comes at a cost, or worse, becomes a luxury. The subscription model would also need to clarify what data would be collected and how it would be used for non-advertising purposes. In terms of market competition, rivals might capitalise on Meta's subscription model by offering free services with privacy guarantees, further pressuring Meta to refine its offerings to stay competitive. According to the EU, the model also needs to provide a third option for users: a free service whose ads are based on non-personalised advertising.
Meta has further expressed a willingness to explore various models to address regulatory concerns and enhance user privacy; its recent pilot programs testing the pay-for-privacy model are one example. Meta is actively engaging with EU regulators to find mutually acceptable solutions and to demonstrate its commitment to compliance while advocating for business models that sustain innovation. Meta executives have emphasised the importance of user choice and transparency in their future business strategies.
Future Impact Outlook
- The Meta-EU tussle over privacy is a manifestation of broader debates about data protection and business models in the digital age.
- The EU's stance on Meta’s ‘pay or consent’ model and any new regulatory measures will shape the future landscape of digital privacy, leading to other jurisdictions taking cues and potentially leading to global shifts in privacy regulations.
- Meta may need to iterate on its approach based on consumer preferences and concerns. Competitors and tech giants will closely monitor Meta’s strategies, possibly adopting similar models or innovating new solutions. And the overall approach to privacy could evolve to prioritise user control and transparency.
Conclusion
Consent is the cornerstone of privacy, and sidestepping it violates users' rights. How tech companies foster a culture of consent is of paramount importance in today's digital landscape. As Meta explores the 'pay or consent' model, it faces both opportunities and challenges in balancing user privacy with business sustainability. This situation serves as a critical test case for the tech industry, highlighting the need for innovative solutions that respect privacy while fostering growth and complying with data protection laws worldwide, including India's Digital Personal Data Protection Act, 2023.
Reference:
- https://ciso.economictimes.indiatimes.com/news/grc/eu-tells-meta-to-address-consumer-fears-over-pay-for-privacy/111946106
- https://www.wired.com/story/metas-pay-for-privacy-model-is-illegal-says-eu/
- https://edri.org/our-work/privacy-is-not-for-sale-meta-must-stop-charging-for-peoples-right-to-privacy/
- https://fortune.com/2024/04/17/meta-pay-for-privacy-rejected-edpb-eu-gdpr-schrems/