What’s Your New Year's Resolution?
2025 is knocking firmly at our door, and we have promises to make and resolutions to keep. Time to make your list for the New Year and check it twice.
- Lifestyle targets 🡪 Check
- Family targets 🡪 Check
- Social targets 🡪 Check
Umm, so far so good, but what about your cybersecurity targets for the year? Hey, you look confused and concerned. Wait a minute, you do not have one, do you?
I get it. Though the digital world still puzzles, and sometimes outright scares us, we are still not in ‘Take-Charge-Of-Your-Digital-Safety’ mode. We prefer to depend on whatever security software we are using and keep our fingers crossed that the bad guys (read: threat actors) do not find us.
Let me illustrate why cybersecurity should be one of your top priorities. You know that stress is a major threat to our continued good health, right? However, if your devices, social media accounts, office e-mail or network, or, God forbid, bank accounts were compromised, would that not cause stress? Think about the probable repercussions and you will see why I keep harping on prioritising security.
Fret not. We will keep it brief as we well know you have 101 things to do in the next few days leading up to 01/01/2025. Just add cyber health to the list and put in motion the following:
- Install and activate comprehensive security software on ALL internet-enabled devices you have at home. Yes, including your smartphones.
- Set yourself a date to change and create separate unique passwords for all accounts. Or use the password manager that comes with all reputed security software to make life simpler.
- Keep home Wi-Fi turned off at night
- Do not set social media accounts to auto-download photos/documents
- Activate parental controls on all the devices used by your children to monitor and mentor them. But keep them apprised.
- Do not blindly trust anyone or anything online – this includes videos, speeches, emails, voice calls, and video calls. Be aware of fakes.
- Be aware of the latest threats and talk about unsafe cyber practices and behaviour often at home.
Short and sweet, as promised.
We will be back, with more tips, and answers to your queries. Drop us a line anytime, and we will be happy to resolve your doubts.
Ciao!

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to notice. Thanks to rapid advances in artificial intelligence, the internet as we know it is turning into a treasure trove of hyper-optimised material over which vast bot armies battle. All that advancement, however, has a price, and much of it is paid in human well-being. Releasing highly personalised chatbots on a population already struggling with economic stagnation, loneliness, and anxiety about the planet is not exactly a formula for improved mental health. Reportedly, around 75% of children and teenagers have chatted with chatbot-generated fictional characters. AI chatbots are becoming ever more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation into the internal policies of a major international tech firm last year exposed alarming gaps: AI chatbots were allowed to create content involving romantic roleplay with children, racially discriminatory reasoning, and spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety and the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early education. Users under the age of thirteen can access this version of Gemini through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. According to Section 9, before processing the data of children, who are defined as people under the age of 18, Data Fiduciaries, entities that decide the goals and methods of processing personal data, must get verified consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural surveillance and child-targeted advertising. According to court interpretations, a child's well-being includes not just medical care but also their moral, ethical, and emotional growth.
While the DPDP Act is a significant step in the right direction, there are still important lacunae in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which largely concentrates on consent and harm prevention in data protection. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour. Child protection experts agree that illicit or sexual conversations with children in cyberspace are unacceptable; permitting even "flirtatious" conversation could normalise risky boundaries.
- International Standards and Best Practices - The concept of "safety by design" is highly valued in child online safety guidelines from around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. Mandating that platforms and developers proactively remove risks, rather than react to harms after the fact, is the bare-minimum standard; any AI guidelines that provide loopholes for child-directed roleplay fail to meet it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create fictional narratives with disclaimers. For example, chatbots could write articles promulgating false health claims or smears against public officials, as long as they were labelled "untrue." While disclaimers might give thin legal cover, they add to the proliferation of misleading information. Indeed, misinformation tends to spread widely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even when requested. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation has the potential to normalise damaging stereotypes. Researchers warn that such practice moves platforms from being passive hosts of offensive speech to active generators of discriminatory content. It is a difference that matters, as it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training, policy decisions, and system engineering. This fact demands a greater level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear guidelines for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, establishing their own limits until challenged.
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material; disclaimers do not make it acceptable.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory narratives, not merely tag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
The guidelines at issue are more than the internal folly of one firm; they point to a deeper systemic problem in AI regulation. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require swift resolution. Corporate self-regulation is only one part of the answer; the others include multi-stakeholder participation, stronger global frameworks, and ethical standards. In the end, trust in artificial intelligence will rest not on corporate interests but on its ability to preserve the truth, protect the vulnerable, and represent universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction
In the wake of the SpyLoan scandal, more than a dozen malicious loan apps were downloaded onto Android phones from the Google Play Store. The actual number is likely significantly higher, however, because the apps are also available on third-party marketplaces and questionable websites.
Unmasking the Scam
When a user borrows money, these predatory lending applications capture large quantities of information from their smartphone, which is then used to blackmail and coerce them into repaying the loan at exorbitant interest rates. While the loan amount is disbursed to users, these predatory loan apps request sensitive information by demanding access to the camera, contacts, messages, logs, images, Wi-Fi network details, calendar entries, and other personal data. All of this is then sent to loan shark servers.
Researchers have disclosed details of the applications used by loan sharks to mislead consumers, as well as the numerous techniques used to circumvent some of the limitations imposed by the Play Store. The malware is often built with appealing user interfaces and promises simple, rapid access to cash, with high-interest repayment conditions attached. The revelation of the SpyLoan scandal has triggered an immediate response from law enforcement agencies worldwide. With an urgent need to protect millions of users from becoming victims of malicious loan apps, it has become extremely important for law enforcement to unmask the culprits and dismantle the cyber-criminal network.
Apps banned: here is the list of the apps removed from the Google Play Store:
- AA Kredit: इंस्टेंट लोन ऐप (com.aa.kredit.android)
- Amor Cash: Préstamos Sin Buró (com.amorcash.credito.prestamo)
- Oro Préstamo – Efectivo rápido (com.app.lo.go)
- Cashwow (com.cashwow.cow.eg)
- CrediBus Préstamos de crédito (com.dinero.profin.prestamo.credito.credit.credibus.loan.efectivo.cash)
- ยืมด้วยความมั่นใจ – ยืมด่วน (com.flashloan.wsft)
- PréstamosCrédito – GuayabaCash (com.guayaba.cash.okredito.mx.tala)
- Préstamos De Crédito-YumiCash (com.loan.cash.credit.tala.prestmo.fast.branch.mextamo)
- Go Crédito – de confianza (com.mlo.xango)
- Instantáneo Préstamo (com.mmp.optima)
- Cartera grande (com.mxolp.postloan)
- Rápido Crédito (com.okey.prestamo)
- Finupp Lending (com.shuiyiwenhua.gl)
- 4S Cash (com.swefjjghs.weejteop)
- TrueNaira – Online Loan (com.truenaira.cashloan.moneycredit)
- EasyCash (king.credit.ng)
- สินเชื่อปลอดภัย – สะดวก (com.sc.safe.credit)
Risks with several dimensions
SpyLoan applications violate Google's Financial Services policy by unilaterally shortening the repayment period for personal loans to a few days or some other arbitrary time frame. Additionally, the operators threaten users with public embarrassment and exposure if they do not comply with these unreasonable demands.
Furthermore, the privacy policies presented by SpyLoan apps are misleading. While ostensibly reasonable justifications are provided for obtaining certain permissions, the underlying practices are highly intrusive. For instance, camera permission is ostensibly required to upload picture data for Know Your Customer (KYC) purposes, and access to the user's calendar is ostensibly required to schedule payment dates and reminders. In practice, both of these permissions are dangerous and can infringe on users' privacy.
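The mismatch between a lending app's stated purpose and the permissions it requests can be illustrated with a small checker. This is a hypothetical sketch: the permission strings follow Android's standard naming convention, but the "expected" and "intrusive" sets are illustrative assumptions, not a real vetting tool.

```python
# Hypothetical sketch: flag permissions a lending app requests beyond
# what its stated purpose plausibly requires.

# Permissions a loan app could plausibly justify (assumption for illustration).
EXPECTED_FOR_LENDING = {
    "android.permission.INTERNET",
    "android.permission.CAMERA",  # claimed KYC photo upload
}

# Permissions that are intrusive for a lending use case (assumption).
INTRUSIVE = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.READ_CALENDAR",
    "android.permission.READ_EXTERNAL_STORAGE",
}

def audit_permissions(requested: set[str]) -> set[str]:
    """Return the requested permissions that look intrusive for a loan app."""
    return requested & INTRUSIVE

# Example: a manifest requesting far more than KYC needs
requested = {
    "android.permission.INTERNET",
    "android.permission.CAMERA",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
}
print(sorted(audit_permissions(requested)))
```

A real review would also consider protection levels and runtime-permission prompts, but even this crude set intersection makes the pattern described above visible: contacts and SMS access have no place in a KYC flow.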
Prosecution Strategies and Legal Framework
Law enforcement agencies and legal authorities have initiated prosecution strategies against the individuals involved in the SpyLoan scandal. This multifaceted approach involves international agreements and the exploration of innovative legal avenues. Agencies need to collaborate with international counterparts on cyber-crime, leveraging the legal frameworks against digital fraud. Furthermore, the cross-border nature of the SpyLoan operation requires a strong legal framework for exchanging information, processing extradition requests, and pursuing legal action across multiple jurisdictions.
Legal Protections for Victims: Seeking Compensation and Restitution
As the legal battle unfolds in the aftermath of the SpyLoan scam, the focus shifts towards the victims, who suffered financial losses from these fraudulent apps. Beyond prosecuting culprits, the pursuit of justice should involve legal safeguards for victims. Existing consumer protection laws serve as a crucial shield for SpyLoan victims; these laws are designed to safeguard individuals against unfair practices.
Challenges in legal representation
As the legal pursuit of justice in the SpyLoan scam progresses, it encounters challenges that demand careful navigation and strategic solutions. One of the primary obstacles lies in jurisdictional complexity: across national borders, it is difficult to determine which jurisdiction holds authority and to build a unified approach to prosecuting offenders in different regions, coordinating the efforts of various government agencies.
Concealing the digital identities
One of the major challenges is the anonymity afforded by the digital realm, which makes it difficult to identify and apprehend the perpetrators of the scam. Scammers conceal their identities, making it hard for law enforcement agencies to attribute actions to specific individuals. This challenge can be overcome through joint efforts by international agencies, advanced digital forensics, and cutting-edge technology to unmask the scammers.
Technological challenges
The nature of cyber threats and crime patterns changes day by day as technology advances, and this has become a challenge for legal authorities. Scammers keep finding new vulnerabilities to exploit, making it essential for law enforcement agencies to stay a step ahead, which requires continuous training in cybercrime and cybersecurity.
Shaping the policies to prevent future fraud
As the scam unfolds, it has become important to empower users through sustained awareness campaigns. App developers, for their part, need to adopt a transparent approach with users.
Conclusion
It is important to shape policies that prevent future cyber fraud through a multifaceted approach. Proposals for legislative amendments, international collaboration, accountability measures, technological protections, and public awareness programmes all contribute to the creation of a legal framework that is proactive, flexible, and resilient to cybercriminals' shifting techniques. The legal system is at the forefront of this effort, playing a critical role in developing regulations that will protect the digital landscape for years to come.
Safeguarding against spyware threats like SpyLoan requires vigilance and adherence to best practices. Users should exclusively download apps from official sources, meticulously verify the authenticity of offerings, scrutinize reviews, and carefully assess permissions before installation.

Executive Summary:
In a recent advisory, the Indian Computer Emergency Response Team (CERT-In) issued a high-severity warning about older software versions across Apple devices. The high-severity rating stems from multiple vulnerabilities reported in Apple products that could allow an attacker to disclose sensitive information and execute arbitrary code on the targeted system. The advisory is a timely reminder of the necessity of keeping software up to date to prevent cyber threats: update to the latest versions and follow good cyber hygiene practices.
Devices Affected:
CERT-In advisory highlights significant risks associated with outdated software on the following Apple devices:
- iPhones and iPads: iOS and iPadOS versions prior to 18 and prior to 17.7
- Mac Computers: macOS builds prior to 14.7 (Sonoma) and 13.7 (Ventura), and earlier Sequoia releases
- Apple Watches: watchOS versions prior to 11
- Apple TVs: tvOS versions prior to 18
- Safari Browsers: versions prior to 18
- Xcode: versions prior to 16
- visionOS: versions prior to 2
Details of the Vulnerabilities:
The vulnerabilities discovered in these Apple products could potentially allow attackers to perform the following malicious activities:
- Access sensitive information: Attackers could access sensitive information stored in other parts of the compromised device.
- Execute arbitrary code: A compromised web page could deliver malicious code that runs on the targeted system, in the worst case giving the intruder full administrator privileges on the device.
- Bypass security restrictions: Measures designed to safeguard the device and the information on it could be bypassed, leaving the system open to further compromise.
- Cause denial-of-service (DoS) attacks: The vulnerabilities could be used to make the targeted device or service unavailable to its rightful users.
- Perform spoofing attacks: Attackers could create fake entities, users, or accounts to gain access to important information or carry out other unauthorised activities.
- Elevate privileges: The weaknesses might be exploited to grant the attacker a higher level of privileges on the targeted system.
- Engage in cross-site scripting (XSS) attacks: Some vulnerabilities make associated web applications/sites prone to XSS attacks through the injection of hostile scripts into web page code.
Vulnerabilities:
CVE-2023-42824
- A local attacker could exploit this vulnerability to elevate their privileges and potentially execute arbitrary code.
Affected System
- Apple's iOS and iPadOS software
CVE-2023-42916
- An out-of-bounds read issue that was mitigated with improved input validation.
Affected System
- Safari, iOS, iPadOS, macOS, and Apple Watch Series 4 and later devices running watchOS 10.2
CVE-2023-42917
- A vulnerability that can lead to arbitrary code execution; there are reports of it being exploited against earlier versions of iOS.
Affected System
- Apple's Safari browser, iOS, iPadOS, and macOS Sonoma systems
Recommended Actions for Users:
To mitigate these risks, users should take immediate action:
- Update Software: Ensure all your devices are running the most current version of their operating systems. Regular updates contain important security patches that fix identified weaknesses.
- Monitor Device Activity: Stay vigilant for anything that does not seem right, such as your devices being accessed by someone other than you.
- Always use strong, distinct passwords and enable two-factor authentication.
- Install and update antivirus and firewall software.
- Avoid downloading applications or clicking links from unknown sources.
Conclusion:
The advisory from CERT-In clearly demonstrates the fundamental need to keep the software on all Apple devices up to date. Consumers should act right away to patch their devices and apply security best practices such as multi-factor authentication and regular system scanning. The advisory arrives just as Apple has released new products in India, such as the iPhone 16 series. As consumers embrace new technologies, it is important that they observe the relevant security precautions. Maintaining good cyber hygiene is critical for protection against emerging threats.
Reference:
- https://www.cert-in.org.in/s2cMainServlet?pageid=PUBVLNOTES02&VLCODE=CIAD-2023-0043
- https://www.cve.org/CVERecord?id=CVE-2023-42916
- https://www.cve.org/CVERecord?id=CVE-2023-42917
- https://www.bizzbuzz.news/technology/gadjets/cert-in-issues-advisory-on-vulnerabilities-affecting-iphones-ipads-and-macs-1337253#google_vignette
- https://www.wionews.com/videos/india-warns-apple-users-of-high-severity-security-risks-in-older-software-761396