Empowering the Global South: AI Readiness and the Hyderabad GSEC
Introduction
The inauguration of the Google Safety Engineering Centre (GSEC) in Hyderabad on 18 June 2025 marks a pivotal moment not just for India, but for the entire Asia-Pacific region’s digital future. As only the fourth such centre in the world, after Munich, Dublin, and Málaga, its presence signals a shift towards decentralising AI safety, cybersecurity, and digital trust, and towards a more globalised and inclusive tech ecosystem. India’s digitisation has grown rapidly, bringing millions of first-time internet users online who, depending on their level of awareness, are susceptible to online scams, phishing, deepfakes, and AI-driven fraud. The establishment of GSEC is therefore not just the launch of a facility but a step towards addressing AI readiness, user protection, and ecosystem resilience.
Building a Safer Digital Future in the Global South
The GSEC is set to operationalise the Google Safety Charter, designed around three core pillars: empowering users by protecting them from online fraud, strengthening cybersecurity for government and enterprise, and advancing responsible AI in platform design and execution. This represents a shift from standard reactive safety responses to proactive, AI-driven risk mitigation. The goal is to make safety tools not only effective but tailored to threats unique to the Global South, from multilingual phishing to financial fraud via unofficial lending apps. The centre is expected to stimulate regional cybersecurity ecosystems by creating jobs, fostering public-private partnerships, and enabling collaboration across academia, law enforcement, civil society, and startups. In doing so, it positions Asia-Pacific not as a consumer of standard Western safety solutions but as an active contributor to the next generation of digital safeguards and customised solutions.
Solutions previously piloted by Google include DigiKavach, a real-time fraud detection framework, along with spam protection in mobile operating systems and app-vetting mechanisms. GSEC could aid in scaling and integrating these efforts into systems-level responses in which threat detection, safety warnings, and reporting mechanisms coordinate seamlessly across platforms. This reimagines safety as a core design principle in India’s digital public infrastructure rather than a purely attack-driven response.
CyberPeace Insights
The launch aligns with events such as the AI Readiness Methodology Conference recently held in New Delhi, which brought together researchers, policymakers, and industry leaders to discuss ethical, secure, and inclusive AI implementation. As the world grapples with AI technologies ranging from generative content to algorithmic decision-making, centres like GSEC can play a critical role in defining the safeguards and governance structures that support rapid innovation without compromising public trust and safety. The region’s experiences and innovations in AI governance must shape global norms, and tech firms have a significant role to play in that process. These efforts to build digital infrastructure and the safety centres dedicated to protecting it also resonate with India’s vision of becoming a global leader in AI.
References
- https://www.thehindu.com/news/cities/Hyderabad/google-safety-engineering-centre-india-inaugurated-in-hyderabad/article69708279.ece
- https://www.businesstoday.in/technology/news/story/google-launches-safety-charter-to-secure-indias-ai-future-flags-online-fraud-and-cyber-threats-480718-2025-06-17?utm_source=recengine&utm_medium=web&referral=yes&utm_content=footerstrip-1&t_source=recengine&t_medium=web&t_content=footerstrip-1&t_psl=False
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://blog.google/intl/en-in/company-news/googles-safety-charter-for-indias-ai-led-transformation/
- https://economictimes.indiatimes.com/magazines/panache/google-rolls-out-hyderabad-hub-for-online-safety-launches-first-indian-google-safety-engineering-centre/articleshow/121928037.cms?from=mdr
Related Blogs
Introduction
Bumble’s launch of its ‘Opening Move’ feature has sparked a new narrative on safety and privacy within the digital dating sphere and has garnered mixed reactions from users. It was launched against the backdrop of women stating that Bumble’s ‘message first’ policy was proving tedious. Addressing this large-scale feedback, Bumble introduced the ‘Opening Move’ feature, whereby users can either craft or select from pre-set questions that potential matches may choose to answer to start the conversation. These questions are intended as a segue into meaningful, insightful dialogue from the get-go, sidestepping the traditional effort of starting an engaging chat between matched users. The feature is optional, and enabling it does not prevent a user from exercising the autonomy previously in place.
Innovative Approach to Conversation Starters
Many users consider this feature innovative: not only does it act as a catalyst for fluid conversation, it also cultivates insightful dialogue and fosters meaningful interactions free of the constraint of superficial small talk. The ‘Opening Move’ feature may also align with research indicating that individuals form their initial attraction within the first three seconds of interaction, thereby acting as a catalyst in an individual’s decision-making within that attraction time frame.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a unique solution to the localisation challenges faced by apps; the option of writing a personalised ‘Opening Move’ means prompts can be culturally relevant and appropriate to a specific area. Moreover, Bumble may be able to enhance the user experience on the platform through data analysis: responses to an ‘Opening Move’ can provide valuable insights into user preferences and patterns, such as which pre-set prompts garner more responses and how often a user-written ‘Opening Move’ succeeds in obtaining a response compared with Bumble’s pre-set prompts. A quick glance at Bumble’s privacy policy[1] shows that chats between users are not shared with third parties, further safeguarding personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of such review and removal has not been specified, which may raise challenges depending on whether the reviewer is a human or an algorithm.
However, some users perceive the feature as counterproductive to the company’s principle of ‘women make the first move’. While Bumble aims to market the feature as neutral ground for matched users based on the exercise of choice, users see it as a step back into the heteronormative gender expectations that most dating apps conform to, putting the onus of the ‘first move’ on men. Many male users have complained that the feature encourages men to opt out of the dating app, and that they would most likely refrain from interacting with profiles that enable ‘Opening Move’, since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol ‘too much effort’, the pre-set questions of the ‘Opening Move’ feature may actively invite users to categorise potential matches according to arbitrary questions that undermine the real-life experiences, perspectives, and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a malicious user sets a question that indirectly gleans personal or sensitive identifiable information. The individual responding to such a carefully crafted conversation prompt may then be bullied or subjected to hateful slurs.
Safety and Privacy Concerns
Conversely, the appearance of choice may translate into more challenges for women on the platform. The feature may spark an increase in unsolicited, undesirable messages and images from a potential match, and the most vulnerable groups remain individuals who identify as female and other sexual minorities.[4] At present, there appears to be no mechanism to proactively monitor the content of responses, with reliance placed instead on user reporting. This approach may prove impractical given the potential volume of objectionable messages, necessitating a more efficient solution. It should be noted that even when a user does report, the current redressal systems of online platforms remain lax, largely inadequate, and ineffective in addressing user concerns or grievances. This lack of proactiveness violates the right to redressal provided under the Digital Personal Data Protection Act, 2023. The feature may in fact take away the user autonomy Bumble originally aimed to grant, since individuals who identify as introverted, shy, soft-spoken, or non-assertive may refrain from reporting harassing messages altogether, owing to discomfort or a reluctance to engage in confrontation. As a result, a sharp uptick is anticipated in cases of cyberbullying, harassment, and hate speech (especially vulgar communications) directed at both the user and the potential match.
From an Indian legal perspective, dating apps have to adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6], and the Digital Personal Data Protection Act, 2023, which regulate a person’s digital privacy and set standards for the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to mandating intimation of the user’s digital rights. The lack of automated checks, as mentioned above, is likely to make Bumble non-compliant with these guidelines.
The optional nature of ‘Opening Move’ grants users some autonomy; however, some technical updates could enhance the user experience of the feature. Technologies like AI are an effective aid in behavioural and predictive analysis. An upgraded matching algorithm could analyse the number of un-matches a profile receives, thereby identifying and flagging profiles with multiple lapsed matches. Additionally, a filter option in the application’s interface for hiding flagged profiles would enable a user to be cautious while navigating matches. Another possible method of weeding out malicious profiles is a peer-review system whereby a user has a single check-box to flag a profile. Such a check-box would ideally carry no field for personal comments and would simply record whether the profile is most or least likely to bully or harass, ensuring that a binary, precise response is captured and any coloured remarks are avoided. [7]
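To make the proposal concrete, below is a minimal illustrative sketch of such a flagging heuristic in Python. The field names, thresholds, and filter behaviour are assumptions for illustration only and do not reflect Bumble’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real platform would tune these on its own data.
LAPSED_MATCH_THRESHOLD = 5   # un-matches before a profile is flagged
PEER_FLAG_THRESHOLD = 3      # binary "likely to bully/harass" reports before flagging

@dataclass
class Profile:
    profile_id: str
    lapsed_matches: int = 0   # matches that ended in an un-match
    peer_flags: int = 0       # check-box reports from the peer-review system
    flagged: bool = False

def update_flag(profile: Profile) -> Profile:
    """Flag a profile once its lapsed matches or peer reports cross a threshold."""
    profile.flagged = (
        profile.lapsed_matches >= LAPSED_MATCH_THRESHOLD
        or profile.peer_flags >= PEER_FLAG_THRESHOLD
    )
    return profile

def visible_profiles(candidates: list[Profile], hide_flagged: bool) -> list[Profile]:
    """Apply the user-facing filter that hides flagged profiles when enabled."""
    if not hide_flagged:
        return candidates
    return [p for p in candidates if not p.flagged]
```

A user who enables the filter would then only be shown the result of `visible_profiles(candidates, hide_flagged=True)`, keeping the flagging signal binary and free of subjective commentary.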
Governance and Monitoring Mechanisms
From a governance point of view, a monitoring mechanism for the manner in which questions are crafted is critical. Systems should be designed to detect certain words, sentences, and ways of framing questions, so as to disallow prompts contrary to the national legal framework. An on-screen notification with instructions on generally acceptable conversation, reminding users to maintain cyber hygiene while chatting, is also proposed as a mandated requirement for platforms; the notice may also include guidelines on what information is safe to share in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to ‘Opening Moves’, bringing the feature into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
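A minimal sketch of such a screening step is shown below, assuming illustrative keyword patterns and notice text; a production system would rely on curated, jurisdiction-specific lists and human review rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for prompts that solicit personal or sensitive data.
DISALLOWED_PATTERNS = [
    r"\b(home|office)\s+address\b",
    r"\bphone\s*number\b",
    r"\b(aadhaar|passport|pan)\b",   # identity documents
    r"\b(bank|upi)\b",               # financial identifiers
    r"\baccount\s*number\b",
]

CYBER_HYGIENE_NOTICE = (
    "Reminder: avoid sharing contact details, financial information, or "
    "identity documents in your Opening Move."
)

def screen_opening_move(question: str) -> tuple[bool, str]:
    """Return (allowed, notice): disallow prompts that appear to request sensitive data."""
    for pattern in DISALLOWED_PATTERNS:
        if re.search(pattern, question, flags=re.IGNORECASE):
            return False, "This question appears to request personal data and cannot be used."
    return True, CYBER_HYGIENE_NOTICE
```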
Conclusion
Bumble's 'Opening Move' feature marks the company’s ‘statement’ step to address user concerns regarding initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it also raises not only ethical concerns but also concerns over user safety. While the 'Opening Move' feature can potentially enhance user experience, its success is largely dependent on Bumble's ability to effectively navigate the complex issues associated with this feature. A more robust monitoring mechanism that utilises newer technology is critical to address user concerns and to ensure compliance with national laws on data privacy.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics) Rules 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases

Introduction
In the wake of the SpyLoan scandal, more than a dozen malicious loan apps were found on the Google Play Store and downloaded onto Android phones. The real number is likely significantly higher, as these apps are also distributed through third-party marketplaces and questionable websites.
Unmasking the Scam
When a user borrows money, these predatory lending applications capture large quantities of information from the user’s smartphone, which is then used to blackmail and coerce them into repaying the loan at hefty interest rates. While the loan amount is disbursed, these apps request access to the camera, contacts, messages, call logs, images, Wi-Fi network details, calendar information, and other personal data, which is then sent to the loan sharks’ servers.
Researchers have disclosed details about the applications used by loan sharks to mislead consumers, as well as the numerous techniques used to circumvent some of the limitations imposed on the Play Store. The malware is often built with an appealing user interface and promotes simple, rapid access to cash under high-interest repayment conditions. The revelation of the SpyLoan scandal has triggered an immediate response from law enforcement agencies worldwide: with millions of users at risk of falling victim to malicious loan apps, it has become critical for law enforcement to unmask the culprits and dismantle the cybercriminal network.
Apps banned: here is the list of apps removed from the Google Play Store:
- AA Kredit: इंस्टेंट लोन ऐप (com.aa.kredit.android)
- Amor Cash: Préstamos Sin Buró (com.amorcash.credito.prestamo)
- Oro Préstamo – Efectivo rápido (com.app.lo.go)
- Cashwow (com.cashwow.cow.eg)
- CrediBus Préstamos de crédito (com.dinero.profin.prestamo.credito.credit.credibus.loan.efectivo.cash)
- ยืมด้วยความมั่นใจ – ยืมด่วน (com.flashloan.wsft)
- PréstamosCrédito – GuayabaCash (com.guayaba.cash.okredito.mx.tala)
- Préstamos De Crédito-YumiCash (com.loan.cash.credit.tala.prestmo.fast.branch.mextamo)
- Go Crédito – de confianza (com.mlo.xango)
- Instantáneo Préstamo (com.mmp.optima)
- Cartera grande (com.mxolp.postloan)
- Rápido Crédito (com.okey.prestamo)
- Finupp Lending (com.shuiyiwenhua.gl)
- 4S Cash (com.swefjjghs.weejteop)
- TrueNaira – Online Loan (com.truenaira.cashloan.moneycredit)
- EasyCash (king.credit.ng)
- สินเชื่อปลอดภัย – สะดวก (com.sc.safe.credit)
Risks with several dimensions
The SpyLoan applications violate Google's Financial Services policy by unilaterally shortening the repayment period for personal loans to a few days or some other arbitrary time frame. Additionally, the operators threaten users with public embarrassment and exposure if they do not comply with such unreasonable demands.
Furthermore, the privacy notices presented by the SpyLoan apps are misleading. Ostensibly reasonable justifications are offered for obtaining certain permissions, but the underlying practices are highly intrusive. For instance, camera permission is supposedly required for photo uploads for Know Your Customer (KYC) purposes, and access to the user's calendar is supposedly required to schedule payment dates and reminders; yet both permissions are dangerous and can infringe on users' privacy.
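For users who want to vet a lending app before installing it, the sketch below compares the permissions an app requests against permissions a legitimate loan app rarely needs. The permission set and example values are assumptions drawn from the behaviours described above, not an official blocklist; requested permissions can be read from the Play Store listing or, on a test device, from `adb shell dumpsys package <package>`.

```python
# Permissions a legitimate lending app rarely needs (illustrative set only).
SUSPICIOUS_PERMISSIONS = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.READ_EXTERNAL_STORAGE",
    "android.permission.READ_CALENDAR",
}

def assess_loan_app(requested_permissions: set[str]) -> list[str]:
    """Return the requested permissions that look excessive for a loan app."""
    return sorted(requested_permissions & SUSPICIOUS_PERMISSIONS)

# Hypothetical example of an app's requested permissions.
requested = {
    "android.permission.CAMERA",        # plausibly justified for KYC photo upload
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
}
print(assess_loan_app(requested))
# ['android.permission.READ_CONTACTS', 'android.permission.READ_SMS']
```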
Prosecution Strategies and Legal Framework
Law enforcement agencies and legal authorities have initiated prosecution strategies against the individuals involved in the SpyLoan scandal. This multifaceted approach involves international agreements and the exploration of innovative legal avenues. Agencies need to collaborate with international counterparts on cybercrime, leveraging legal frameworks against digital fraud; furthermore, the cross-border nature of the SpyLoan operation requires a strong legal framework for exchanging information, handling extradition requests, and pursuing legal action across multiple jurisdictions.
Legal Protections for Victims: Seeking Compensation and Restitution
As the legal battle unfolds in the aftermath of the SpyLoan scam, the focus shifts towards the victims, who suffer financial loss from such fraudulent apps. Beyond prosecuting the culprits, the pursuit of justice should involve legal safeguards for victims. Existing consumer protection laws serve as a crucial shield for SpyLoan victims, as they are designed to safeguard individuals' rights against unfair practices.
Challenges in legal representation
As the legal pursuit of justice in the SpyLoan scam progresses, it encounters challenges that demand careful navigation and strategic solutions. One of the primary obstacles lies in jurisdictional complexity: it is difficult to determine which authority holds jurisdiction, and prosecuting offenders across different regions requires a unified approach and coordinated effort from multiple government agencies.
Concealing digital identities
Another major challenge is the anonymity afforded by the digital realm, which makes it difficult to identify and catch the perpetrators of the scam. Scammers conceal their identities, making it hard for law enforcement agencies to attribute actions to specific individuals. This challenge can be overcome through joint efforts by international agencies, advanced digital forensics, and cutting-edge technology to unmask the scammers.
Technological challenges
The nature of cyber threats and crime patterns changes day by day as technology advances, and this has become a challenge for legal authorities. Scammers continually exploit new vulnerabilities, making it essential for law enforcement agencies to stay a step ahead, which requires continuous training in cybercrime investigation and cybersecurity.
Shaping the policies to prevent future fraud
As the scam unfolds, it has become important to empower users through sustained awareness campaigns, and app developers need to adopt a transparent approach towards their users.
Conclusion
Shaping policies to prevent future cyber fraud requires a multifaceted approach. Proposals for legislative amendments, international collaboration, accountability measures, technological protections, and public awareness programmes all contribute to the creation of a legal framework that is proactive, flexible, and resilient to cybercriminals' shifting techniques. The legal system is at the forefront of this effort, playing a critical role in developing regulations that will protect the digital landscape for years to come.
Safeguarding against spyware threats like SpyLoan requires vigilance and adherence to best practices. Users should exclusively download apps from official sources, meticulously verify the authenticity of offerings, scrutinize reviews, and carefully assess permissions before installation.

Introduction
Generative Artificial Intelligence, or GenAI, is changing the employee workday: its use is no longer limited to writing emails or debugging code but now extends to analysing contracts, generating reports, and much more. AI tools have become commonplace in everyday work, but the speed at which companies have adopted these technologies has created a new kind of risk. Unlike threats that come from an outside attacker, Shadow AI is created inside an organisation by a legitimate employee who uses unapproved AI tools to make their work more efficient and productive. In many cases, the employee is unaware of the potential security, data privacy, and compliance risks involved in using such tools to perform their job duties.
What Is Shadow AI?
Shadow AI refers to individuals using AI tools or other software at work that are not provided by the company, without the knowledge or permission of the employer. Examples of shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete tasks at the office
- Uploading business-related documents to online AI technologies for analysis or summarisation
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons that are not approved by IT or security personnel
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved, personal AI accounts, that information moves outside the company's controls. This can include employees' own personal information, third-party personal information, private company information such as source code or contracts, and internal strategies. Once data has been entered into such tools, the company loses the ability to monitor how it is stored, processed, or retained. A data leak can therefore occur without any malicious cyberattack: the biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws like GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other relevant sectoral laws require businesses to process data in accordance with the law, to minimise the amount of data they use, and to be accountable for their actions. Shadow AI often results in the unlawful use of personal data due to a lack of a legal basis for the processing, unauthorised cross-border data transfers, and not having appropriate contractual protections in place with their AI service providers. Regulators do not see the convenience of employees as an excuse for not complying with the law, and therefore, the organisation is ultimately responsible for any violations that occur.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case study exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports from engineers revealed they were using ChatGPT to create debugging processes for internal source code and to summarise meeting notes. Consequently, confidential source code related to semiconductors was inadvertently uploaded onto a public AI platform. While there were no known incursions into the company’s system due to this incident, Samsung faced a significant challenge: once sensitive information is input into a public AI tool, it exists on external servers that are outside of the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued a series of internal communications prohibiting the sharing of corporate data with public AI tools, and increased the urgency of their discussions regarding the adoption of secure, enterprise-level AI (artificial intelligence) solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has demonstrated that prohibition rarely works; organisations saw the same pattern when they tried to block cloud storage, instant messaging, and collaboration tools. When employers block AI, employees turn to personal devices and accounts, which leaves employers with no real-time visibility into how these technologies are used and creates friction with security and compliance teams trying to control which tools are permitted. Prohibiting AI will not stop its adoption; it merely makes that adoption less safe and less accountable. The challenge for effective organisations is therefore to move past denial and develop governance-first AI strategies that control how data is used, protected, and secured, rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the IT department; it is a failure of governance, compliance, and law. When unapproved AI tools are used, the organisation may process personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurpose data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceed necessity in breach of data minimisation (Article 5(1)(c)). The same tools frequently involve international data transfers without authorisation, in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the inability to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, Shadow AI therefore cannot be excused as accidental and is not defensible.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers should supply business-approved AI tools that help workers stay productive while maintaining strong protections: data stored separately, no use of employee or corporate data to train models, defined retention periods, and clear rules for deletion. When employees are offered verified, secure AI options that align with their work processes, their reliance on Shadow AI drops significantly.
2. Enforce Zero-Trust Data Access
AI systems must be governed on zero-trust principles: data access is granted only on a least-privilege basis, user identity and context are verified continuously, and context-aware controls monitor and track all activity. This becomes especially important as agent-like AI systems grow more autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
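A minimal sketch of what such a zero-trust gate could look like is shown below; the principal names, datasets, and re-verification window are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVERIFY_AFTER = timedelta(minutes=15)   # assumed policy: re-verify identity every 15 minutes

@dataclass
class AccessRequest:
    principal: str            # human user or AI agent identity
    dataset: str
    action: str               # e.g. "read", "summarise"
    last_verified: datetime   # time of the last successful identity check

# Least privilege: each principal is granted only the datasets and actions its role needs.
GRANTS = {
    ("analyst-42", "sales_reports", "read"),
    ("contract-bot", "contracts", "summarise"),
}

def authorise(req: AccessRequest) -> bool:
    """Allow only explicitly granted actions, and only when verification is still fresh."""
    fresh = datetime.now(timezone.utc) - req.last_verified < REVERIFY_AFTER
    granted = (req.principal, req.dataset, req.action) in GRANTS
    return fresh and granted
```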
3. Apply DLP and Audit Logging
Robust data loss prevention measures are needed to stop sensitive data from leaving the organisation, and a comprehensive audit log should record which user or machine accessed data, when, and how. Combined with other controls, these measures create accountability, support regulatory compliance, and help detect and respond to incidents.
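A minimal sketch of how a DLP check and audit log might sit in front of an AI tool is shown below; the patterns, logger name, and fields are illustrative assumptions, not a production rule set.

```python
import logging
import re
from datetime import datetime, timezone

# Toy DLP patterns; a real deployment would use the organisation's own
# classifiers and secret scanners rather than a few regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_egress_audit")

def check_outbound_prompt(user: str, destination: str, prompt: str) -> bool:
    """Block prompts containing sensitive data and record every decision in the audit log."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    audit_log.info(
        "time=%s user=%s destination=%s allowed=%s matched=%s",
        datetime.now(timezone.utc).isoformat(), user, destination, not hits, hits,
    )
    return not hits   # True means the prompt may be forwarded to the AI tool
```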
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
This new threat exposes an organisation to the risk of data loss through leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI, but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, while ensuring trust, accountability, and effective oversight to protect data and support AI in reaching its full transformative potential. AI use is encouraged, but it must be done responsibly, ethically, and securely.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007