Privacy vs. Regulation: Apple’s Advanced Data Protection Rollback in the UK
Sharisha Sahay
Research Analyst - Policy & Advocacy, CyberPeace
PUBLISHED ON
Mar 12, 2025
Apple has recently withdrawn the Advanced Data Protection feature for its customers in the UK. This was done in response to a request from the UK’s Home Office, which demanded access to encrypted data stored in Apple’s cloud service under powers granted by the Investigatory Powers Act (IPA). The Act compels firms to provide information to law enforcement. The move and its fallout have raised concerns, bringing out differing perspectives on the balance between privacy and security and on how governments and tech firms should work together.
What is Advanced Data Protection?
Advanced Data Protection is an opt-in feature, meaning it is not enabled by default. It is Apple’s strongest data protection tool, providing end-to-end encryption for the data categories the user chooses to protect. This goes beyond the standard (default) encryption that Apple applies to photos, backups, and notes, among other things. The flip side of such a strong security feature, from a user’s perspective, is that if the account holder loses access to their Apple account, they lose their data as well, since there are no recovery paths.
In withdrawing the feature, Apple has halted new sign-ups immediately, and the company is working on removing existing users’ access at a later date (yet to be confirmed). For UK users who had not enabled the feature, nothing changes. Those who now try to enable it, however, are met with a notification on the Advanced Data Protection settings page stating that the feature can no longer be turned on. There is also no clarity on what will happen to the data of UK users who had already enabled the feature, since even Apple has no access to it. It is important to note that withdrawing the feature does not in itself ensure compliance with the Investigatory Powers Act (IPA), as the Act applies to tech firms worldwide that serve the UK market. Apple has previously refused similar requests for data access in the US.
Apple’s Stand on Encryption and Government Requests
The tech giant has resisted court orders before, rejecting requests (made in 2016 and 2020) to write software that would allow officials to unlock iPhones used by gunmen. The UK Home Office’s demand is reportedly motivated by the role end-to-end encryption can play in concealing criminal activity such as child sexual abuse and terrorism, hampering the efforts of security officials to catch perpetrators. Over the years, Apple has repeatedly emphasised its refusal to create a backdoor into its encrypted data, arguing that once such a pathway exists, the data becomes more vulnerable to attackers. The Salt Typhoon attack on US telecommunications systems is a recent example that alarmed officials, who now encourage the use of end-to-end encryption. Beyond this, such requests could set a dangerous precedent for how tech firms and governments operate together. The episode comes against the backdrop of the Paris AI Action Summit, where US Vice President J.D. Vance raised concerns regarding regulation. As per reports, Apple has now filed a legal complaint with the Investigatory Powers Tribunal, the UK’s judicial body that handles complaints about the use of surveillance powers by public authorities.
The Broader Debate on Privacy vs. Security
This standoff raises critical questions about how tech firms and governments should collaborate without compromising fundamental rights. Striking the right balance between privacy and regulation is imperative, ensuring security concerns are addressed without dismantling individual data protection. The outcome of Apple’s legal challenge against the IPA may set a significant precedent for how encryption policies evolve in the future.
Phone farms are setups in which many phones are operated collectively. They are often used for deceptive purposes: performing repeated actions in high volumes very quickly to achieve a particular goal. These goals can include faking popularity by inflating views, likes, comments, and follower counts, or creating the illusion of legitimate activity through automated app downloads, ad views, clicks, registrations, installations, and in-app engagement.
A phone farm is a network through which cybercriminals exploit mobile incentive programs by using multiple phones to perform the same actions repeatedly. This can lead to misattribution and inflated marketing spend. Phone farming involves exploiting paid-to-watch apps or other incentive-based programs across dozens of phones to increase the total amount earned. The term also covers operations that orchestrate dozens or hundreds of phones to engineer a particular outcome, such as improving restaurant ratings or App Store Optimization (ASO). Companies constantly update their platforms to combat phone farming, but it is nearly impossible to prevent people from exploiting such services for their own benefit.
How Do Phone Farms Work?
A phone farm is a collection of connected smartphones or mobile devices used for automated tasks, typically controlled remotely by software. These devices are often used for advertising, monetization, and artificially inflating app ratings or social media engagement. The software used in phone farms is typically a bot or script that interacts with each device’s operating system and installed apps. The phone farm operator connects the devices to the Internet via wired or wireless networks, VPNs, or other remote access software. Once the software is installed, the operator can use a web-based interface or command-line tool to schedule tasks and monitor device status to ensure proper operation.
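The control pattern described above — one central controller dispatching identical tasks to many connected devices — can be sketched in a few lines. This is a minimal, hypothetical illustration of the legitimate form of the pattern (running automated tests across a device fleet, as discussed later); the device IDs and task names are invented for the example.

```python
# Minimal sketch of a central controller dispatching tasks to a fleet
# of devices. All names (Device, FarmController, "emulator-0") are
# hypothetical; real systems would talk to devices over ADB, VPNs, etc.
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    status: str = "idle"                      # "idle" or "busy"
    completed: list = field(default_factory=list)

class FarmController:
    """Central controller that schedules the same task across devices."""
    def __init__(self, devices):
        self.devices = devices

    def dispatch(self, task):
        """Send a task to every idle device and return who ran it."""
        ran_on = []
        for dev in self.devices:
            if dev.status == "idle":
                dev.status = "busy"
                dev.completed.append(task)    # stand-in for real device I/O
                dev.status = "idle"
                ran_on.append(dev.device_id)
        return ran_on

devices = [Device(f"emulator-{i}") for i in range(3)]
controller = FarmController(devices)
print(controller.dispatch("install_app"))     # → ['emulator-0', 'emulator-1', 'emulator-2']
```

The same one-to-many fan-out is what makes the pattern attractive to abusers: a single scheduled task becomes dozens or hundreds of identical actions.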
Modus Operandi Behind Phone Farms
Phone farms have proliferated with the growing reach of the Internet and the availability of bots. Phone farmers use many phones simultaneously to perform illegitimate activity and simulate high engagement numbers. Applications range from ‘watching’ movie trailers and clicking on ads to posting fake ratings and creating false engagement. When phone farms drive up ‘engagement actions’ on social media through mass likes and shares, they help perpetuate false narratives. Through phone click farms, bad actors also earn money on each ad or video watched; phone farmers often describe this as a side hustle. Click farms may be structured as companies providing digital engagement services or as individual operations multiplying clicks for various objectives. Some operate at a much larger scale, with thousands of workers and billions of daily clicks, impressions, and engagements.
The Legality of Phone Farms
The question of the legality of phone farms presents a conundrum. Phone farms also have legitimate applications in software development and market research, enabling developers to test applications across many devices and operating systems simultaneously. However, they are more typically employed for dubious purposes such as social media manipulation, generating fake clicks on online ads, spamming, spreading misinformation, and facilitating cyberattacks, and such use cases constitute illegal and unethical behaviour.
The use of the technology to misrepresent information for nefarious intents is illegitimate and unethical. Phone farms are famed for violating the terms of the apps they use to make money by simulating clicks, creating multiple fake accounts and other activities through multiple phones, which can be illegal.
Furthermore, should any entity misrepresent its image, products, or services through fake reviews or ratings obtained via bots and phone farms, creating deliberately false impressions for consumers, this is to be considered an unfair trade practice and may attract liability.
CyberPeace Policy Recommendations
CyberPeace advocates for truthful and responsible use of technology and the Internet. Businesses are encouraged to refrain from using such unethical methods to gain an advantage or mimic popularity online, and must avoid any actions that misrepresent information and/or cause injury to consumers, including online users. The ethical implications of phone farms cannot be ignored: they erode public trust in digital platforms and contribute to a climate of online deception. Law enforcement agencies and regulators are encouraged to keep a check on illegal use of mobile devices by cybercriminals to commit cyber crimes. Tech and social media platforms must implement monitoring and detection systems that analyse unusual behaviour on their platforms, looking for suspicious bot activity or phone farming groups. To stay protected from sophisticated threats and ensure a secure online experience, netizens are encouraged to follow cybersecurity best practices and verify information from authentic sources.
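One simple form the recommended monitoring could take is rate-based outlier detection: devices whose activity counts dwarf those of the general population get flagged for review. The sketch below is illustrative only — the data, the median-based rule, and the `multiplier` threshold are assumptions for the example; production systems combine many more signals.

```python
# Illustrative rate-based detector: flag devices whose action count is
# far above the population median (medians resist distortion by the
# very outliers we are hunting). Data and threshold are hypothetical.
from statistics import median

def flag_suspicious(action_counts, multiplier=10):
    """Return device IDs whose count exceeds `multiplier` x the median."""
    med = median(action_counts.values())
    return [dev for dev, count in action_counts.items()
            if count > multiplier * med]

# Typical users click a handful of ads per hour; a farmed device clicks hundreds.
hourly_clicks = {"user-1": 4, "user-2": 6, "user-3": 5, "farm-device": 500}
print(flag_suspicious(hourly_clicks))  # → ['farm-device']
```

Using the median rather than the mean is a deliberate choice here: a single farmed device with hundreds of clicks would drag a mean-based threshold upward and hide itself.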
Final Words
Phone farms have the ability to generate massive amounts of social media interactions, capable of performing repetitive tasks such as clicking, scrolling, downloading, and more in very high volumes in very short periods of time. The potential for misuse of phone farms is higher than the legitimate uses they can be put to. As technology continues to evolve, the challenge lies in finding a balance between innovation and ethical use, ensuring that technology is harnessed responsibly.
Old footage of Indian cricketer Virat Kohli celebrating Ganesh Chaturthi in September 2023 has been promoted as footage of him at the Ram Mandir inauguration. The video surfaced with the false claim that it shows Kohli at the Ram Mandir consecration ceremony in Ayodhya on January 22. The Hindi newspaper Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar also carried the now-viral video in their January 23, 2024 editions, amplifying the false claim. A thorough investigation found that the video is old and actually shows the cricketer attending a Ganesh Chaturthi celebration.
Claims:
Many social media posts, including those from news outlets such as Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar, claim to show Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22. Investigation found that the video actually shows him attending Ganesh Chaturthi celebrations in September 2023.
The caption of the Dainik Bhaskar e-paper reads, “क्रिकेटर विराट कोहली भी नजर आए” (“Cricketer Virat Kohli was also seen”).
Fact Check:
The CyberPeace Research Team ran a reverse image search on frames from the video and found several earlier posts featuring the same black outfit. A Bollywood entertainment Instagram profile named Bollywood Society had shared the same video on 20 September 2023 with the caption, “Virat Kohli snapped for Ganapaati Darshan”.
Taking a cue from this, we ran keyword searches with the information at hand and found an article by the Free Press Journal reporting that Virat Kohli had visited the residence of Shiv Sena leader Rahul Kanal to seek the blessings of Lord Ganpati. The viral video and the claim made by the news outlets are therefore false and misleading.
Conclusion:
The viral videos and news reports rely on old footage of Virat Kohli attending Ganesh Chaturthi celebrations in 2023, not footage from the recent Ram Mandir Pran Pratishtha. Notably, we found no confirmation that Virat Kohli attended the ceremony in Ayodhya on 22 January at all. Hence, we found this claim to be fake.
Claim: Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the idea is to help the mind develop resistance now to influence it may encounter in the future. Just as vaccines help the body build resistance to future infections by administering weakened doses of a harmful agent, inoculation theory seeks to help people distinguish fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics such as emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It can be difficult to scale up Prebunking efforts and ensure they reach a large audience. Sustainability is also critical: continuous reinforcement and reminders may be required to ensure that individuals retain the skills and knowledge gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so Prebunking interventions must be flexible and agile, responding promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats emerging frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means it is impossible to protect everyone at all times: at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread widely. Measured by total harm prevented, this reactive method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Debunking may require repeated exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single debunk may not be enough to correct misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may leave some misinformation unchecked, with unpredictable effects. Misinformation on social media can spread and go viral faster than debunking pieces or articles, creating a situation in which misinformation spreads like a virus while the corrective antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to see corrections of inaccurate content through Debunking interventions.
Gamified Inoculation: Tech/social media companies can run gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can immunise users against subsequent exposure and empower them to build the skills needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise have algorithms prioritise the visibility of Debunking content, countering the spread of false information and delivering proper corrections. Together, this can help both Prebunking and Debunking methods reach a larger or better-targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to improve their critical thinking abilities. They can also incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives by collaborating with fact-checking and expert organisations, promoting such initiatives at scale and fighting misinformation through joint efforts.
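The algorithmic-prioritisation recommendation above can be pictured as a small adjustment to a feed-ranking score: verified corrective content receives a boost so it surfaces alongside the engagement-ranked posts it corrects. The sketch below is a simplified, hypothetical model; the labels, scores, and `correction_boost` weight are invented for illustration and real ranking systems are far more complex.

```python
# Hypothetical feed scorer: posts are ranked by engagement score, with
# an additive boost for verified prebunking/debunking material so that
# corrective content is not buried. All values here are made up.
def rank_feed(posts, correction_boost=0.5):
    """Sort posts by engagement score plus a boost for corrective content."""
    def score(post):
        base = post["engagement_score"]
        if post.get("label") in ("prebunking", "debunking"):
            base += correction_boost
        return base
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "viral-claim", "engagement_score": 0.9},
    {"id": "fact-check", "engagement_score": 0.6, "label": "debunking"},
    {"id": "cat-video", "engagement_score": 0.7},
]
print([p["id"] for p in rank_feed(feed)])  # → ['fact-check', 'viral-claim', 'cat-video']
```

With the boost set to zero the fact-check sinks below both other posts, which is the burial problem the recommendation is trying to address.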
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential, and Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions aim to build resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, alongside joint initiatives by tech/social media platforms and expert organizations, can help counter the rising tide of online misinformation and establish a resilient online information landscape.