iOS Lockdown Mode Feature: The Cyber Bouncer for Your iPhone!
Introduction
Your iPhone isn’t just a device: it’s a central hub for almost everything in your life. From personal photos and videos to sensitive data, it holds it all. You rely on it for essential services, from personal to official communications, sharing of information, banking and financial transactions, and more. With so much critical information stored on your device, protecting it from cyber threats becomes essential. This is where the iOS Lockdown Mode feature comes in as a digital bouncer to keep cyber crooks at bay.
Apple introduced Lockdown Mode in 2022 as an optional security feature available on iPhone, iPad, and Mac devices. It works as an extreme, opt-in protection mechanism for the small segment of users who face a heightened risk of being targeted by sophisticated cyber threats and intrusions into their digital security. Journalists, activists, government officials, celebrities, cybersecurity professionals, law enforcement personnel, and lawyers are among the intended beneficiaries of the feature. The data on their devices can be highly confidential, and a leak or compromise can cause serious disruption. Given how prevalent cyber attacks are in this day and age, the need for such a feature cannot be overstated. Lockdown Mode provides an additional layer of defence by limiting certain functions of the device, thereby reducing the chances of the user being compromised in a digital attack.
How to Enable Lockdown Mode in Your iPhone
On an iPhone running iOS 16 or later, go to Settings > Privacy & Security > Lockdown Mode. Tap Turn On Lockdown Mode, read the information about the features that will become unavailable on your device, and if you are satisfied, scroll down and tap Turn On Lockdown Mode again. Your iPhone will restart with Lockdown Mode enabled.
Easy steps to enable lockdown mode are as follows:
- Open the Settings app.
- Tap Privacy & Security.
- Scroll down, tap Lockdown Mode, then tap Turn On Lockdown Mode.
How Lockdown Mode Protects You
Lockdown Mode is a security feature that restricts certain apps and features when enabled. For example, your device will not automatically join insecure Wi-Fi networks and will disconnect from an insecure network when Lockdown Mode is activated. Many other features may be affected because the system prioritises security over typical operational convenience. Since Lockdown Mode restricts certain features and activities, you can exclude a particular app, or a particular website in Safari, from its restrictions. Only exclude trusted apps or websites, and only when necessary.
References:
- https://support.apple.com/en-in/105120
- https://www.business-standard.com/technology/tech-news/apple-lockdown-mode-what-is-it-and-how-it-prevents-spyware-attacks-124041200667_1.html
Overview:
‘Kia Connect’ is the companion application for ‘Kia’ cars that allows users to control various parameters of the vehicle from their smartphones. Researchers found vulnerabilities affecting most Kias built after 2013, with few exceptions. Most of the risks derive from a flawed API that handles dealer relations and vehicle coordination.
Technical Breakdown of Exploitation:
- API Exploitation: The attack exploits vulnerabilities in Kia’s dealership network. The researchers found that impersonating a dealer and registering on the Kia dealer portal was sufficient to derive the access tokens needed for the next steps.
- Accessing Vehicle Information: A license plate number allowed the attackers to obtain the Vehicle Identification Number (VIN) of a target car. The VIN can then be used to look up further information about the vehicle and is the key identifier for the steps that follow.
- Information Retrieval: With the VIN in hand, attackers can issue a number of backend requests to pull sensitive information about the car owner, including:
- Name
- Email address
- Phone number
- Geographical address
- Modifying Account Access: With this information, attackers could change the account settings to add themselves as a second user on the car, hidden from the actual owner of the account.
- Executing Remote Commands: The researchers also discovered that attackers could remotely execute various commands on the vehicle, including:
- Unlocking doors
- Starting the engine
- Tracking the vehicle’s location
- Honking the horn
Technical Execution:
The researchers demonstrated that an attacker could execute a series of four requests to gain control over a Kia vehicle:
- Generate Dealer Token: The attacker sends an HTTP request in order to create a dealer token.
- Retrieve Owner Information: Using the generated token, they make a request to another endpoint that returns the owner’s email address and phone number.
- Modify Access Permissions: The attacker uses the leaked information (email address and VIN) to alter account access and add themselves as a second user.
- Execute Commands: Finally, they send commands to perform actions on the targeted vehicle.
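The four-request chain described above can be sketched in code. This is a purely illustrative sketch: every endpoint path, parameter name, and field below is a hypothetical placeholder and does not reflect the real Kia Connect or dealer API. The functions only construct request descriptions rather than sending anything.

```python
# Illustrative sketch of the four-request attack chain. All endpoint
# paths, parameters, and field names are hypothetical placeholders;
# they do NOT reflect the actual Kia APIs.

PORTAL = "https://dealer.example-portal.invalid"  # placeholder host

def build_dealer_token_request():
    """Step 1: register as a 'dealer' to obtain an access token."""
    return {
        "method": "POST",
        "url": f"{PORTAL}/api/register",
        "body": {"role": "dealer", "name": "attacker"},
    }

def build_owner_lookup_request(token, vin):
    """Step 2: use the dealer token to fetch owner contact details by VIN."""
    return {
        "method": "GET",
        "url": f"{PORTAL}/api/owners?vin={vin}",
        "headers": {"Authorization": f"Bearer {token}"},
    }

def build_add_user_request(token, vin, attacker_email):
    """Step 3: add the attacker as a hidden secondary user on the vehicle."""
    return {
        "method": "POST",
        "url": f"{PORTAL}/api/vehicle/users",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": {"vin": vin, "email": attacker_email, "role": "secondary"},
    }

def build_command_request(token, vin, command):
    """Step 4: issue a remote command (unlock, start, locate, honk)."""
    return {
        "method": "POST",
        "url": f"{PORTAL}/api/vehicle/command",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": {"vin": vin, "command": command},
    }
```

The structural lesson is visible in step 1: because the portal did not verify dealer identity at registration, every subsequent request inherits unauthorised trust from that single token.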
Security Response and Precautionary Measures for Vehicle Owners
- Regular Software Updates: Car owners should make sure their vehicles receive the latest software updates provided by the manufacturer.
- Use Strong Passwords: Kia Connect account owners should create unique, complex passwords and update them periodically. They should avoid easily guessed choices such as birth dates or vehicle numbers.
- Enable Multi-Factor Authentication: Vehicle owners should turn on secondary authentication wherever it is available to protect against unauthorised account access.
- Limit Personal Information Sharing: Vehicle owners should be careful about sharing details linked to their car’s account, such as the associated email address or phone number, on social networks.
- Monitor Account Activity: It is also important to monitor account activity for unauthorised changes or access attempts. If anything abnormal or suspicious occurs while using the car, report it to Kia customer support.
- Educate Yourself on Vehicle Security: Stay aware of cyber threats targeting vehicles and learn how to safeguard a vehicle against them.
- Consider Disabling Remote Features When Not Needed: If remote features are not in use, it is better to turn them off and re-enable them when needed. This helps shrink the attack surface available to would-be hackers.
Industry Implications:
The findings from this research underscore broader issues within automotive cybersecurity:
- Web Security Gaps: Many car manufacturers pay more attention to the equipment running in their automobiles than to the security of the web services those cars depend on, leaving connected vehicles highly exposed to risk.
- Continued Risks: As vehicles become increasingly connected to internet technologies, automakers will have to build cybersecurity measures into their cars from the outset.
Conclusion:
The weaknesses found in Kia’s connected car system are a key concern for automotive security. Since cars now depend on web connections for core services, manufacturers face corresponding risks and need to create effective safeguards. Kia acted promptly to tighten security after the disclosure; however, new threats will keep emerging in this dynamic domain of connected technology. With growing awareness of these risks, it is important for carmakers not only to put proper security measures in place but also to maintain communication with customers about how their information and vehicles are protected against cyber dangers. As automotive technology advances at this pace, its safety depends on our capacity to shield it from ever-present cyber threats.
Reference:
- https://timesofindia.indiatimes.com/auto/cars/hackers-could-unlock-your-kia-car-with-just-a-license-plate-is-yours-safe/articleshow/113837543.cms
- https://www.thedrive.com/news/hackers-found-millions-of-kias-could-be-tracked-controlled-with-just-a-plate-number
- https://www.securityweek.com/millions-of-kia-cars-were-vulnerable-to-remote-hacking-researchers/
- https://news24online.com/auto/kia-vehicles-hack-connected-car-cybersecurity-threat/346248/
- https://www.malwarebytes.com/blog/news/2024/09/millions-of-kia-vehicles-were-vulnerable-to-remote-attacks-with-just-a-license-plate-number
- https://informationsecuritybuzz.com/kia-vulnerability-enables-remote-acces/
- https://samcurry.net/hacking-kia

Executive Summary:
A photoshopped image circulating online suggests Prime Minister Narendra Modi met with militant leader Hafiz Saeed. The actual photograph features PM Modi greeting former Pakistani Prime Minister Nawaz Sharif during a surprise diplomatic stopover in Lahore on December 25, 2015.
The Claim:
A widely shared image on social media purportedly shows PM Modi meeting Hafiz Saeed, a declared terrorist. The claim implies that PM Modi is aligned with terrorists or acting against India’s interests.

Fact Check:
Through our research and a reverse image search, we found that the Press Information Bureau (PIB) had tweeted about the visit on 25 December 2015, noting that PM Narendra Modi was warmly welcomed by then-Pakistani PM Nawaz Sharif in Lahore. The tweet included several images of the original meeting between Modi and Sharif from various angles. On the same day, PM Modi also posted a tweet stating he had spoken with Nawaz Sharif and extended birthday wishes. Additionally, there are no credible reports of any meeting between Modi and Hafiz Saeed, further confirming that the viral image is digitally altered.


In our further research, we found the identical photo with former Pakistani Prime Minister Nawaz Sharif in the place occupied by Hafiz Saeed in the viral version. The post was shared by Hindustan Times on X on 26 December 2015, further indicating that the viral image has been manipulated.
Conclusion:
The viral image claiming to show PM Modi with Hafiz Saeed is digitally manipulated. A reverse image search and official posts from the PIB and PM Modi confirm the original photo was taken during Modi’s visit to Lahore in December 2015, where he met Nawaz Sharif. No credible source supports any meeting between Modi and Hafiz Saeed, clearly proving the image is fake.
- Claim: Debunking the Edited Image Claim of PM Modi with Hafiz Saeed
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals rapidly amplifies the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to factors like large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing, and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread as the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation on many topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation are sent by a small percentage of a given user base. The researchers collected 10 months of data from Twitter (now X), comprising 2,397,388 tweets flagged as having low credibility, sent by 448,103 users, along with details on who was sending them. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] The study concludes that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an impact the researchers attribute to these superspreaders.
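The concentration the study reports can be illustrated with a small sketch that computes what share of flagged tweets the most active accounts produced. The counts below are synthetic, made up for illustration; they are not the study’s data.

```python
# Compute the share of low-credibility tweets posted by the top-k most
# active accounts, illustrating the "superspreader" concentration the
# study describes. The example counts are synthetic, not real data.

def top_k_share(tweets_per_account, k):
    """Fraction of all tweets posted by the k most active accounts."""
    counts = sorted(tweets_per_account, reverse=True)
    return sum(counts[:k]) / sum(counts)

# Synthetic example: three very active accounts, many occasional ones.
counts = [5000, 3000, 2000] + [10] * 100
print(round(top_k_share(counts, 3), 2))  # prints 0.91
```

Even in this toy example, 3 accounts out of 103 produce over 90% of the volume, which is the same shape of skew as the study’s finding that 10 accounts produced roughly a third of flagged tweets.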
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of election-related misinformation, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, as any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (now X). Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "Disinformation Dozen," a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that did not come from the original source but was propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users need not be the ones creating or deliberately popularising the misinformation, but their broad reach exposes far more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to lend their schemes credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can latch onto trending topics or hashtags to inject misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can also contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside closed online groups. There are reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed in coordinated misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and how they influence information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
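One simple family of detection heuristics the discussion above alludes to is flagging accounts whose posting rate or content duplication is implausible for a human. The sketch below is a minimal illustration; the threshold values are assumptions chosen for the example, not validated detection parameters.

```python
from collections import Counter

# Minimal bot heuristic: flag accounts that post implausibly fast or
# repeat the same content heavily. Thresholds are illustrative
# assumptions, not validated values from any real detection system.

def looks_like_bot(timestamps, posts, max_per_hour=30, max_dup_ratio=0.5):
    """Flag an account if its average posting rate exceeds max_per_hour,
    or if more than max_dup_ratio of its posts are exact duplicates.

    timestamps: post times in seconds; posts: the post texts.
    """
    if len(posts) < 2:
        return False
    hours = max((max(timestamps) - min(timestamps)) / 3600, 1e-9)
    rate = len(posts) / hours
    most_common_count = Counter(posts).most_common(1)[0][1]
    dup_ratio = most_common_count / len(posts)
    return rate > max_per_hour or dup_ratio > max_dup_ratio

# A burst of 100 identical posts within about an hour trips both checks.
ts = [i * 36 for i in range(100)]
print(looks_like_bot(ts, ["same link"] * 100))  # prints True
```

Real platforms combine many more signals (account age, network structure, coordination between accounts), but the rate-and-duplication idea above is the core of the simplest detectors.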
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting prebunking and debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check claims in order to debunk misinformation. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material, and should familiarise themselves with fact-checking websites and resources to verify information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders’ identities and minimise the creation of fraudulent or malicious accounts. This is imperative for weeding out suspicious social media accounts, misinformation superspreader accounts, and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake, or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to expose users to varied viewpoints, detecting malevolent bots that spread misleading information, and implementing prebunking and debunking strategies. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable users to critically analyse information, verify sources, and report suspect content. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts, and the government to combat misinformation warfare on the Internet.
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html