Alert: Cybercriminals Target Taxpayers and Banking Customers with Phishing Campaign and Malicious Application
PUBLISHED ON
Mar 15, 2023
New Delhi, March 3, 2023: If you have received a message containing a link that asks you to download an application to avail an Income Tax refund or KYC benefits in the name of the Income Tax Department or a reputed bank, beware!
CyberPeace Foundation and Autobot Infosec Private Limited, along with academic partners under the CyberPeace Center of Excellence (CCoE), recently conducted five studies on phishing campaigns circulating on the internet that use misleading tactics to convince users to install malicious applications on their devices. The first campaign impersonates the Income Tax Department, while the others impersonate ICICI Bank, State Bank of India, IDFC Bank, and Axis Bank respectively. The phishing campaigns aim to trick users into divulging their personal and financial information.
After a detailed study, the research team found that:
All the campaigns pose as offers from reputed entities but are hosted on third-party domains rather than the official websites of the Income Tax Department or the respective banks, which raises suspicion.
The applications request several device permissions, and some ask users to grant full control of the device. Allowing such permissions could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications.
Cybercriminals created malicious applications using icons that closely resemble those of legitimate entities with the intention of enticing users into downloading the malicious applications.
The applications collect users’ personal and banking information. Falling into this type of trap could lead users to face significant financial losses.
While investigating the application impersonating the Income Tax Department, the research team identified that the application sends HTTP traffic to a remote server that acts as its Command and Control (CnC/C2) server.
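The domain mismatch and cleartext traffic described above are red flags that can be checked mechanically. Below is a minimal illustrative sketch, not the research team's tooling: the endpoint URLs are hypothetical stand-ins for the kind of traffic observed, and the only assumed official domain is the Income Tax Department's incometax.gov.in.

```python
from urllib.parse import urlparse

def flag_suspicious(url, official_domains):
    """Flag endpoints that use cleartext HTTP or point at non-official domains."""
    parsed = urlparse(url)
    reasons = []
    if parsed.scheme != "https":
        reasons.append("cleartext HTTP")
    host = parsed.hostname or ""
    # Accept the official domain itself or any of its subdomains.
    if not any(host == d or host.endswith("." + d) for d in official_domains):
        reasons.append("non-official domain")
    return reasons

# Hypothetical endpoints observed in traffic capture (not the real C2 addresses):
observed_endpoints = [
    "http://example-refund-app.top/api/upload",
    "https://incometax.gov.in/refund",
]

official = {"incometax.gov.in"}
for url in observed_endpoints:
    print(url, "->", flag_suspicious(url, official) or "ok")
```

Either red flag alone, a non-official domain or unencrypted traffic, is enough to warrant distrusting the app.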
Customers who want to avail benefits or refunds from their banks download these apps believing they will help. However, they are not always aware that the app may be fraudulent.
“The research highlights the importance of being vigilant while browsing the internet and not falling prey to such phishing attacks. It is crucial to be cautious when clicking on links or downloading attachments from unknown sources, as they may contain malware that can harm the device or compromise the data,” a CyberPeace spokesperson added.
In an earlier report released last month, the same research team had drawn attention to WhatsApp messages masquerading as an offer from Tanishq Jewellers, with links luring unsuspecting users with the promise of free Valentine’s Day presents.
CyberPeace Advisory:
The Research team recommends that people avoid opening such messages sent via social platforms, and always think before clicking on links or downloading attachments from unauthorised sources.
Avoid downloading applications from third-party sources; use the official app store instead. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for developers and review each app before it is published.
Even if you download the application from an authorised source, check the app’s permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it’s best to avoid it.
Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications and could lead users to financial loss.
Do not share confidential details such as credentials or banking information in response to such phishing messages.
Never share or forward fake messages containing links on any social platform without proper verification.
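The permission check recommended in the advisory can be illustrated with a short sketch. This is not the research team's tooling: the permission names below are real Android permissions, but the sensitivity notes and the sample app's permission list are hypothetical, chosen to show the idea.

```python
# Android permissions that expose sensitive data or device control,
# with a short note on why each one warrants a second look.
SENSITIVE_PERMISSIONS = {
    "android.permission.READ_SMS": "read text messages (incl. bank OTPs)",
    "android.permission.RECORD_AUDIO": "record via the microphone",
    "android.permission.CAMERA": "access the camera",
    "android.permission.READ_CONTACTS": "read the contact list",
    "android.permission.SYSTEM_ALERT_WINDOW": "draw over other apps (overlay phishing)",
    "android.permission.REQUEST_INSTALL_PACKAGES": "install further packages",
}

def risky_permissions(requested):
    """Return the requested permissions that warrant a second look."""
    return {p: SENSITIVE_PERMISSIONS[p] for p in requested if p in SENSITIVE_PERMISSIONS}

# Hypothetical permission list for a fake "refund" app, as shown on a store page:
requested = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.SYSTEM_ALERT_WINDOW",
]
for perm, why in risky_permissions(requested).items():
    print(f"WARNING: {perm} -> {why}")
```

A legitimate refund or KYC workflow has no need to read SMS messages or draw over other apps; if an app requests such permissions, it is best avoided.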
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the MO is to help the mind in the present develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harm agent, inoculation theory seeks to teach people fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may need continuous exposure to fact-checks to prevent erroneous beliefs from re-forming, implying that a single Debunking may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may cause certain misinformation to go unchecked, perhaps leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, a situation in which misinformation spreads like a virus while the debunking antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to learn whether circulating claims are accurate through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can immunise receivers against subsequent exposure to misinformation and empower people to build the competencies needed to detect it.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms may promote Prebunking materials and ensure that algorithms prioritise their distribution, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver proper corrections. This can help both Prebunking and Debunking efforts reach a larger or more targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking corrects particular pieces of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.
A viral video claiming to show the crash site of Air India Flight AI-171 in Ahmedabad has misled many people online. The video has been confirmed not to be from India or a recent crash; it was filmed at Universal Studios Hollywood, on a permanent movie set built to look like a plane crash.
Claim:
A video that purportedly shows the wreckage of Air India Flight AI-171 after crashing in Ahmedabad on June 12, 2025, has circulated among social media users. The video shows a large amount of aircraft wreckage as well as destroyed homes and a scene reminiscent of an emergency, making it look genuine.
Fact check:
In our research, we took screenshots from the viral video and ran them through a reverse image search, which matched visuals from Universal Studios Hollywood. It became apparent that the video actually shows the famous “War of the Worlds” set at Universal Studios Hollywood. The set features a 747 crash scene that was constructed permanently for Steven Spielberg’s 2005 film, dressed with fake smoke, scattered debris, and additional mock structures built to depict a larger crisis. Multiple older YouTube videos of the Universal Studios Hollywood tour show the same Boeing 747 crash site built for the movie.
The Universal Studios Hollywood tour includes a visit to a staged crash site featuring a Boeing 747, which has unfortunately been misused in viral posts to spread false information.
While doing this research, we located imagery showing that the viral video and the Universal Studios tour footage are an exact match, verifying that the video has no connection to the Ahmedabad incident. A side-by-side comparison tells us all we need to know to uncover the truth.
Conclusion:
The viral video claiming to show the aftermath of the Air India crash in Ahmedabad is entirely misleading and false. The video shows a fictitious movie set at Universal Studios Hollywood, not a real disaster scene in India. Spreading misinformation like this can create unnecessary panic and confusion in sensitive situations. We urge viewers to trust only verified news and double-check claims before sharing any content online.
Over the last decade, battlefields have spread from mountains, deserts, jungles, seas, and the skies into the invisible networks of code and cables. Cyberwarfare is no longer a distant possibility but today’s reality. The cyberattacks on Estonia in 2007, the crippling of Iran’s nuclear program by the Stuxnet virus, and the SolarWinds and Colonial Pipeline breaches in recent years have proved one thing: nations can now paralyze economies and infrastructures without firing a bullet. Cyber operations often fall below the traditional threshold of war, allowing aggressors to exploit the grey zone where full-scale retaliation is unlikely.
At the same time, this ambiguity has also given rise to the concept of cyber deterrence. It is a concept that has been borrowed from the nuclear strategies during the Cold War era and has been adapted to the digital age. At the core, cyber deterrence seeks to alter the adversary’s cost-benefit calculation that makes attacks either too costly or pointless to pursue. While power blocs like the US, Russia, and China continue to build up their cyber arsenals, smaller nations can hold unique advantages, most importantly in terms of their resilience, if not firepower.
Understanding the concept of Cyber Deterrence
Deterrence, in its classic sense, is about preventing action through the fear of consequences. It usually manifests in four mechanisms as follows:
Punishment by threatening to impose costs on attackers, whether by counter-attacks, economic sanctions, or even conventional forces.
Denial, by making attacks futile through hardened defences and by ensuring that systems can resist, recover, and continue to function.
Entanglement by leveraging interdependence in trade, finance, and technology to make attacks costly for both attackers and defenders.
Norms, by stigmatizing reckless cyber actions and imposing reputational costs that can exceed any gains.
However, great powers have always emphasized the importance of punishment as a tool to showcase their power by employing offensive cyber arsenals to instill psychological pressure on their rivals. Yet in cyberspace, punishment has inherent flaws.
The Advantage of Asymmetry
For small states, a smaller geographical size can be turned into a benefit. This offers three advantages:
With fewer critical infrastructures to protect, resources can be concentrated. For example, Denmark, with a modest cyber budget of around $40 million, is considered to be among the most cyber-secure nations, even as the US spends billions.
Smaller bureaucracies enable faster response. Singapore’s centralised cyber command enables rapid coordination between the government and the private sector.
Smaller countries with smaller populations can foster higher public awareness of, and participation in, cyber hygiene, amplifying national resilience.
In short, defending a small digital fortress can be easier than securing a sprawling empire of interconnected systems.
Lessons from Estonia and Singapore
The 2007 crisis in Estonia remains a case study in cyber resilience. Although its government, banking, and media websites were knocked offline, Estonia emerged stronger by investing heavily in cyber defence. It went on to host NATO’s Cooperative Cyber Defence Centre of Excellence and to build one of the world’s most resilient e-governance models.
Singapore is another case: recognising its vulnerability as a global financial hub, it has adopted a defence-centric deterrence strategy focused on redundancy, cyber education, and international partnerships rather than offensive capacity. These approaches show that deterrence is not always about scaring attackers with retaliation; it is about making attacks meaningless.
Cyber deterrence and Asymmetric Warfare
Cyber conflict is often understood through the lens of asymmetric warfare, where weaker actors use unconventional means to exploit stronger foes. Just as guerrillas outmanoeuvred superpowers in Vietnam and Afghanistan, small states can frustrate cyber giants by turning their size into a shield. The essence of asymmetric cyber defence lies in three principles:
Resilience over retaliation by ensuring a rapid recovery to neutralise the goals of the attackers.
Smart investment, by focusing limited budgets on critical assets rather than sprawling infrastructures.
Leveraging norms, by shaping international opinion to stigmatize aggressors and increase their reputational costs.
This transforms cyber deterrence into a game of endurance rather than escalation, a domain in which small states can excel.
Challenges remain, however. Attribution problems persist, and smaller nations still depend on foreign technology, which adversaries have sought to exploit. Talent shortages also plague small states, as cyber professionals migrate abroad for more lucrative jobs. Moreover, building deterrence through norms requires active multilateral cooperation, which not all small nations can sustain.
Conclusion
Cyberwarfare represents a new frontier of asymmetric conflict where size guarantees neither safety nor supremacy. Great powers have often dominated offensive cyber arsenals, but small states have carved their own path towards security by focusing on defence, resilience, and international collaboration. The examples of Singapore and Estonia demonstrate that a state’s small size can be a hidden strength in a domain like cyberspace, allowing nimbleness, concentration of resources, and societal cohesion. In the long run, cyber deterrence for small states will rest not on fearsome retaliation but on making attacks futile and recovery inevitable.