#FactCheck: Viral AI image passed off as Air India Flight AI-171 catching fire after collision
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
To verify the authenticity of the viral image, we conducted a reverse image search and analyzed it with AI-detection tools such as Hive Moderation. The image showed clear signs of manipulation, distorted details, and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created rather than a real photograph.
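Reverse image search engines typically match images by perceptual fingerprints rather than exact bytes, so a re-compressed or resized copy still matches its source. The sketch below is a toy "average hash" on small grayscale pixel grids, purely to illustrate the idea; real services and the detection tools named above use far more sophisticated features.

```python
# Toy perceptual "average hash": 1 bit per pixel, set when the pixel is
# brighter than the image's mean brightness. Near-duplicate images yield
# nearly identical hashes; unrelated images do not.

def average_hash(pixels):
    """Compute a perceptual hash from a 2D grayscale pixel grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 50], [200, 50, 50], [50, 50, 50]]
# Slightly re-compressed copy of the same image (pixel values shifted).
recompressed = [[198, 205, 55], [199, 48, 52], [47, 55, 49]]
# A completely different image.
unrelated = [[50, 200, 50], [200, 50, 200], [50, 200, 50]]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # the near-duplicate scores far lower
```

The pixel grids here are invented examples; the point is only that small pixel-level perturbations leave the fingerprint intact, which is what lets a reverse search trace a viral image back to earlier copies.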

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies such as The Indian Express and Hindustan Times and confirmed by aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to the spread of misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis- and disinformation. Programs that teach people to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods: reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings is called "gamification," and it could be a new and engaging way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread across different regions and diverse cultural contexts. AI-powered voice interfaces and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most. A game with a WhatsApp-like interface, where players must decide whether to ignore, fact-check, or forward messages that are going viral, could be particularly effective in India.
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify stories that are going viral, using real-life tools such as reverse image searches and reliable fact-checking websites. Research shows that interactive fake-news-spotting tasks make people more aware of misinformation over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile-data incentives. Partnerships with telecom providers could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) report that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
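The WhatsApp-style decision game described above can be sketched very simply: the player chooses whether to forward, fact-check, or ignore each incoming message, and the scoring rewards verification while penalising the forwarding of misinformation. The messages and point values below are invented for illustration.

```python
# Minimal sketch of the forward / fact-check / ignore game loop.
# Messages and scoring rules are hypothetical examples.

MESSAGES = [
    {"text": "Govt to give free data to all users! Forward to 10 groups!",
     "is_misinformation": True},
    {"text": "Weather dept issues official heavy-rain alert for tomorrow.",
     "is_misinformation": False},
]

def score_choice(message, choice):
    """Reward fact-checking; penalise forwarding misinformation."""
    if choice == "fact-check":
        return 2                       # checking is always the safest habit
    if choice == "forward":
        return -3 if message["is_misinformation"] else 1
    return 0                           # ignoring is neutral

def play(choices):
    return sum(score_choice(m, c) for m, c in zip(MESSAGES, choices))

careless = play(["forward", "forward"])    # forwards everything blindly
careful = play(["fact-check", "forward"])  # verifies the suspicious one
print(careless, careful)
```

A real deployment would wrap this loop in a chatbot interface and draw messages from a curated, localised pool, but the incentive structure is the core of the game design.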
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation-detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently across languages and regions.
Possible Opportunities
Augmented reality (AR) scavenger hunts, interactive misinformation events, and educational misinformation tournaments are all examples of games that can help fight misinformation. By making media literacy fun and interesting, India can help millions of people, especially the young, think critically and resist the spread of false information. Using artificial intelligence in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, deepening what players learn.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also raises problems that must be considered:
- Ethical Concerns: Games that imitate how fake news spreads must ensure players do not inadvertently learn how to spread false information more effectively.
- Scalability: Although worldwide misinformation initiatives exist, developing and scaling localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviours, while accounting for cultural and socio-economic contexts.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Introduction
Phishing-as-a-Service (PhaaS) platform 'LabHost' has been a significant player in cybercrime targeting North American banks, particularly financial institutions in Canada. LabHost offers turnkey phishing kits, infrastructure for hosting pages, email content generation, and campaign overview services to cybercriminals in exchange for a monthly subscription. The platform's popularity surged after it introduced custom phishing kits for Canadian banks in the first half of 2023. Fortra reports that LabHost has overtaken Frappo, cybercriminals' previous favorite PhaaS platform, and is now the primary driving force behind most phishing attacks targeting Canadian bank customers.
We are in a digital realm where the barriers to entry for nefarious activities are crumbling, and the tools of the trade are packaged and sold with the same customer service one might expect from a legitimate software company. This is the world of Phishing-as-a-Service (PhaaS), and at the forefront of this ominous trend is LabHost, a platform that has been instrumental in escalating attacks on North American banks, with a particular focus on Canadian financial institutions.
LabHost is not a newcomer to the cybercrime scene, but its ascent to infamy was catalyzed by the introduction of custom phishing kits tailored for Canadian banks in the first half of 2023. The platform operates on a subscription model, offering turnkey solutions that include phishing kits, infrastructure for hosting malicious pages, email content generation, and campaign overview services. For a monthly fee, cybercriminals are handed the keys to a kingdom of deception and theft.
Emergence of Labhost
The rise of LabHost has been meticulously chronicled by cybersecurity firms, which report that LabHost has dethroned the previously favored PhaaS platform, Frappo, and become the primary driving force behind the majority of phishing attacks targeting customers of Canadian banks. Despite suffering a disruptive outage in early October 2023, LabHost has rebounded with vigor, orchestrating several hundred attacks per month.
Investigations into LabHost's operations reveal a tiered membership system: Standard, Premium, and World, with monthly fees of $179, $249, and $300, respectively. Each tier offers an escalating scope of targets, from Canadian banks up to 70 institutions worldwide, excluding North America. The phishing templates provided by LabHost are not limited to financial entities; they also cover online services like Spotify, postal delivery services like DHL, and regional telecommunication providers.
LabRat
The true ingenuity of LabHost lies in its integration with 'LabRat,' a real-time phishing management tool that enables cybercriminals to monitor and control an active phishing attack. This tool is a linchpin in man-in-the-middle style attacks, designed to capture two-factor authentication codes, validate credentials, and bypass additional security measures. In essence, LabRat is the puppeteer's strings, allowing the phisher to manipulate the attack with precision and evade the safeguards that are the bulwarks of our digital fortresses.
LabSend
In the aftermath of its October disruption, LabHost unveiled 'LabSend,' an SMS spamming tool that embeds links to LabHost phishing pages in text messages. This tool orchestrates a symphony of automated smishing campaigns, randomizing portions of text messages to slip past the vigilant eyes of spam detection systems. Once the SMS lure is cast, LabSend responds to victims with customizable message templates, a Machiavellian touch to an already insidious scheme.
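The randomisation trick LabSend uses has a well-known defensive counterpart: spam filters often canonicalise messages (lowercasing, masking links and numbers, stripping punctuation) so that randomised variants of the same smishing template collapse to one form that can be blocked. The sketch below illustrates the idea; the specific normalisation steps are a simplified assumption, not any vendor's actual algorithm.

```python
# Simplified message canonicalisation: randomised smishing variants of
# one template collapse to the same canonical string.
import re

def canonicalize(message):
    msg = message.lower()
    msg = re.sub(r"https?://\S+", "<url>", msg)   # mask links
    msg = re.sub(r"\d+", "<num>", msg)            # mask randomised numbers
    msg = re.sub(r"[^a-z<>\s]", "", msg)          # drop punctuation noise
    return " ".join(msg.split())                  # collapse whitespace

# Two hypothetical variants of the same smishing template.
variant_a = "URGENT!! Your parcel #48213 is held. Pay at http://bad.example/a1"
variant_b = "urgent: your parcel #90177 is held... pay at http://bad.example/z9"

print(canonicalize(variant_a) == canonicalize(variant_b))  # same template
```

Once variants collapse to one canonical form, ordinary frequency-based blocking works again, which is exactly the defence that per-message randomisation is designed to evade.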
The Proliferation of PhaaS
The proliferation of PhaaS platforms like LabHost, 'Greatness,' and 'RobinBanks' has democratized cybercrime, lowering the threshold for entry and enabling even unskilled hackers to launch sophisticated attacks. These platforms are catalysts for an exponential increase in the pool of threat actors, magnifying the global impact of cybercrime.
The ease with which these services can be accessed and utilized belies the complexity and skill traditionally required to execute successful phishing campaigns. Stephanie Carruthers, who leads an IBM X-Force phishing research project, notes that crafting a single phishing email can consume upwards of 16 hours, not accounting for the time and resources needed to establish the infrastructure for sending the email and harvesting credentials.
PhaaS platforms like LabHost have commoditized this process, offering a buffet of malevolent tools that can be customized and deployed with a few clicks. The implications are stark: the security measures that businesses and individuals have come to rely on, such as multi-factor authentication (MFA), are no longer impenetrable. PhaaS platforms have engineered ways to circumvent these defenses, rendering them vulnerable to exploitation.
Emerging Cyber Defense
In the face of this escalating threat, a multi-faceted defense strategy is imperative. Cybersecurity solutions like SpamTitan employ advanced AI and machine learning to identify and block phishing threats, while end-user training platforms like SafeTitan provide ongoing education to help individuals recognize and respond to phishing attempts. However, with phishing kits now capable of bypassing MFA, it is clear that more robust solutions, such as phishing-resistant MFA based on FIDO/WebAuthn authentication or Public Key Infrastructure (PKI), are necessary to thwart these advanced attacks.
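Why does FIDO/WebAuthn resist the man-in-the-middle relays that tools like LabRat automate? An OTP code is valid wherever it is typed, so a phishing page can simply relay it to the real site; a WebAuthn-style response instead signs over the origin the user is actually visiting, so a response produced for a look-alike domain fails verification at the real bank. The sketch below is a conceptual model of that origin binding using an HMAC as a stand-in signature, not the actual WebAuthn protocol.

```python
# Conceptual model of origin-bound authentication. The HMAC key stands in
# for the authenticator's private key; domains are hypothetical examples.
import hashlib
import hmac

KEY = b"device-private-key"

def sign(challenge, origin):
    """Authenticator response bound to the origin the browser reports."""
    return hmac.new(KEY, challenge + origin.encode(), hashlib.sha256).hexdigest()

def bank_verifies(challenge, response):
    """The real bank only accepts responses signed over its own origin."""
    expected = sign(challenge, "https://realbank.example")
    return hmac.compare_digest(expected, response)

challenge = b"nonce-123"

# User on the genuine site: signature covers the real origin -> accepted.
legit = sign(challenge, "https://realbank.example")
# User lured to a look-alike domain: even if the phisher relays the
# response in real time, it covers the wrong origin -> rejected.
phished = sign(challenge, "https://rea1bank.example")

print(bank_verifies(challenge, legit), bank_verifies(challenge, phished))
```

The relayed OTP attack works precisely because nothing in a six-digit code names the site it was entered on; binding the response to the origin closes that gap.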
Conclusion
The emergence of PhaaS platforms represents a significant shift in the landscape of cybercrime, one that requires a vigilant and sophisticated response. As we navigate this treacherous terrain, it is incumbent upon us to fortify our defenses, educate our users, and remain ever-watchful of the evolving tactics of cyber adversaries.
References
- https://www-bleepingcomputer-com.cdn.ampproject.org/c/s/www.bleepingcomputer.com/news/security/labhost-cybercrime-service-lets-anyone-phish-canadian-bank-users/amp/
- https://www.techtimes.com/articles/302130/20240228/phishing-platform-labhost-allows-cybercriminals-target-banks-canada.htm
- https://www.spamtitan.com/blog/phishing-as-a-service-threat/

Introduction
The United Nations (UN) has unveiled a set of principles, known as the 'Global Principles for Information Integrity', to combat the spread of online misinformation, disinformation, and hate speech. These guidelines aim to address the widespread harm caused by false information on digital platforms. The UN's Global Principles are based on five core principles: social trust and resilience, independent, free, and pluralistic media, healthy incentives, transparency and research, and public empowerment. The UN chief emphasized that the threats to information integrity are not new but are now spreading at unprecedented speeds due to digital platforms and artificial intelligence technologies.
These principles aim to enhance global cooperation in order to create a safer online environment. The UN further highlighted that the spread of misinformation, disinformation, hate speech, and other risks in the information environment threatens democracy, human rights, climate action, and public health. This impact is intensified by rapidly advancing artificial intelligence (AI) technologies, which pose a growing threat to vulnerable groups in information environments.
The Highlights of Key Principles
- Societal Trust and Resilience: Trust in information sources and the ability and resilience to handle disruptions are critical for maintaining information integrity. Both are at risk from state and non-state actors exploiting the information ecosystem.
- Healthy Incentives: Current business models reliant on targeted advertising threaten information integrity. The complex, opaque nature of digital advertising benefits large tech companies and requires reform to ensure transparency and accountability.
- Public Empowerment: People require the capability to manage their online interactions, the availability of varied and trustworthy information, and the capacity to make informed decisions. Media and digital literacy are crucial, particularly for marginalized populations.
- Independent, Free, and Pluralistic Media: A free press supports democracy by fostering informed discourse, holding power accountable, and safeguarding human rights. Journalists must operate safely and freely, with access to diverse news sources.
- Transparency and research: Technology companies must be transparent about how information is propagated and how personal data is used. Research and privacy-preserving data access should be encouraged to address information integrity gaps while protecting those investigating and reporting on these issues.
Stakeholders Called for Action
Stakeholders, including technology companies, AI actors, advertisers, media, researchers, civil society organizations, state and political actors, and the UN, have been called to take action under the UN Global Principles for Information Integrity. These principles should be used to build and participate in broad cross-sector coalitions that bring together diverse expertise from civil society, academia, media, government, and the international private sector, focussing on capacity-building and meaningful youth engagement through dedicated advisory groups. Additionally, collaboration is required to develop multi-stakeholder action plans at regional, national, and local levels, engaging communities in grassroots initiatives and ensuring that youth are fully and meaningfully involved in the process.
Implementation and Monitoring
Effectively implementing the UN Global Principles requires developing multi-stakeholder action plans at the regional, national, and local levels. These plans should be informed by advice and counsel from a wide range of communities, including grassroots initiatives with a deep understanding of regional challenges and their specific needs. Monitoring and evaluation are also essential components of the implementation process. Regular assessments of progress, combined with the flexibility to adapt strategies as needed, will help ensure that the principles are effectively translated into practice.
Challenges and Considerations
Implementing the UN's Global Principles will face certain challenges. The complexity of the digital landscape, the rapid pace of technological change, and the diversity of cultural and political contexts all present significant hurdles. Furthermore, efforts to combat misinformation must be balanced with protecting fundamental rights, including freedom of expression and privacy. Addressing these challenges will require ongoing collaboration and constant dialogue among stakeholders, along with a commitment to innovation and continuous learning. It is also important to recognise and address power imbalances within the information ecosystem, ensuring that all voices are heard and that marginalised communities in particular are not cast aside.
Conclusion
The UN Global Principles for Information Integrity provide a comprehensive framework for addressing the critical challenges facing information integrity today. By promoting societal trust, healthy incentives, public empowerment, independent media, and transparency, these principles offer a path towards a more resilient and trustworthy digital environment. Their future success depends on the collaborative efforts of all stakeholders, working together to safeguard the integrity of information for everyone.
References
- https://www.business-standard.com/world-news/un-unveils-global-principles-to-combat-online-misinformation-hate-speech-124062500317_1.html
- https://www.un.org/sustainabledevelopment/blog/2024/06/global-principles-information-integrity-launch/
- https://www.un.org/sites/un2.un.org/files/un-global-principles-for-information-integrity-en.pdf
- https://www.un.org/en/content/common-agenda-report/assets/pdf/Common_Agenda_Report_English.pdf