#FactCheck: Air Taxi is a prototype and has not been launched for commercial public use
Executive Summary:
Recent reports circulating on various social media platforms have falsely claimed that an air taxi prototype is operational and providing services between Amritsar, Chandigarh, Delhi, and Jaipur. These claims, accompanied by images and videos, have been widely shared, attracting significant public attention. However, a thorough examination using reverse image search has determined that the information is misleading and inaccurate. These assertions do not reflect the current reality and are not substantiated by credible sources.

Claim:
The claim suggests that an air taxi prototype is already operational, servicing routes between Amritsar, Chandigarh, Delhi, and Jaipur. This assertion is accompanied by images of a futuristic aircraft, implying that such technology is currently being used to transport commercial passengers.

Fact Check:
The claim of an air taxi service operating between Amritsar, Chandigarh, Delhi, and Jaipur has been found to be misleading. So far, neither the Indian government, the aviation authorities, nor industry insiders have announced the launch of any air taxi service. A keyword-based search led us to a news report published in The Times of India on January 20, 2025, accompanied by an image similar to the one in the viral video. It stated that Bengaluru-based aerospace startup Sarla Aviation unveiled its prototype air taxi, called “Shunya”, at the Bharat Mobility Global Expo, with plans to launch electric flying taxis in Bengaluru by 2028. The viral posts appear to show this prototype, not an operational commercial service.

Conclusion:
The viral claim that an air taxi service is operating in India between Amritsar, Chandigarh, Delhi, and Jaipur is entirely false. The pictures and information going viral are misleading and do not relate to any deployment of air taxi technology in India. To date, there is no official confirmation or credible evidence supporting such a service. Information must be verified from reliable sources before it is believed or shared, in order to prevent the spread of misinformation.
- Claim: A viral post claims an air taxi is operational between Amritsar, Chandigarh, Delhi, and Jaipur.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
With AI technologies evolving at a fast pace, an AI-driven phishing attack on a large Indian financial institution in 2024 illustrated the emerging threats. This case study documents the attack techniques used, the ramifications for the institution, the response undertaken, and the resulting outcomes. It also examines the challenges of building better protection against, and raising awareness of, automated threats.
Introduction
With the advancement of AI technology, its use in cybercrime has become a significant threat to financial institutions worldwide. This report analyses a serious incident from early 2024, in which a leading Indian bank was hit by a highly sophisticated, AI-supported phishing operation. The attack exploited AI's capabilities for data analysis and persuasion, leading to a severe compromise of the bank's internal systems.
Background
The targeted financial institution, one of the largest banks in India, had a strong track record of stringent cybersecurity policies. However, AI-based methods posed new threats that earlier forms of security could not fully counter. The attackers concentrated on the bank's top executives, since compromising such individuals offers access to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that closely mimicked internal emails exchanged between employees. Drawing on the executives' Facebook and Twitter content, blog entries, LinkedIn connection history, and email tone, the AI generated highly personalised emails. Many featured official formatting, specific internal language, and the CEO's writing style, making them very convincing.
The phishing emails contained links that led users to a fake internal portal designed to harvest login credentials. Because of this sophistication, the targeted individuals believed the emails were genuine and readily entered their login details, giving the attackers access to the bank's network.
Impact
The attack had a severe impact on the bank. Several executives entered their credentials into the fake portal, compromising financial databases containing customer account and transaction information. The breach allowed the criminals to disrupt a number of the bank's online services, affecting its operations and its customers for several days.

The bank also suffered a serious blow to customer trust, as the breach exposed its vulnerability to contemporary cyber threats. Beyond the immediate work of mitigating the breach, the institution faced a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large text corpora, the attackers could condition them on conversation samples from social networks and emails to create highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took the nature of prior interactions into account, writing follow-up emails that were fully consistent with earlier discourse.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, extrapolating elements such as tone, word choice, and the format of the signature line.
- Adaptive Learning: The AI adjusted the generated emails based on the results of earlier attempts, making the campaign progressively harder to detect.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this was a spear-phishing attack, in which the attackers targeted specific individuals with tailored emails. Machine learning algorithms supplied social engineering cues that significantly increased the chances that particular individuals would respond to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers scraped employees' public profiles and posts to identify targets and tailor messages to them.
- Behavioural Analysis: The AI used targets' recent behaviour patterns on social networking sites and other online platforms to forecast likely actions, such as clicking links or opening attachments.
- Real-Time Adjustments: When a response (or lack of one) to a phishing email was detected, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers leveraged AI to evade standard email filters, modifying the contents of the emails so that spam filters would not easily detect them while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different aspects of each message, producing several variants of the phishing email to defeat different filtering algorithms.
- Polymorphic Attacks: The campaign used polymorphic techniques, frequently changing the payloads behind the links so that antivirus tools struggled to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and rotate phantom domains: short-lived websites that appear legitimate but were created specifically for this phishing attack, further complicating detection.
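From the defender's side, dynamic content alteration leaves a detectable fingerprint: many messages that are almost, but not exactly, identical. The following is a minimal sketch of flagging such near-duplicates using only Python's standard library; the sample texts and the 0.8 threshold are illustrative assumptions, not values from the incident.

```python
# Sketch: spotting the "many slight variants of one template" pattern
# produced by dynamic content alteration. Sample emails and the 0.8
# threshold are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; values near 1.0 indicate near-duplicate texts."""
    return SequenceMatcher(None, a, b).ratio()

template = "Dear team, please review the attached invoice and confirm today."
variant = "Dear team, kindly review the attached invoice and confirm today."
unrelated = "Minutes of the quarterly board meeting are now available."

print(similarity(template, variant) > 0.8)    # prints True (near-duplicate)
print(similarity(template, unrelated) > 0.8)  # prints False (unrelated)
```

Production filters use scalable techniques such as locality-sensitive hashing rather than pairwise comparison, but the underlying principle of flagging clusters of slight variants is the same.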
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified the psychological principles, chiefly urgency and familiarity, most likely to make the targeted recipients act on the phishing emails.
- Multi-Layered Deception: The AI used a two-tiered approach: once a target opened the first email, a second was sent under the pretext of a follow-up from a genuine company or individual.
Response
On discovering the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the origin of the attack and block further intrusions. The bank also immediately took measures to strengthen its security, for instance by tightening email filtering and authentication procedures.

Recognising the risks, the bank also launched a new organisation-wide cybersecurity awareness programme, focused on educating employees about AI-generated phishing and the necessity of verifying a sender's identity before acting on an email.
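The sender-verification habit such awareness programmes promote can be partly automated. Below is a minimal sketch assuming the receiving mail server stamps a standard Authentication-Results header (RFC 8601); the sample message and its domains are fabricated for illustration, and real deployments should also enforce DMARC policy.

```python
# Sketch: flagging emails whose sender-authentication checks did not pass.
# Assumes the mail server adds an Authentication-Results header (RFC 8601);
# the sample message and look-alike domain are fabricated.
from email import message_from_string

RAW_EMAIL = """\
From: "CEO" <ceo@examp1e-bank.com>
Authentication-Results: mx.bank.example;
 spf=fail smtp.mailfrom=examp1e-bank.com;
 dkim=none
Subject: Urgent: verify your credentials

Please log in at the portal below immediately.
"""

def sender_auth_suspicious(raw: str) -> bool:
    """Return True if SPF or DKIM did not pass (or no auth header exists)."""
    msg = message_from_string(raw)
    results = msg.get("Authentication-Results", "")
    if not results:
        return True  # no authentication info at all: treat as suspicious
    spf_ok = "spf=pass" in results
    dkim_ok = "dkim=pass" in results
    return not (spf_ok and dkim_ok)

print(sender_auth_suspicious(RAW_EMAIL))  # prints True (spf=fail, dkim=none)
```

Such a check cannot catch every spoof, but it cheaply surfaces messages, like look-alike-domain CEO emails, that deserve the manual verification the training programme calls for.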
Outcome
Although the bank was able to restore its operations without lasting critical impact, the attack raised serious issues. Reported losses included compensation paid to affected customers and the cost of implementing stronger cybersecurity measures. More damagingly, the incident shook confidence in the bank, as customers and shareholders began to doubt the organisation's capacity to safeguard information in an era of advanced AI-driven cyber threats.

This case underscores the need for financial firms to align their security plans against emerging threats. It is also a warning to other organisations that they are not immune from such AI-assisted attacks and should take proper countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers' capabilities. As AI technology progresses, so do the cyberattacks built on it. Financial institutions and other organisations can protect their systems and data only by adopting adequate AI-aware cybersecurity solutions.

Moreover, this case highlights the importance of training employees to prevent successful cyberattacks. Strong cybersecurity awareness, secure employee behaviour, and practices that enable staff to recognise and report likely AI-driven attacks all help the organisation minimise risk.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should employ AI-driven detection and response products capable of mitigating AI-enabled cyber threats in real time.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require stronger identity verification and multi-factor authentication.
- Collaboration with CERT-In: Continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents, to monitor new threats and receive validated recommendations.
- Public Communication Strategies: Effective communication plans should be in place to keep customers informed and maintain their trust even when the organisation is facing a cyber threat.
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled cybercrime poses to essential financial assets in today's complex IT environments.
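To make the AI-based-defences recommendation concrete, here is a deliberately simple, rule-based scorer illustrating the kind of signals (urgency language, credential requests, odd-looking domains) such products weigh. The patterns and weights are illustrative assumptions; real products use trained models rather than hand-written rules.

```python
# Sketch: a minimal rule-based phishing scorer. The patterns and weights
# below are illustrative assumptions, not a production ruleset.
import re

SIGNALS = {
    r"\burgent(ly)?\b": 2,                        # urgency pressure
    r"\bverify your (credentials|account)\b": 3,  # credential-harvest lure
    r"https?://[^\s]*\d+[^\s]*\.(com|net)": 2,    # digits in domain (look-alikes)
    r"\bpassword\b": 1,
}

def phishing_score(text: str) -> int:
    """Sum the weights of every suspicious pattern found in the email body."""
    text = text.lower()
    return sum(w for pat, w in SIGNALS.items() if re.search(pat, text))

email_body = "URGENT: please verify your credentials at http://examp1e-bank.com"
print(phishing_score(email_body))  # prints 7 (urgency + lure + odd domain)
```

A threshold over such a score could route messages for quarantine or manual review; an AI-driven product would learn these signals from data instead of enumerating them by hand.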

Introduction
‘Digital arrests’ are a form of scam involving the digital restraint of individuals. The restraint can range from restricting access to accounts and digital platforms, to measures preventing further digital activity, to being held on a video call and monitored through it. Typically, these scams target vulnerable individuals unfamiliar with digital fraud tactics, making them more susceptible to manipulation. Victims are often accused of serious crimes such as drug trafficking, money laundering, or document forgery, and the scammers frighten them into believing that either their identities were used to commit these crimes or they committed the crimes themselves. A recent uptick in such scams in India highlights the growing concern.
The Legality of Digital Arrests in India
There is no legal provision for law enforcement to conduct ‘arrests’ via video calls or online monitoring; any such call is a clear scam. In fact, the recently enacted criminal laws contain no provision for law enforcement agencies to conduct a digital arrest. The law provides only for the service of summons and the conduct of proceedings in electronic mode.
The Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023 provides for summons to be served electronically under Section 63, which defines the form of summons: every summons served electronically shall be encrypted and bear the image of the seal of the Court or a digital signature. Further, under Section 532 of the BNSS, trials and proceedings may be held in electronic mode, through electronic communication or audio-video electronic means.
Modus Operandi
In a digital arrest scam, the scammer connects with the victim via video call (WhatsApp, Skype, etc.) over their alleged involvement in crimes (financial fraud, drug trafficking, etc.) on bogus charges. Victims are told that their arrest is imminent and that, until the arresting officers reach them, they must remain on the call under digital surveillance and not contact anyone during the ‘ongoing investigation’.

During this period, the scammers collect information from the victim to ‘confirm their identity’ and stage an atmosphere in which multiple senior officials appear to be investigating the case thoroughly. The victim, scared out of their wits, sits through this ‘arrest’, at which point the scammers, posing as law enforcement officials, suggest that arrest can be avoided by paying fines into accounts they specify. The monitoring continues until the victim transfers the money to those accounts. These are the common manipulation tactics used in digital arrest fraud.
Recent Cyber Arrest Cases
- Recently a 35-year-old NBCC official was duped of Rs 55 lakh in a 'digital arrest' scam. Posing as customs officials, fraudsters claimed her details were linked to intercepted illegal items and a pending arrest. They kept her on video calls, convincing her to transfer Rs 55 lakh to avoid money laundering charges. After the transfer, the scammers vanished. A police investigation traced the funds to a fake company, leading to the arrest of suspects.
- Another recent case involved a neurologist who was duped of Rs 2.81 crore in a ‘digital arrest’ scam. Fraudsters claimed her phone number and Aadhaar were linked to accounts transferring funds to an individual. Under pressure, she was convinced to undergo “verification” and made multiple transactions over two days, with the scammers threatening legal consequences for money laundering if she did not comply. A police investigation is ongoing, and her immense financial loss highlights the severity of this cybercrime.
- In another case, the victim was duped of Rs 7.67 crore in a prolonged ‘digital arrest’ scam spanning three months. Fraudsters posing as TRAI officials claimed there were complaints against her phone number and threatened to suspend it, alleging illegal use of another number linked to her Aadhaar. Pressured and manipulated through video calls, the victim was coerced into transferring large sums, even taking out an Rs 80 lakh loan. The case is under investigation as authorities pursue the cybercriminals behind the massive fraud.
Best Practices
- Do not panic when you get any calls where sudden unexpected news is shared with you. Scammers thrive on the panic that they create.
- Do not share personal details such as your Aadhaar number or PAN number with unknown or suspect entities, and never share financial information such as credit card numbers, OTPs, or passwords with anyone.
- If individuals contact you claiming to be government officials, always verify their identity by reaching the relevant agency through its official channels.
- Report and block any fraudulent communications that are received and mark them as Spam. This would further inform other users if they see the caller ID being marked as fraud or spam.
- If you have been defrauded, report it to the authorities so that action can be taken and the fraudsters can be caught.
- Do not transfer any money as part of ‘fines’ or ‘dues’ to the accounts that these calls or messages link to.
- In case of any threat, issue or discrepancy, file a complaint at cybercrime.gov.in or helpline number 1930. You can also seek assistance from the CyberPeace helpline at +91 9570000066.
References:
- https://www.cyberpeace.org/resources/blogs/digital-arrest-fraud
- https://www.business-standard.com/india-news/what-is-digital-house-arrest-find-out-how-to-avoid-this-new-scam-124052400799_1.html
- https://www.the420.in/ias-ips-officers-major-generals-doctors-and-professors-fall-victim-to-digital-arrest-losing-crores-stay-alert-read-5-real-cases-inside/
- https://indianexpress.com/article/cities/delhi/senior-nbcc-official-duped-in-case-of-digital-arrest-3-arrested-delhi-police-9588418/#:~:text=Of%20the%20duped%20amount%2C%20Rs,a%20Delhi%20police%20officer%20said (case study 1)
- https://timesofindia.indiatimes.com/city/lucknow/lucknow-sgpgims-professor-duped-of-rs-2-81-crore-in-digital-arrest-scam/articleshow/112521530.cms (case study 2)
- https://timesofindia.indiatimes.com/city/jaipur/bits-prof-duped-of-7-67cr-cops-want-cbi-probe-in-case/articleshow/109514200.cms (case study 3)

Introduction
In the ever-evolving world of technological innovation, a new chapter is being inscribed by the bold visionaries at Figure AI, a startup that is not merely riding the artificial intelligence craze but seeking to crest its very pinnacle. With the recent influx of a staggering $675 million in funding, this Sunnyvale, California-based enterprise has captured the imagination of industry giants and venture capitalists alike, all betting on a future where humanoid robots transcend the realm of science fiction to become an integral part of our daily lives.
The narrative of Figure AI's ascent is punctuated by the names of tech luminaries and corporate giants. Jeff Bezos, through his firm Explore Investments LLC, has infused a hefty $100 million into the venture. Microsoft, not to be outdone, has contributed a cool $95 million. Nvidia and an Amazon-affiliated fund have each bestowed $50 million upon Figure AI's ambitious endeavours. This surge of capital is a testament to the potential seen in the company's mission to develop general-purpose humanoid robots that promise to revolutionise industries and redefine human labour.
The Catalyst for Change
This investment craze can be traced back to the emergence of OpenAI's ChatGPT, a chatbot that caught the public eye in November 2022. Its success has not only ushered in a new era for AI but has also sparked a race among investors eager to stake their claim in startups determined to outshine their more established counterparts. OpenAI itself, once mulling over the acquisition of Figure AI, has now joined the ranks of its benefactors with a $5 million investment.
The roster of backers reads like a who's who of the tech and venture capital world. Intel's venture capital arm, LG Innotek, Samsung's investment group, Parkway Venture Capital, Align Ventures, ARK Venture Fund, Aliya Capital Partners, and Tamarack have all cast their lot with Figure AI, signalling a broad consensus on the startup's potential to disrupt and innovate.
Yet, when probed for insights, these major players—Amazon, Nvidia, Microsoft, and Intel—have maintained a Sphinx-like silence, while Figure AI and other entities mentioned in the report have refrained from immediate responses to inquiries. This veil of secrecy only adds to the intrigue surrounding the company's prospects and the transformative impact its technology may have on society.
Need For AI Robots
Figure AI's robots are not mere assemblages of metal and circuitry; they are envisioned as versatile machines capable of navigating a multitude of environments and executing a diverse array of tasks. From warehouse aisles to the bustling corridors of retail spaces, these humanoid automatons are being designed to fill the millions of jobs projected to remain vacant due to a shrinking human labour force.
The company's long-term mission statement is as audacious as it is altruistic: 'to develop general-purpose humanoids that make a positive impact on humanity and create a better life for future generations.' This noble pursuit is not just about engineering efficiency; it is about reshaping the very fabric of work, liberating humans from hazardous and menial tasks, and propelling us towards a future where our lives are enriched with purpose and fulfilment.
Conclusion
As we stand on the cusp of a new digital world, the strides of Figure AI serve as a beacon, illuminating the path towards machine and human symbiosis. The investment frenzy that has enveloped the company is a clarion call to all dreamers, pragmatists and innovators alike that the age of humanoid helpers is upon us, and the possibilities are as endless as our collective imagination.
Figure AI is forging a future where robots walk among us, not as novelties or overlords but as partners in forging a world where technology and humanity work together to unlock untold potential. The story of Figure AI is not just one of investment and innovation; it is a narrative of hope, a testament to the indomitable spirit of human ingenuity, and a preview of the wondrous epoch that lies just beyond the horizon.
References
- https://cybernews.com/tech/openai-bezos-nvidia-fund-robot-startup-figure-ai/
- https://www.thedailystar.net/business/news/bezos-nvidia-join-openai-funding-humanoid-robot-startup-3551476
- https://www.bloomberg.com/news/articles/2024-02-23/bezos-nvidia-join-openai-microsoft-in-funding-humanoid-robot-startup-figure-ai
- https://economictimes.indiatimes.com/tech/technology/bezos-nvidia-join-openai-in-funding-humanoid-robot-startup-report/articleshow/107967102.cms?from=mdr