#FactCheck: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
Executive Summary:
A viral video (archive link) claims General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during clashes with Pakistan. Verification revealed that the footage is from a speech the COAS delivered at IIT Madras, in which no such statement was made. AI detection tools confirmed that parts of the audio were artificially generated.
Claim:
The claim in question is that General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during recent clashes with Pakistan.

Fact Check:
Upon conducting a reverse image search on key frames from the video, it was found that the original footage is from IIT Madras, where the Chief of Army Staff (COAS) was delivering a speech. The video is available on the official YouTube channel of ADGPI – Indian Army, published on 9 August 2025, with the description:
“Watch COAS address the faculty and students on ‘Operation Sindoor – A New Chapter in India’s Fight Against Terrorism,’ highlighting it as a calibrated, intelligence-led operation reflecting a doctrinal shift. On the occasion, he also focused on the major strides made in technology absorption and capability development by the Indian Army, while urging young minds to strive for excellence in their future endeavours.”
A review of the full speech revealed no reference to the destruction of six jets or the loss of 250 Army personnel. This indicates that the circulating claim is not supported by the original source and may contribute to the spread of misinformation.
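The first verification step described above, extracting representative frames from the clip so they can be run through a reverse image search, can be reproduced with standard open-source tooling. Below is a minimal sketch using OpenCV; the file name viral_clip.mp4 and the one-frame-per-second sampling rate are illustrative assumptions.

```python
import cv2  # pip install opencv-python

def extract_key_frames(video_path: str, out_prefix: str = "frame", every_n_seconds: int = 1) -> int:
    """Save one frame every `every_n_seconds` seconds as JPEGs for reverse image search."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back to 25 fps if metadata is missing
    step = int(fps * every_n_seconds)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # Hypothetical local copy of the viral clip
    count = extract_key_frames("viral_clip.mp4")
    print(f"Saved {count} frames for reverse image search")
```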

Further analysis using AI detection tools such as Hive Moderation indicated that portions of the audio within the speech are AI-generated.
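Deepfake-audio detectors such as Hive Moderation are typically used by submitting the suspect audio and reading back a likelihood score for synthetic speech. The sketch below shows only that general workflow; the endpoint URL, headers and response field are hypothetical placeholders, not Hive's documented API.

```python
import requests  # pip install requests

# Hypothetical endpoint and token; consult the detection vendor's documentation for real values.
API_URL = "https://example-detector.invalid/v1/audio/ai-detect"
API_TOKEN = "YOUR_API_TOKEN"

def check_audio(path: str) -> float:
    """Upload an audio file and return the (hypothetical) probability that it is AI-generated."""
    with open(path, "rb") as fh:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": fh},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json().get("ai_generated_probability", 0.0)

if __name__ == "__main__":
    score = check_audio("coas_clip_audio.wav")  # audio track extracted from the viral clip
    print(f"Estimated probability of synthetic speech: {score:.2f}")
```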

Conclusion:
The claim is baseless. The video is a manipulated creation that combines genuine footage of General Dwivedi’s IIT Madras address with AI-generated audio to fabricate a false narrative. No credible source corroborates the alleged military losses.
- Claim: COAS admitted to the loss of 6 jets and 250 soldiers during clashes with Pakistan
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

CAPTCHA, or the Completely Automated Public Turing Test to Tell Computers and Humans Apart, is an image or distorted text that users have to identify or interpret to prove they are human. reCAPTCHA, launched in 2007 and later acquired by Google as a free service, remains one of the most commonly used technologies for telling computers apart from humans. CAPTCHA protects websites from spam and abuse by using tests that are easy for humans but were designed to be difficult for bots to solve.
But this has now changed. As AI becomes more sophisticated, it can solve CAPTCHA tests with greater accuracy than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA is still an effective detection tool given the advances in AI.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
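For context on how the v3 risk-score model works in practice, reCAPTCHA v3 issues a token in the browser that the site's backend verifies against Google's documented siteverify endpoint, which returns a score between 0.0 (likely a bot) and 1.0 (likely a human). A minimal server-side sketch in Python; the secret key, token and 0.5 threshold are placeholders.

```python
import requests  # pip install requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "YOUR_RECAPTCHA_SECRET_KEY"  # placeholder; issued per site by Google

def verify_token(token: str, min_score: float = 0.5) -> bool:
    """Verify a reCAPTCHA v3 token and accept the request only above a score threshold."""
    resp = requests.post(VERIFY_URL, data={"secret": SECRET_KEY, "response": token}, timeout=10)
    result = resp.json()
    # `success` confirms the token is valid; `score` is the 0.0-1.0 human-likelihood estimate.
    return bool(result.get("success")) and result.get("score", 0.0) >= min_score

# Example: the token would normally arrive from the browser with the form submission.
# print(verify_token("token-from-client"))
```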
Smarter Bots and Their Rise
AI techniques such as machine learning, deep learning and neural networks have developed at a very fast pace over the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA types such as text and images with almost human-like accuracy. Optical Character Recognition (OCR) is one example: because early versions of CAPTCHA relied on distorted text, OCR-equipped AI can recognise and decipher that text, rendering such CAPTCHAs useless. AI models trained on huge datasets also enable image recognition, identifying the specific objects a challenge asks for. And through behavioural analysis, bots can mimic human habits and interaction patterns and thereby fool CAPTCHA.
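To illustrate the OCR point, off-the-shelf recognition libraries can already read much of the lightly distorted text that early CAPTCHAs relied on. A minimal sketch using Tesseract via pytesseract; the image path is a placeholder, and heavily distorted CAPTCHAs are in practice attacked with purpose-trained models rather than stock OCR.

```python
from PIL import Image, ImageFilter  # pip install pillow
import pytesseract                  # pip install pytesseract (requires the Tesseract binary)

def read_captcha(path: str) -> str:
    """Lightly clean a text CAPTCHA image, then run stock OCR on it."""
    img = Image.open(path).convert("L")               # greyscale
    img = img.point(lambda p: 255 if p > 140 else 0)  # crude binarisation to strip background noise
    img = img.filter(ImageFilter.MedianFilter(3))     # remove speckle
    # Restrict Tesseract to a single line of alphanumeric characters
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(img, config=config).strip()

if __name__ == "__main__":
    print(read_captcha("captcha_sample.png"))  # placeholder image path
```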
To defeat CAPTCHA, attackers have also been known to use adversarial machine learning: AI models trained specifically to defeat CAPTCHA. They collect CAPTCHA datasets along with their answers and train a model that can predict the correct responses. The implications of CAPTCHA failures for platforms range from fraud and spam to cybersecurity breaches and cyberattacks.
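The "train a model on harvested CAPTCHA datasets" approach described above typically amounts to an image classifier over segmented characters. A compact, illustrative PyTorch sketch follows; the 36-class alphabet, 32x32 crop size and random stand-in data are assumptions, and a real attack pipeline would add character segmentation and large labelled datasets.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 36  # assumption: digits 0-9 plus letters A-Z, one segmented character per image

class CaptchaCharNet(nn.Module):
    """Small CNN that maps a 1x32x32 greyscale character crop to one of 36 classes."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64x8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = CaptchaCharNet()
    dummy_batch = torch.randn(8, 1, 32, 32)  # stand-in for harvested character crops
    logits = model(dummy_batch)
    print(logits.shape)                       # torch.Size([8, 36])
```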
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn’t just about tools; it’s about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/
Introduction
A Pew Research Center survey conducted in September 2023 among 1,453 US teens aged 13-17 found that a majority of the age group uses TikTok (63%), Snapchat (60%) and Instagram (59%). Further, 13-19 year-olds make up 31% of social media users in India, according to a 2021 report by Statista. This widespread use is a leading factor in young users inadvertently or deliberately accessing adult content on social media platforms.
Brief Analysis of Meta’s Proposed AI Age Classifier
It can be seen as a step towards safer and more moderated content for teen users. Placing age restrictions on teen social media users matters because they may not yet have the cognitive maturity to judge what content is appropriate to share and consume on these platforms at their age. They also need to understand platform policies, and that nothing can ever be completely erased from the internet.
Unrestricted access to social media exposes teens to potentially harmful or inappropriate online content, raising concerns about their safety and mental well-being. Meta's recent measures aim to address this; however, striking a balance between engagement, protection and privacy remains essential.
The AI-based Age Classifier proposed by Meta classifies users based on their age and places them in the ‘Teen Account’ category which has built-in limits on who can contact them, the content they see and more ways to connect and explore their interests. According to Meta, teens under 16 years of age will need parental permission to change these settings.
Meta's Proposed Solution: AI-Powered Age Classifier
This tool uses Artificial Intelligence (AI) to analyze users’ online behaviours and other profile information to estimate their age. It analyses different factors such as who follows the user, what kind of content they interact with, and even comments like birthday posts from friends. If the classifier detects that a user is likely under 18 years old, it will automatically switch them to a “Teen Account.” These accounts have more restricted privacy settings, such as limiting who can message the user and filtering the type of content they can see.
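Meta has not published the classifier's internals, but the general approach it describes, estimating age from behavioural and profile signals, can be illustrated with a generic supervised model. The sketch below uses scikit-learn on synthetic data with made-up feature definitions; it is a conceptual stand-in, not Meta's system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical behavioural features per account (all definitions are illustrative assumptions):
# [median age of followers, share of teen-oriented content interactions, birthday-comment age hints]
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))
# Synthetic label: 1 = likely under 18, derived from the made-up features plus noise
y = (X[:, 0] * 0.8 + X[:, 2] * 0.5 + rng.normal(scale=0.5, size=1_000) < 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Accounts scored as likely minors would be switched to restricted "Teen Account" settings.
likely_minor = model.predict_proba(X_test)[:, 1] > 0.8
print(f"Flagged {likely_minor.sum()} of {len(X_test)} test accounts as likely under 18")
```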
The adult classifier is anticipated to be deployed by next year and will start scanning for users who may have lied about their age. All users found to be under 18 will be placed in the teen account category, but 16-17 year olds will be able to adjust these settings if they want more flexibility, while younger teens will need parental permission. The effort is part of a broader strategy to protect teens from potentially harmful content on social media. This is especially important today, as privacy violations, particularly those affecting minors, can attract penalties under legal instruments such as the GDPR, the DPDP Act and COPPA.
Policy Implications and Compliances
Meta's AI Age Classifier addresses growing concerns over teen safety on social media by categorising users based on age, restricting minors' access to adult content, and enforcing parental controls. However, reliance on behavioural tracking could affect the online privacy of teen users. Hence, Meta's approach needs to align with applicable jurisdictional laws. In India, the recently enacted DPDP Act, 2023 prohibits behavioural tracking of, and targeted advertising to, children. Accuracy and privacy are the two main concerns Meta should anticipate when it rolls out the classifier.
Meta emphasises transparency to build user trust, and customisable parental controls empower families to manage teens' online experiences. While this initiative reflects Meta's commitment to creating a safer, regulated digital space for young users worldwide, the company must also properly align its policies with regional policy and legal standards. Meta's proposed AI Age Classifier aims to protect teens from adult content, reassure parents by allowing them to curate acceptable content, and enhance platform integrity by ensuring a safer environment for teen users on Instagram.
Conclusion
Meta's AI Age Classifier, while promising to enhance teen safety by placing restrictions and parental controls on accounts categorised as 'teen accounts', must also align properly with global regulations such as the GDPR and, in India, the DPDP Act. The tool offers reassurance to parents and aims to foster a safer social media environment for teens. To support accurate age estimation and transparency, policy should focus on refining AI methods to minimise errors and on ensuring clear disclosures about data handling. Collaborative international standards are essential as privacy laws evolve. Meta's initiative is intended to prioritise youth protection and build public trust in AI-driven moderation across social platforms, but it must also safeguard users' online privacy while deploying these advanced technical measures.
References
- https://familycenter.meta.com/in/our-products/instagram/
- https://www.indiatoday.in/technology/news/story/instagram-will-now-take-help-of-ai-to-check-if-kids-are-lying-about-their-age-on-app-2628464-2024-11-05
- https://www.bloomberg.com/news/articles/2024-11-04/instagram-plans-to-use-ai-to-catch-teens-lying-about-age
- https://tech.facebook.com/artificial-intelligence/2022/6/adult-classifier/
- https://indianexpress.com/article/technology/artificial-intelligence/too-young-to-use-instagram-metas-ai-classifier-could-help-catch-teens-lying-about-their-age-9658555/

Introduction
AI is transforming the way work is done and redefining the nature of jobs over the next decade. For India, the question is not just which duties machines will take over, but how millions of employees will move to other sectors, which skills will become more sought-after, and how policy will have to change in response. This article draws on recent labour data from India's Periodic Labour Force Survey (PLFS, 2023-24), discusses vulnerability to disruption by location and social group, and recommends viable actions to minimise risks and maximise economic benefits.
India’s Labour Market and Its Automation Readiness
According to India's Periodic Labour Force Survey (PLFS), the labour market is changing and growing. Labour force participation improved to 60.1 per cent in 2023-24 from 57.9 per cent the year before, and the worker population ratio also improved, signifying increased employment uptake in both rural and urban geographies (PLFS, 2023-24). Female participation has also risen. However, a large portion of the job market remains low-wage and informal, with most jobs being routine and therefore highly vulnerable to automation. The statistics point to a two-tiered reality in the Indian labour market: more people are working, but the structure of that work remains weak.
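For readers unfamiliar with the two headline ratios, the labour force participation rate (LFPR) is the share of the working-age population that is either working or seeking work, while the worker population ratio (WPR) is the share that is actually employed. A small worked sketch in Python, using illustrative numbers rather than actual PLFS aggregates:

```python
def lfpr(employed: float, unemployed_seeking: float, working_age_pop: float) -> float:
    """Labour force participation rate: (employed + job-seekers) as a share of the working-age population."""
    return 100 * (employed + unemployed_seeking) / working_age_pop

def wpr(employed: float, working_age_pop: float) -> float:
    """Worker population ratio: employed persons as a share of the working-age population."""
    return 100 * employed / working_age_pop

# Illustrative figures in millions, not actual PLFS estimates
print(f"LFPR: {lfpr(560, 20, 965):.1f}%")   # ~60.1%
print(f"WPR:  {wpr(560, 965):.1f}%")
```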
AI-Driven Automation’s Impact on Tasks and Emerging Opportunities
AI-driven automation, for the most part, affects the task components of jobs rather than wiping out whole jobs. The most automatable tasks are routine and manual, and more recent developments in AI have extended to non-routine cognitive tasks such as document review, customer query handling, basic coding and first-level decision-making. Global studies point to two concurrent findings. First, a share of existing tasks will be automated or expedited. Second, entirely new tasks and roles will emerge around data annotation, the operation of AI systems, prompt engineering, algorithmic supervision and AI compliance (World Bank, 2025; McKinsey, 2017).
In India, this change will be skewed by sector. Manufacturing, back-office IT services, retail and parts of financial services will see the highest rates of disruption, given their concentration of routine processes and the ease of technology adoption. In comparison, healthcare, education, high-tech manufacturing and AI safety auditing are positioned to create new skilled jobs. NITI Aayog estimates large GDP gains from the adoption of AI but emphasises that India must invest simultaneously in job creation and reskilling to realise those gains (NITI Aayog, 2025).
Groups with Highest Vulnerability in the Transition to Automation
The PLFS emphasises that a large portion of the Indian population lacks formal employment, has minimal social protection, and has little access to formal training. The risk of displacement is likely to be greatest for informal employees, who make up almost 90% of India's labour force and carry out low-skilled, repetitive jobs in the manufacturing and retail industries (PLFS, 2023-24). Women and young people in low-level service jobs also face greater transition pressure unless reskilling and placement efforts are tailored to them. Meanwhile, major cities and urban centres are likely to host most of the new skilled opportunities, deepening the geographic and social divide.
The Skills and Supply Challenge
While India's education and research ecosystem is expanding, there remain significant gaps in preparing the workforce for AI-driven change. Given the vulnerabilities highlighted earlier, AI-focused reskilling must be a priority to equip workers with practical skills that meet industry needs. Short modular programs in areas such as cloud technologies, AI operations, data annotation, human-AI interaction, and cybersecurity can provide workers with employable skills. Particular attention should be given to routine-intensive sectors like manufacturing, retail, and back-office services, as well as to regions with high informal employment or lower access to formal training. Public-private partnerships and localised training initiatives can help ensure that reskilling translates into concrete job opportunities rather than purely theoretical knowledge (NITI Aayog, 2025).
The Way Forward
To facilitate the transition, policy should focus on three interconnected goals: safeguarding the vulnerable, developing competencies at scale, and directing innovation so that its benefits are widely shared.
- Protect the vulnerable through social buffers. Provide informal workers with social protection in the form of portable benefits, temporary income insurance based on reskilling, and earned training leave. While the new labour codes provide essential protections such as unemployment allowances and minimum wage standards, they could be strengthened by incorporating explicit provisions for reskilling. This would better support informal workers during job transitions and enhance workforce adaptability.
- Build skills at scale. Short modular courses on cloud computing, cybersecurity, data annotation, AI operations, and human-AI interaction should be planned through collaboration between public and private training providers. Special preference should be given to industry-recognised certifications and apprenticeship-based placements, and these apprenticeships should be made accessible in multiple languages to ensure inclusivity. Existing government initiatives, such as NASSCOM's Future Skills Prime, need better outreach and marketing to reach the workforce effectively.
- Strengthen local labour market intermediaries. Close the gap between local labour demand and supply by enhancing placement services, offering government-subsidised internship programmes for displaced employees, and encouraging firms to hire and train locally.
- Invest in AI literacy, AI ethics, and basic education. Democratise access to research and learning by introducing AI literacy in schools, increasing STEM seats in universities, and creating regional AI labs (NITI Aayog, 2025).
- Encourage AI adoption that creates jobs rather than replaces them. Fiscal and regulatory incentives should prioritise AI tools that augment worker productivity in routine roles instead of eliminating positions. Public procurement can support firms that demonstrate responsible and inclusive deployment of AI, ensuring technology benefits both business and workforce.
- Supervise and oversee the transition. Use PLFS and real-time administrative data to monitor shrinking and expanding occupations. High-frequency labour market dashboards would allow targeted interventions in the regions where displacement is accelerating; a minimal sketch of one such monitoring metric follows this list.
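As referenced in the final point above, a dashboard metric for shrinking occupations can be sketched in a few lines of pandas. The occupation names and employment figures below are illustrative placeholders, not actual PLFS estimates.

```python
import pandas as pd

# Illustrative employment estimates (millions) by occupation for two survey rounds, not actual PLFS data
data = pd.DataFrame({
    "occupation": ["data entry", "retail sales", "nursing", "software dev"],
    "employed_2023": [4.0, 12.0, 2.5, 5.0],
    "employed_2024": [3.4, 11.5, 2.8, 5.6],
})

data["yoy_change_pct"] = 100 * (data["employed_2024"] - data["employed_2023"]) / data["employed_2023"]
shrinking = data[data["yoy_change_pct"] < -5].sort_values("yoy_change_pct")

# Occupations contracting faster than 5% a year would be flagged for targeted reskilling support
print(shrinking[["occupation", "yoy_change_pct"]])
```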
Conclusion
The integration of AI will significantly shape the future of the Indian workforce, but policy will determine its effect on the labour market. The PLFS indicates rising employment alongside the structural weakness of informal and routine work. Evidence from the Indian market and international research shows that the right combination of social protection, skills building and responsible technology deployment can turn disruption into a path of upward mobility. The window for action is narrow. Whether India realises the productivity and GDP gains projected by national research will depend on the investments made now in labour market infrastructure, and on ensuring that those investments translate into captured gains and a fair, inclusive transition for workers.
References
- Annual Report Periodic Labour Force Survey (PLFS) JULY 2022 - JUNE 2023.
- Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific, World Bank.
- Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages, McKinsey Global Institute
- Roadmap for Job Creation in the AI Economy, NITI Aayog
- India central bank chief warns of financial stability risks from growing use of AI, Reuters
- AI Cyber Attacks Statistics 2025, SQ Magazine.