#FactCheck: AI-generated video falsely claims Pakistan launched a cross-border airstrike on India's Udhampur Airbase
Executive Summary:
A social media video claims that India's Udhampur Air Force Station was destroyed by Pakistan's JF-17 fighter jets. Official sources confirm that the Udhampur base remains fully operational, and our research shows the video was produced using artificial intelligence. The incident highlights the growing problem of AI-driven disinformation in the digital age.

Claim:
A viral video alleges that Pakistan's JF-17 fighter jets successfully destroyed the Udhampur Air Force Base in India. The footage shows aircraft engulfed in flames, accompanied by narration claiming the base's destruction during recent cross-border hostilities.

Fact Check:
A recent viral video claiming that the Udhampur Air Force Station was destroyed by Pakistani JF-17 fighter jets has been shown to be completely untrue. A thorough analysis using AI detection tools such as Hive Moderation conclusively identified the video's audio and visuals as AI-generated. Hive Moderation found synthetic elements in the footage, confirming that the images were manipulated to deceive viewers. The Press Information Bureau (PIB) of India has further debunked the claim, clearly stating that the Udhampur Airbase remains fully operational and has not been the scene of any such attack.
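Detection tools of this kind typically return per-class confidence scores that an analyst then thresholds. A minimal sketch of that triage step, using a mocked response: the class names, schema, and threshold below are hypothetical illustrations, not Hive Moderation's actual API (which requires an account and API key).

```python
# Hypothetical class labels a synthetic-media detector might emit
AI_CLASSES = {"ai_generated", "deepfake"}

def looks_synthetic(response: dict, threshold: float = 0.9) -> bool:
    """Flag media as likely AI-generated if any synthetic-content class
    in the (hypothetical) detector response scores above the threshold."""
    for cls in response.get("classes", []):
        if cls["name"] in AI_CLASSES and cls["score"] >= threshold:
            return True
    return False

# Mocked response resembling what a detector might return for this video
sample = {"classes": [{"name": "ai_generated", "score": 0.97},
                      {"name": "not_ai_generated", "score": 0.03}]}
print(looks_synthetic(sample))  # True for this mocked response
```

In practice, a single high score is a lead, not proof; fact-checkers corroborate it with official statements, as done here with the PIB.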

Our analysis of recent disinformation campaigns, and the deliberate misattribution of this video to a military attack, underscores the growing concern that AI-generated content is being weaponized to spread misinformation and incite panic.
Conclusion:
The claim that the Udhampur Air Force Station was destroyed by Pakistan's JF-17 fighter jets is untrue. It rests on an AI-generated video that misrepresents unrelated footage. According to official sources, the Udhampur base remains intact and fully functional. This incident underscores how crucial it is to verify information against reliable sources, particularly during periods of elevated geopolitical tension.
- Claim: Recent video footage shows destruction caused by Pakistani jets at the Udhampur Airbase.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Online dating platforms have become a common way for individuals to connect in today’s digital age. For many in the LGBTQ+ community, especially in environments where offline meeting spaces are limited, these platforms offer a way to find companionship and support. However, alongside these opportunities come serious risks. Users are increasingly being targeted by cybercrimes such as blackmail, sextortion, identity theft, and online harassment. These incidents often go unreported due to stigma and concerns about privacy. The impact of such crimes can be both emotional and financial, highlighting the need for greater awareness and digital safety.
Cybercrime On LGBTQ+ Dating Apps: A Threat Landscape
According to the NCRB 2022 report, cybercrimes have increased by 24.4%, but queer community-specific data is unfortunately not available. Cybercrimes that target LGBTQ+ users are highly organised and predatory. In several Indian cities, gangs actively monitor dating platforms for potential victims, especially young queer people and those who are discreet about their identity. Once contact is established, perpetrators follow a standard operating procedure: building false trust, coercing private exchanges, and then gradually escalating to blackmail and financial exploitation. Many queer victims are blackmailed with threats of exposure to families or workplaces, often by criminals posing as police and demanding bribes. Fear of stigma and insensitive policing discourages reporting, and cybercriminal gangs exploit these gaps on dating apps. Despite some arrests, under-reporting persists, and activists call for stronger platform safety measures.
Types of Cyber Crimes against Queer Community on Dating Apps
- Romance scam or “Lonely hearts scam”: Scammers build trust with false stories (military, doctors, NGO workers) and quickly express strong romantic interest. They later request money, claiming emergencies. They often try to create multiple accounts to avoid profile bans.
- Sugar daddy scam: The fraudster offers money or an allowance in exchange for chatting, sending photos, or other interactions. They usually name a specific amount and insist on uncommon payment gateways. After promising to send you a large sum, they often invent a story such as: "My last sugar baby cheated me, so now you must first send me a small amount to prove you are trustworthy." This is simply a trick to make you send money first.
- Sextortion / Blackmail scam: Scammers record explicit chats or pretend to be underage, then threaten exposure unless you pay. Some target discreet users. Never send explicit content or pay blackmailers.
- Investment Scams: Scammers posing as traders or bankers convince victims to invest in fake opportunities. Some "flip" small amounts to build trust, then disappear with larger sums. Real investors won’t approach you on dating apps. Don’t share financial info or transfer money.
- Pay-Before-You-Meet scam: Scammer demands upfront payment (gift cards, gas money, membership fees) before meeting, then vanishes. Never pay anyone before meeting in person.
- Security app registration scam: Scammers ask you to register on fake "security apps" to steal your info, claiming it ensures your safety. Research apps before registering. Be wary of quick link requests.
- The Verification code scam: Scammers trick you into giving them SMS verification codes, allowing them to hijack your accounts. Never share verification codes with anyone.
- Third-party app links: Mass spam messages with suspicious links that steal info or infect devices. Don’t click suspicious links or “Google me” messages.
- Support message scam: Messages pretending to be from application support, offering prizes or fake shows to lure you to malicious sites.
Platform Accountability & Challenges
The issue of online dating platforms in India is characterised by weak grievance redressal, poor takedown of abusive profiles, and limited moderation practices. Most platforms appoint grievance officers or offer an in-app complaint portal, but complaints are often unanswered or receive only automated and AI-generated responses. This highlights the gap between policy and enforcement on the ground.
Abusive or fake profiles, often used for scams, hate crimes, and outing LGBTQ+ individuals, remain active long after being reported. In India, organised extortion gangs have exploited such profiles to lure, assault, rob, and blackmail queer men. Moderation teams often struggle with backlogs and lack the resources needed to handle even the most serious complaints.
Despite offering privacy settings and restricting profile visibility, moderation practices in India are still weak, leaving large segments of users vulnerable to impersonation, catfishing, and fraud. The concept of pseudonymisation can help protect vulnerable communities, but it is difficult to distinguish authentic users from malicious actors without robust, privacy-respecting verification systems.
Many LGBTQ+ individuals prefer to keep their identities confidential, while others are more open; in either case, the data an individual shares with an online dating platform must be vigilantly protected. The Digital Personal Data Protection Act, 2023, mandates the protection of personal data. Section 8(4) provides: “A Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the rules made thereunder.” Accordingly, digital platforms collecting such data should adopt the necessary technical and organisational measures to comply with data protection laws.
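One concrete "technical measure" in the spirit of Section 8(4), and of the pseudonymisation idea discussed above, is keyed pseudonymisation of user identifiers, so analytics and abuse investigations can work on stable tokens without exposing real identities. A minimal sketch, assuming a server-side secret key (the key, scheme, and token length are illustrative, not mandated by the Act):

```python
import hmac
import hashlib

SECRET_KEY = b"server-side-secret"  # illustrative; keep in a KMS/vault in practice

def pseudonymise(user_id: str) -> str:
    """Derive a stable pseudonym via HMAC-SHA256: the same input always
    maps to the same token, but the real ID cannot be recovered or
    linked without access to the secret key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# Stable for joins and moderation logs, unlinkable without the key
assert pseudonymise("user@example.com") == pseudonymise("user@example.com")
assert pseudonymise("user@example.com") != pseudonymise("other@example.com")
```

Pseudonymisation is only one layer; it complements, rather than replaces, the verification and grievance mechanisms discussed above.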
Recommendations
The Supreme Court has been proactive in this regard, through decisions like Navtej Singh Johar v. Union of India, which decriminalised same-sex relationships; Justice K.S. Puttaswamy (Retd.) v. Union of India and Ors., which recognised the right to privacy as a fundamental right; and, most recently, the 2025 affirmation of the right to digital access. However, more robust legal frameworks are still required to protect LGBTQ+ people online.
There is a need for a dedicated commission or an empowered LGBTQ+ cell. Like the National Commission for Women (NCW), which works to safeguard the rights of women, such a commission would address community-specific issues, including cybercrime, privacy violations, and discrimination on digital platforms. It could serve as an institutional link between victims, digital platforms, the government, and the police. Dating platforms must also enhance their security features and grievance mechanisms to safeguard users.
Best Practices
Scammers use harvested data and rehearsed playbooks to target individuals seeking love, sex, money, or companionship. Avoid financial transactions initiated through the platform, such as signing up for third-party platforms or services. Scammers may also attempt to create accounts in others' names, which can then be used to access dating platforms and harm legitimate users. Be vigilant about sharing sensitive information such as private images, contact details, or addresses, as scammers can use it to threaten you. Stay smart, stay cyber safe.
References
- https://www.hindustantimes.com/htcity/cinema/16yearold-queer-child-pranshu-dies-by-suicide-due-to-bullying-did-we-fail-as-a-society-mental-health-expert-opines-101701172202794.html
- https://www.ijsr.net/archive/v11i6/SR22617213031.pdf
- https://help.grindr.com/hc/en-us/articles/1500009328241-Scam-awareness-guide
- http://meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://mib.gov.in/sites/default/files/2024-02/IT%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20English.pdf

Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India Smt. Nirmala Sitharaman is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month as guaranteed daily income.

Fact Check:
A reverse image search using key frames from the viral video led us to the original YouTube clip of the Finance Minister of India delivering a speech at a webinar on 'Regulatory, Investment and EODB reforms'. Further research found nothing related to the viral investment scheme anywhere in the video.
An AI-generated voice and scripted text were injected into the manipulated video to make it appear as if she had endorsed an investment platform.
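Before running a manual reverse image search, analysts often shortlist candidate frames by comparing perceptual hashes: near-identical frames hash to values with a small Hamming distance even after re-encoding. A minimal difference-hash ("dHash") sketch over toy grayscale frames (the frames and sizes are invented for illustration; real workflows hash resized video frames):

```python
def dhash(gray, hash_w=8):
    """Difference hash: one bit per adjacent-pixel comparison per row.
    `gray` is a 2D list of grayscale values, each row hash_w + 1 wide."""
    bits = 0
    for row in gray:
        for x in range(hash_w):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Toy 2x9 "frames": identical frames give distance 0 (a likely match)
frame_a = [[10, 20, 30, 40, 50, 60, 70, 80, 90],
           [90, 80, 70, 60, 50, 40, 30, 20, 10]]
frame_b = [list(row) for row in frame_a]
print(hamming(dhash(frame_a), dhash(frame_b)))  # 0 -> near-identical frames
```

A small distance suggests the viral frame was lifted from the candidate source clip; a large distance rules it out quickly.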

Deepfakes can appear relatively realistic in their facial movement; on close inspection, however, the video shows mismatched lip-syncing and unusual visual transitions, which support our finding.


Moreover, no legitimate government website or credible news outlet acknowledges any such endorsement. The video is a fabricated piece of misinformation that attempts to scam viewers by leveraging the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
AI has penetrated most industries and telecom is no exception. According to a survey by Nvidia, enhancing customer experiences is the biggest AI opportunity for the telecom industry, with 35% of respondents identifying customer experiences as their key AI success story. Further, the study found nearly 90% of telecom companies use AI, with 48% in the piloting phase and 41% actively deploying AI. Most telecom service providers (53%) agree or strongly agree that adopting AI would provide a competitive advantage. AI in telecom is primed to be the next big thing and Google has not ignored this opportunity. It is reported that Google will soon add “AI Replies” to the phone app’s call screening feature.
How Does The ‘AI Call Screener’ Work?
To address the challenge of answering calls amid busy schedules, Google Pixel smartphones now include AI-powered calling tools that help with call screening, note-taking during important calls, filtering and declining spam, and, most importantly, ending the frustration of being on hold.
In the official Google Phone app, users can respond to a caller through “new AI-powered smart replies”. While “contextual call screen replies” are already part of the app, this new feature allows users to not have to pick up the call themselves.
- With this new feature, Google Assistant will be able to respond to the call with a customised audio response.
- The Google Assistant, responding to the call, will ask the caller’s name and the purpose of the call. If they are calling about an appointment, for instance, Google will show the user suggested responses specific to that call, such as ‘Confirm’ or ‘Cancel appointment’.
Google will build on the call-screening feature by using a “multi-step, multi-turn conversational AI” to suggest replies more appropriate to the nature of the call. Google’s Gemini Nano AI model is set to power this new feature and enable it to handle phone calls and messages even if the phone is locked and respond even when the caller is silent.
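Google has not published how Gemini Nano maps a caller's stated purpose to suggested reply chips; the mechanism described above can still be sketched as a toy rule-based lookup (the keyword rules and reply texts below are invented for illustration, and the real feature is model-driven and multi-turn):

```python
# Hypothetical intent -> suggested-reply rules; illustrative only
RULES = {
    "appointment": ["Confirm", "Cancel appointment"],
    "delivery": ["Leave it at the door", "Call back later"],
}
DEFAULT = ["I can't talk right now", "Please text me the details"]

def suggest_replies(caller_statement: str) -> list:
    """Return canned reply suggestions matching the caller's stated purpose,
    falling back to generic replies when no rule matches."""
    text = caller_statement.lower()
    for keyword, replies in RULES.items():
        if keyword in text:
            return replies
    return DEFAULT

print(suggest_replies("Hi, I'm calling about your dental appointment tomorrow"))
# ['Confirm', 'Cancel appointment']
```

A production system replaces the keyword table with a conversational model, but the contract is the same: caller context in, a short list of tappable replies out.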
Benefits of AI-Powered Call Screening
This AI-powered call screening feature offers multiple benefits:
- The AI feature will enhance user convenience by reducing the disruptions caused by spam calls. This will, in turn, increase productivity.
- It will increase call privacy and security by filtering high-risk calls, thereby protecting users from attempts of fraud and cyber crimes such as phishing.
- The new feature can potentially increase efficiency in business communications by screening for important calls, delegating routine inquiries and optimising customer service.
Key Policy Considerations
Adhering to transparent, ethical, and inclusive policies while anticipating regulatory changes can establish Google as a responsible innovator in AI call management. Some key considerations for AI Call Screener from a policy perspective are:
- The AI call screener will process and transcribe sensitive voice data; its data-handling policies therefore need to be transparent to reassure users of compliance with applicable laws.
- Ethical use and bias mitigation remain open challenges for AI. The underlying algorithms must be designed to avoid bias and to understand language inclusively.
- The screener's handling of call data is further complicated by global and regional privacy regulations such as the GDPR, the DPDP Act, and the CCPA, which require consent to record or transcribe calls and place user rights at the centre of compliance.
Conclusion: A Balanced Approach to AI in Telecommunications
Google’s AI Call Screener offers a glimpse into the future of automated call management, reshaping customer service and telemarketing by streamlining interactions and reducing spam. As this technology evolves, businesses may adopt similar tools, balancing customer engagement with fewer unwanted calls. AI-driven screening will also impact call centres, shifting roles toward complex, human-centred interactions while automation handles routine calls, with potential effects on support and managerial roles as well. Ultimately, as AI call management grows, responsible design and transparency will be essential to ensure a seamless, beneficial experience for all users.
References
- https://resources.nvidia.com/en-us-ai-in-telco/state-of-ai-in-telco-2024-report
- https://store.google.com/intl/en/ideas/articles/pixel-call-assist-phone-screen/
- https://www.thehindu.com/sci-tech/technology/google-working-on-ai-replies-for-call-screening-feature/article68844973.ece
- https://indianexpress.com/article/technology/artificial-intelligence/google-ai-replies-call-screening-9659612/