# FactCheck: Viral Video Claims Pakistan Shot Down an Indian Air Force MiG-29 Fighter Jet
Executive Summary
Recent claims circulating on social media allege that an Indian Air Force MiG-29 fighter jet was shot down by Pakistani forces during "Operation Sindoor." These assertions have been refuted: no credible evidence supports the downing of an Indian aircraft as described, the Indian Air Force has not confirmed any such event, and the claim appears to be misinformation.

Claim
A rumor circulating on social media suggests that an Indian Air Force MiG-29 fighter jet was shot down by the Pakistani Air Force during "Operation Sindoor." The claim is accompanied by footage purported to show the wreckage of the aircraft.

Fact Check
Social media posts have falsely claimed that the Pakistani Air Force shot down an Indian Air Force MiG-29 during "Operation Sindoor." This claim is untrue. The footage being circulated is not related to any recent IAF operations and has previously been used in unrelated contexts. The content being shared is misleading and does not reflect any verified incident involving the Indian Air Force.

We extracted key frames from the video and ran reverse image searches, which traced the footage to its original source: it was first published in 2024 and appears in news coverage by The Hindu and the Times of India.
An Indian Air Force (IAF) MiG-29 fighter jet on a routine training mission crashed near Barmer, Rajasthan, on the evening of Monday, September 2, 2024; the pilot ejected safely and escaped unscathed. The video therefore shows a training accident from 2024, not an aircraft downed by hostile action, and the viral claim is false.

Conclusion
The claims regarding the downing of an Indian Air Force MiG-29 during "Operation Sindoor" are unfounded and lack any credible verification. The footage being circulated is outdated and unrelated to current IAF operations. There has been no official confirmation of such an incident, and the narrative appears to be misleading. The public is advised to rely on verified sources for accurate information regarding defence matters.
- Claim: Pakistan Shot Down an Indian Fighter Jet, MiG-29
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. With AI-generated voice clones and videos proliferating across the Internet and social media, it has become important to differentiate genuine material from the manipulated or synthetic content widely shared on these platforms. Sophisticated AI algorithms can manipulate or generate synthetic multimedia such as audio, video and images, making it increasingly difficult to tell genuine, altered and fake content apart. McAfee Corp., a global leader in online protection, has launched an AI-powered deepfake audio detection technology under Project "Mockingbird", intended to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation. McAfee announced Project Mockingbird at the Consumer Electronics Show (CES) 2024.
What is voice cloning?
Voice cloning uses deepfake technology to produce synthetic audio that closely resembles a specific person's real voice but is, in fact, artificially generated.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for everything from smart devices to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning to commit cyber fraud. AI can manipulate an individual's lip movements so they appear to be saying something they never said; it can enable identity fraud by impersonating someone during remote verification, for example with a bank; and it makes traditional hacking faster and more convenient. This misuse of advanced technologies has increased both the speed and the volume of cyber attacks in recent times.
Technical Analysis
To combat fraud based on audio cloning, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere. The model combines several approaches:
- Context-Based Recognition: The model assesses audio components within the overall setting in which they appear. Evaluating this surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from a speaker's typical behaviour. Examining speech patterns, tempo and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human speech. By comparing a sample against an extensive library of legitimate human speech features, the technology differentiates real voices from AI-synthesised ones (a toy sketch of this idea appears after this list).
- Accuracy Outcomes: McAfee Labs reports an approximately ninety per cent success rate for its deepfake voice recognition, based on the combined behavioural, contextual and classification approach. By examining audio in its larger video context and analysing speech characteristics such as intonation, rhythm and pronunciation, the system identifies discrepancies that may signal AI-produced audio, while classification models contribute by matching audio against known human speech characteristics. This layered strategy offers a strong barrier against the growing danger of deepfakes.
- Application Instances: The technology protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
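McAfee has not published the internals of Project Mockingbird, but the classification component described above can be illustrated with a minimal, hypothetical sketch: summarise each audio clip with spectral features such as MFCCs and train a conventional classifier on clips labelled genuine or synthetic. The file names and labels below are placeholders, and the librosa/scikit-learn pipeline is an assumption chosen for illustration, not McAfee's implementation.

```python
# Minimal, hypothetical sketch of a classification-based synthetic-voice
# detector. Not McAfee's model: file names and labels are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(path: str) -> np.ndarray:
    """Summarise a clip as the mean and std of its MFCC coefficients."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled corpus: 1 = genuine speech, 0 = AI-generated.
paths = ["real_001.wav", "real_002.wav", "fake_001.wav", "fake_002.wav"]
labels = np.array([1, 1, 0, 0])

X = np.stack([extract_features(p) for p in paths])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

A production detector of Mockingbird's reported accuracy would layer contextual and behavioural signals (the surrounding video, speech tempo, pronunciation) on top of a feature classifier like this one.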
Conclusion
It is important to foster the ethical and responsible consumption of technology, and awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about AI's appropriate role and boundaries. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to guard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724

Introduction
Deepfakes are a form of artificial intelligence (AI) technology that employs deep learning to generate realistic-looking but phoney videos or images. Algorithms analyse large volumes of data to discover patterns and produce convincing, realistic results; deepfakes use this capability to modify videos or photos so that they appear to show events or persons that never happened or existed. The procedure begins with gathering large volumes of visual and audio data about the target individual, usually from publicly accessible sources such as social media posts or public appearances. This data is then used to train a deep-learning model to resemble the target.
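Although the article does not name a specific implementation, the training procedure described above is commonly realised as a shared encoder with one decoder per identity: both people's faces pass through the same encoder, and swapping decoders at inference time produces the face swap. The PyTorch sketch below is a deliberately simplified illustration of that idea; the layer sizes, image resolution and dummy tensors are arbitrary assumptions, and real systems are far more elaborate.

```python
# Simplified sketch of the shared-encoder / per-identity-decoder
# architecture behind many face-swap deepfakes. Illustrative only:
# layer sizes are arbitrary and dummy tensors stand in for real data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent space (dummy batches stand in for real data).
faces_a = torch.rand(8, 3, 64, 64)
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)

# Inference-time "swap": encode person B, decode with person A's decoder.
faces_b = torch.rand(1, 3, 64, 64)
swapped = decoder_a(encoder(faces_b))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The design point is that the shared encoder is pushed to learn identity-agnostic facial structure (pose, expression, lighting), while each decoder learns to render one specific face, which is why decoding person B's latent code with person A's decoder yields person A's face wearing person B's expression.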
Recent Cases of Deepfakes
In an unusual turn of events, a man from northern China fell victim to sophisticated deepfake technology. The incident has heightened concerns about the use of artificial intelligence (AI) tools to aid financial crimes, putting authorities and the general public on high alert.
During a video conversation, a scammer successfully impersonated the victim’s close friend using AI-powered face-swapping technology. The scammer duped the unwary victim into transferring 4.3 million yuan (nearly Rs 5 crore). The fraud occurred in Baotou, China.
AI ‘deep fakes’ of innocent images fuel spike in sextortion scams
Artificial intelligence-generated "deepfakes" are fuelling sextortion scams like dry brush in a raging wildfire. According to the FBI, nationally reported sextortion cases rose by 322% between February 2022 and February 2023, with a notable spike since April driven by AI-doctored photographs. The FBI warns that innocent photographs or videos posted on social media or sent in private messages can be distorted into sexually explicit, AI-generated visuals that are "true-to-life" and practically impossible to distinguish from real images. Predators, often located in other countries, use these doctored images against juveniles to extort money from them or their families, or to coerce them into providing actual sexually graphic material.
Deepfake Applications
- Lensa AI.
- Deepfakes Web.
- Reface.
- MyHeritage.
- DeepFaceLab.
- Deep Art.
- Face Swap Live.
- FaceApp.
Deepfake examples
There are numerous high-profile deepfake examples. One is a video released by actor and director Jordan Peele, who combined real footage of Barack Obama with his own impersonation of Obama to deliver a warning about deepfake videos.
Another video, which circulated most notably on Instagram, shows Facebook CEO Mark Zuckerberg discussing how Facebook 'controls the future' with stolen user data. The original footage came from a speech he delivered on Russian election meddling; only 21 seconds of that address were used to create the fake. However, the vocal impersonation fell short of Jordan Peele's Obama and gave the fabrication away.
The dark side of AI-Generated Misinformation
- AI-generated misinformation distorts the truth, making it difficult to distinguish fact from fiction.
- People can unmask AI content by looking for discrepancies and for a lack of human touch.
- AI content detection technologies can detect and neutralise disinformation, preventing it from spreading.
Safeguards against Deepfakes
Technology is not the only way to guard against deepfake videos: good fundamental security practices are remarkably effective against them. For example, incorporating automatic checks into any mechanism for disbursing payments could have prevented numerous deepfake and related frauds. You might also:
- Keep regular backups, which safeguard your data from ransomware and allow you to restore damaged files.
- Use different, strong passwords for each account, so that a compromise of one network or service does not compromise the others. You do not want someone who gets into your Facebook account to be able to access your other accounts as well.
- Protect your home network, laptop and smartphone with a good security package such as Kaspersky Total Security. Such a bundle includes anti-virus software, a VPN to protect against compromised Wi-Fi connections, and webcam security.
What is the future of Deepfakes?
Deepfake technology is constantly evolving. Two years ago, deepfake videos were easy to spot because of their clumsy movement and the fact that the simulated figure never appeared to blink; the most recent generation of fake videos has evolved past these tells.
There are currently approximately 15,000 deepfake videos available online. Some are just for fun, while others attempt to sway your opinion. But now that it takes only a day or two to make a new deepfake, that number could rise rapidly.
Conclusion
The distinction between authentic and fake content will undoubtedly become harder to draw as technology advances. As a result, experts feel it should not be up to individuals to spot deepfakes in the wild. "The responsibility should be on the developers, toolmakers, and tech companies to create invisible watermarks and signal what the source of that image is," they stated. Several startups are also working on approaches for detecting deepfakes.

Introduction
The online lottery scam involves a fraudster reaching out through email, phone or SMS to inform you that you have won a significant amount of money in a lottery, and instructing you to contact an 'agent' at a specific phone number or email address that actually belongs to the fraudster. Once contacted, the agent demands upfront processing charges before the lottery prize can be claimed. Genuine prizes, by contrast, come at no cost. Such deceptive 'offers' also often carry phishing attacks, tricking users into clicking on malicious links.
Modus Operandi
The common lottery fraud starts with a message stating that the recipient has won a large lottery prize. These messages are frequently crafted to imitate official correspondence from reputable institutions, sweepstakes or foreign governments. The scammers ask the recipient to provide personal information such as name, address and banking details, or to make a payment towards taxes, processing fees or legal procedures. After the victim sends the money or discloses personal details, the scammers may vanish, or may persist in requesting further payments on new pretexts.
Tactics and Psychological Manipulation
These fraudulent schemes rely heavily on psychological manipulation. Fraudsters create a false sense of urgency, persuading victims that they must act quickly to claim the lottery prize, and they prey on people's hopes for a better life by convincing them that this unanticipated windfall can change their destiny. Many people fall prey because the desire to get rich blinds them to the warning signs. Fraudsters also use convincing language and fictitious documentation that appears authentic, so users need to be extra cautious and learn to recognise the early signs of such online fraud.
Festive Season and Uptick in Deceptive Online Scams
As the festive season begins, there is a surge in deceptive online scams targeting innocent internet users. Examples include free Navratri garba passes, quiz participation opportunities, coupons offering freebies, fake offers of cheap jewellery, counterfeit product sales, festival lotteries, fake lucky draws and charity appeals. Most of these scams are designed to lure victims for financial gain.
In 2023, CyberPeace released a research report on Navratri festivity scams, highlighting the 'Tanishq iPhone 15 Gift' scam, in which fraudsters posed as Tanishq, a well-known jewellery brand, and offered a fake iPhone 15 as a Navratri gift; victims were lured into clicking on malicious links. CyberPeace issued a detailed advisory within the report urging the public to exercise vigilance, scrutinise the legitimacy of such offers, and take precautionary measures to avoid falling prey to such deceptive cyber schemes.
Preventive Measures for Lottery Scams
To avoid lottery scams, users should ignore messages or calls about lottery wins they never entered, verify the source of any lottery, keep sensitive personal details confidential, approach unexpected windfalls with scepticism, refuse upfront payment requests, and learn to recognise scammers' manipulative tactics. Verifying the source and asking probing questions is crucial, and users are advised not to click on unsolicited lottery-prize links received in emails or messages, as these can be phishing attempts. These practices help protect against scammers who pressure victims into acting quickly, which is what leads many to fall prey to such scams.
Must-Know Tips to Prevent Lottery Scams
- Steer clear of any communication offering lotteries or giveaways; offers that sound too good to be true usually are.
- Refrain from transferring money to unknown individuals or entities without verifying their identity and credibility.
- If you have already given fraudsters your bank account details, alert your bank immediately.
- Report any such incident on the National Cyber Crime Reporting Portal at cybercrime.gov.in or via the Cyber Crime Helpline at 1930.