#FactCheck: A viral claim suggests that turning on Advanced Chat Privacy stops Meta AI from reading WhatsApp chats.
Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which states that all personal and group chats remain protected by end-to-end (E2E) encryption, accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @Meta AI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI access. The viral claim is therefore misleading and factually incorrect, and serves only to create unnecessary fear among users.
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on keyframes of the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features designed to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @Meta AI. All personal and group chats are secured with end-to-end encryption, so only the sender and receiver can read them. The “Advanced Chat Privacy” setting stops chats from being shared outside WhatsApp, for example by blocking exports and auto-downloads, but it has no bearing on Meta AI, which is already unable to read chats. The viral claim is therefore false and intended to confuse people.


Conclusion:
The claim that Meta AI is reading WhatsApp group chats and that enabling the "Advanced Chat Privacy" setting can prevent this is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and all chats remain protected by end-to-end encryption, ensuring privacy. The "Advanced Chat Privacy" setting is unrelated to Meta AI access, which is already restricted by default.
- Claim: A viral social media video claims that WhatsApp group chats are being read by Meta AI due to current settings, and that enabling the "Advanced Chat Privacy" setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A viral video circulating on social media inaccurately suggests that it shows Israel moving nuclear weapons in preparation for an assault on Iran. Detailed research has established that it actually shows a SpaceX Starship rocket (Starship 36) being towed for a pre-planned test in Texas, USA, and the footage provides no evidence of any Israeli action or nuclear missile.

Claim:
Multiple social media posts shared a video clip of a large, missile-like object being towed by a very large vehicle, claiming it showed Israel preparing a nuclear attack on Iran.
The caption of the video read: "Israel is going to launch a nuclear attack on Iran! #Israel”. The viral post received a great deal of engagement, helping to spread misinformation and unfounded fear about the escalating conflicts in the Middle East.

Fact check:
A reverse image search using keyframes of the viral footage led us to a Facebook post dated June 16, 2025.

A YouTube livestream from NASASpaceflight is dated June 15, 2025. Both sources clearly identify the object as SpaceX Starship 36. The rocket was being towed at SpaceX's Texas facility ahead of a static fire test, as part of preparations for the tenth test flight. The video shows no military ordnance, no military personnel, and no markings suggesting an Israeli nuclear operation against Iran.
Further support for our conclusion came from several SPACE.com articles, which reported that the Starship exploded shortly thereafter during testing.



Also, no reputable media outlet or defence agency reported any Israeli nuclear mobilization. The resemblance between a large rocket and a missile likely added to the confusion, but the footage's context and upload location have no connection to the State of Israel or Iran.

Conclusion:
The viral video alleging that Israel is preparing to launch a nuclear attack on Iran is false and misleading. The footage was shot in Texas and shows the civilian transport of SpaceX’s Starship 36. This case highlights how easily unrelated videos can be repurposed to create panic and spread misinformation. Before sharing such claims, verify them using trusted websites and tools.
- Claim: A viral video shows Israel preparing to launch a nuclear attack on Iran
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
A message has recently circulated on WhatsApp alleging that voice and video calls made through the app will be recorded, that devices will be linked to the Ministry of Electronics and Information Technology’s system, and that WhatsApp will from now on record chat activity and forward the details to the government. This anti-government hoax has been widely shared on social media.
Message claims
- The fake WhatsApp message claims that an 11-point new communication guideline has been established and that voice and video calls will be recorded and saved. It goes on to say that WhatsApp devices will be linked to the Ministry’s system and that Facebook, Twitter, Instagram, and all other social media platforms will be monitored in the future.
- The fake WhatsApp message further advises individuals not to transmit ‘any nasty post or video against the government or the Prime Minister regarding politics or the current situation’. The bogus message goes on to say that it is a “crime” to write or transmit a negative message on any political or religious subject and that doing so could result in “arrest without a warrant.”
- The false message claims that any message in a WhatsApp group showing three blue ticks has been noted by the government. It also warns group members that one blue tick and two red ticks mean the government is checking their information, and that three red ticks mean the government has begun proceedings against the user, who will shortly receive a court summons.
WhatsApp does not record voice and video calls
A message recently shared on social media claims that WhatsApp records users’ voice and video calls. According to the government, this news is fake: WhatsApp cannot record voice or video calls. Only third-party apps, which some users install themselves, can record such calls.
Third-party apps used for recording voice and video calls
- App Call recorder
- Call recorder- Cube ACR
- Video Call Screen recorder for WhatsApp FB
- AZ Screen Recorder
- Video Call Recorder for WhatsApp
Case Study
In 2022, a fake message spread on social media suggesting that the government might monitor WhatsApp conversations and act against users. According to this message, a new WhatsApp policy had been released under which every message regarded as suspicious would show three blue ticks, indicating that the government had taken note of it. The same fake news is circulating again now.
WhatsApp Privacy policies against recording voice and video chats
WhatsApp’s privacy policies state that voice calls, video calls, and chats cannot be intercepted or read by WhatsApp because of end-to-end encryption. End-to-end encryption ensures that communication between two people stays private and secure: only the sender and recipient can access it.
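WhatsApp’s real end-to-end encryption is built on the Signal protocol, but the core idea can be shown with a deliberately simplified sketch: only parties holding the shared key can recover the plaintext, while anyone relaying the ciphertext (including the service itself) sees only unreadable bytes. The XOR cipher and variable names below are illustrative assumptions, not WhatsApp’s actual scheme.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte;
    # the same operation both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"See you at 6pm"
# In a real E2E system the key is negotiated between the two devices;
# here we simply generate one known only to sender and recipient.
shared_key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, shared_key)   # what a relay server would see
decrypted = xor_cipher(ciphertext, shared_key) # only key holders can do this
```

The point of the sketch is that the relay only ever handles `ciphertext`; without `shared_key`, the original message cannot be reconstructed.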
WhatsApp Brand New Features
- Chat lock feature: WhatsApp’s Chat Lock lets you store chats in a folder that can only be opened with your device’s password or biometrics, such as a fingerprint. When you lock a chat, its contents are automatically hidden in notifications. WhatsApp’s motive behind the chat lock feature is to find new ways to keep messages private and safe, giving the most private conversations an extra degree of security.
- Edit chats feature: WhatsApp now lets users edit messages up to 15 minutes after they have been sent. With this feature, users can correct a message or add extra points they want to include.
Conclusion
The spread of misinformation and fake news is a significant problem in the age of the internet, with serious consequences for individuals, communities, and even nations. The government has confirmed that this news is fake: neither WhatsApp nor the government can access WhatsApp chats, voice calls, or video calls, because end-to-end encryption protects users’ communications. The government last year blocked 60 social media platforms for spreading anti-India news, and a fact-check unit now identifies misleading and false online content.

Introduction
Deepfake technology, which combines the words "deep learning" and "fake," uses highly developed artificial intelligence—specifically, generative adversarial networks (GANs)—to produce computer-generated content that is remarkably lifelike, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the transmission of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds, and the misuse of advanced technologies such as AI, deepfakes, and voice clones has given rise to new cyber threats.
India Among the Top Destinations for Deepfake Attacks
According to the 2023 Identity Fraud Report from Sumsub, a well-known UK-headquartered digital identity verification company, India, Bangladesh, and Pakistan have become important participants in the Asia-Pacific identity fraud scene, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the use of deepfake technology. The report finds that deepfake technology features in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud becomes a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and are now ingrained in the very fabric of our digital civilisation: the attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By mimicking and learning from gestures, speech patterns, and facial expressions, these algorithms can extract valuable information from the data. By using sophisticated approaches, generative models create material that mixes seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry. Sophisticated detection techniques are becoming more and more necessary to separate real content from modified content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a discriminator and a generator, they participate in an ongoing cycle of competition. The discriminator assesses how authentic the generated information is, whereas the generator aims to create fake material, such as realistic voice patterns or facial expressions. The process of creating and evaluating continuously leads to a persistent improvement in Deepfake's effectiveness over time. The whole deepfake production process gets better over time as the discriminator adjusts to become more perceptive and the generator adapts to produce more and more convincing content.
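The generator–discriminator competition described above can be sketched with a deliberately tiny example: the "real" data are samples from a fixed Gaussian, the "generator" is a single learnable shift applied to noise, and the "discriminator" is a two-parameter logistic classifier. Every name, learning rate, and simplification here is an illustrative assumption; a real GAN uses deep networks on images or audio, not one-dimensional numbers.

```python
import math
import random

random.seed(42)

REAL_MEAN = 4.0        # mean of the "real" data the generator must imitate
theta = 0.0            # generator parameter: g(z) = z + theta
w, b = 0.0, 0.0        # discriminator parameters: D(x) = sigmoid(w*x + b)
LR_D, LR_G = 0.05, 0.05

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

history = []
for _ in range(4000):
    real = random.gauss(REAL_MEAN, 1.0)    # a genuine sample
    fake = random.gauss(0.0, 1.0) + theta  # a generated sample

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient descent on the standard logistic loss).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= LR_D * ((d_real - 1.0) * real + d_fake * fake)
    b -= LR_D * ((d_real - 1.0) + d_fake)

    # Generator step: nudge theta so the discriminator scores fakes as real.
    d_fake = sigmoid(w * fake + b)
    theta += LR_G * (1.0 - d_fake) * w
    history.append(theta)

# Averaged over the final steps, the generator's shift should hover
# near REAL_MEAN as the two players reach a rough equilibrium.
avg_theta = sum(history[-500:]) / 500
```

This captures the "ongoing cycle of competition" in miniature: whenever the generated distribution drifts away from the real one, the discriminator's weight `w` regains discriminating power, and the generator is pushed back toward the real data.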
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries, and as the technology develops, immediate action is required to manage its effects responsibly and promote ethical use, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they can sow instability and make it difficult for the public to understand the true state of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. As it becomes harder and harder to tell fake content from authentic content, it becomes simpler for attackers to deceive people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, and Priyanka Chopra is the most recent victim of this unsettling trend. Her deepfake takes a different approach from other instances involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance intact but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The deceptive video shows Priyanka promoting a product and discussing her annual income, highlighting the worrying evolution of deepfake technology and its potential effect on prominent personalities.
Actions Considered by Authorities
A PIL was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be required to label information produced through AI as such, and be prevented from producing illegal content. A division bench highlighted the complexity of the problem and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that new laws and guidelines would be implemented by the government to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon, we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools such as reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Important steps for improving resilience against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to the constantly changing landscape, we can manage the problems presented by deepfake technology effectively and mindfully.
Conclusion
Advanced artificial-intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Its misuse presents major difficulties, such as identity theft and the propagation of misleading information, as demonstrated by examples in India such as the recent deepfake video involving Priyanka Chopra. Countering this danger requires developing critical thinking skills, using detection strategies such as analysing audio quality and facial expressions, and keeping up with current trends. A thorough strategy that combines fact-checking, preventative tactics, and awareness-raising with strong security policies, cutting-edge detection technologies, ethical AI development, and open cooperation is necessary to protect against the negative effects of deepfake technology and to create a genuinely cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/