#FactCheck: An image shows Sunita Williams with Trump and Elon Musk after her return from space.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been assessed as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports confirm such a meeting. The image is most likely a digital fabrication designed to mislead viewers.
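Alongside dedicated detectors such as Hive Moderation, one quick first-pass check anyone can run is to inspect an image's EXIF metadata, since AI generators and social-media re-uploads typically leave none. The sketch below uses the Pillow library; the filename is hypothetical, and absent metadata is only a prompt to investigate further, never proof on its own.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# First-pass check: dump whatever EXIF metadata an image carries.
# AI-generated images and social-media re-uploads frequently carry no
# camera metadata at all, which is a hint (not proof) to verify further.
img = Image.open("viral_image.jpg")  # hypothetical filename
exif = img.getexif()

if not exif:
    print("No EXIF metadata found - common for AI-generated or re-uploaded images")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```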

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) about the watermarked image, stating that it was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital fabrication. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
With the rapid development of technology, AI voice cloning scams are one issue that has recently come to light. Scammers are moving forward with AI, and their methods and plans for deceiving people have altered accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning fraud, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film, produced with artificial intelligence (AI), that passes for the real thing. The words “deep learning” and “fake” are combined to get the name “deepfake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never said or did in audio or visual form; widely circulated impersonations of the American President are among the best-known examples. Deep voice impersonation technology can be used maliciously, such as in deep voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
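To make the “analysing and synthesising” step less abstract, the snippet below sketches the kind of spectral feature extraction (MFCCs) that voice-cloning models and deepfake-audio detectors commonly build on. It is a minimal illustration using the librosa library; the filename is hypothetical, and a real detector would feed such features into a trained classifier rather than just print statistics.

```python
import librosa
import numpy as np

# Illustrative only: extract MFCC features, the spectral representation that
# voice-cloning models and deepfake-audio detectors commonly build on.
# "suspect_call.wav" is a hypothetical file; a real detector would pass these
# features to a trained classifier instead of printing summary statistics.
audio, sr = librosa.load("suspect_call.wav", sr=16000)
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

print(f"Extracted {mfccs.shape[1]} frames of {mfccs.shape[0]} MFCC coefficients")
print(f"Per-coefficient variance: {np.round(mfccs.var(axis=1), 2)}")
```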
What exactly are deepfake voice scams?
Deepfake speech fraud uses artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by utilising a deepfaked voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake speech fraud is increasing in frequency as deepfake technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead users into providing private information, money, or system access. With it they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information.

Deepfake AI voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust. These recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims; every organisation and the general public must be informed of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deepfake voice con artists is the unsolicited phone call, in which they pretend to be bank personnel or government officials.
- Listen closely to the voice
- If anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? If something doesn’t seem right, it may be a deepfake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity in order to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it’s a warning sign that they’re a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is fast expanding and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it may also be used for harm, such as deepfake voice fraud and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions to protect themselves as the technology develops and deepfake schemes become harder to detect and prevent. Ongoing research and the development of efficient techniques to identify and control the associated risks are also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
Overview:
In today’s digital landscape, safeguarding personal data and communications is more crucial than ever. WhatsApp, as one of the world’s leading messaging platforms, consistently enhances its security features to protect user interactions, offering a seamless and private messaging experience.
App Lock: Secure Access with Biometric Authentication
To fortify security at the device level, WhatsApp offers an app lock feature, enabling users to protect their app with biometric authentication such as fingerprint or Face ID. This feature ensures that only authorized users can access the app, adding an additional layer of protection to private conversations.
How to Enable App Lock:
- Open WhatsApp and navigate to Settings.
- Select Privacy.
- Scroll down and tap App Lock.
- Activate Fingerprint Lock or Face ID and follow the on-screen instructions.

Chat Lock: Restrict Access to Private Conversations
WhatsApp allows users to lock specific chats, moving them to a secured folder that requires biometric authentication or a passcode for access. This feature is ideal for safeguarding sensitive conversations from unauthorized viewing.
How to Lock a Chat:
- Open WhatsApp and select the chat to be locked.
- Tap on the three dots (Android) or More Options (iPhone).
- Select Lock Chat.
- Enable the lock using Fingerprint or Face ID.

Privacy Checkup: Strengthening Security Preferences
The privacy checkup tool assists users in reviewing and customizing essential security settings. It provides guidance on adjusting visibility preferences, call security, and blocked contacts, ensuring a personalized and secure communication experience.
How to Run Privacy Checkup:
- Open WhatsApp and navigate to Settings.
- Tap Privacy.
- Select Privacy Checkup and follow the prompts to adjust settings.

Automatic Blocking of Unknown Accounts and Messages
To combat spam and potential security threats, WhatsApp automatically restricts unknown accounts that send excessive messages. Users can also manually block or report suspicious contacts to further enhance security.
How to Manage Blocking of Unknown Accounts:
- Open WhatsApp and go to Settings.
- Select Privacy.
- Tap Advanced.
- Enable Block unknown account messages.

IP Address Protection in Calls
To prevent tracking and enhance privacy, WhatsApp provides an option to hide IP addresses during calls. When enabled, calls are routed through WhatsApp’s servers, preventing location exposure via direct connections.
How to Enable IP Address Protection in Calls:
- Open WhatsApp and go to Settings.
- Select Privacy, then tap Advanced.
- Enable Protect IP Address in Calls.

Disappearing Messages: Auto-Deleting Conversations
Disappearing messages help maintain confidentiality by automatically deleting sent messages after a predefined period—24 hours, 7 days, or 90 days. This feature is particularly beneficial for reducing digital footprints.
How to Enable Disappearing Messages:
- Open the chat and tap the Chat Name.
- Select Disappearing Messages.
- Choose the preferred duration before messages disappear.

View Once: One-Time Access to Media Files
The ‘View Once’ feature ensures that shared photos and videos can only be viewed a single time before being automatically deleted, reducing the risk of unauthorized storage or redistribution.
How to Send View Once Media:
- Open a chat and tap the attachment icon.
- Choose Camera or Gallery to select media.
- Tap the ‘1’ icon before sending the media file.

Group Privacy Controls: Manage Who Can Add You
WhatsApp provides users with the ability to control group invitations, preventing unwanted additions by unknown individuals. Users can choose who is allowed to add them to groups: ‘Everyone,’ ‘My Contacts,’ or ‘My Contacts Except…’ for enhanced privacy.
How to Adjust Group Privacy Settings:
- Open WhatsApp and go to Settings.
- Select Privacy and tap Groups.
- Choose from the available options: Everyone, My Contacts, or My Contacts Except…

Conclusion
WhatsApp continuously enhances its security features to protect user privacy and ensure safe communication. With tools like App Lock, Chat Lock, Privacy Checkup, IP Address Protection, and Disappearing Messages, users can safeguard their data and interactions. Features like View Once and Group Privacy Controls further enhance confidentiality. By enabling these settings, users can maintain a secure and private messaging experience, effectively reducing risks associated with unauthorized access, tracking, and digital footprints. Stay updated and leverage these features for enhanced security.

Introduction
Though formerly confined to conflict zones, “GPS spoofing” has lately become a growing hazard for pilots and aircraft operators across the world, and several countries have been facing such issues. The definition of the term stems from the US Radio Technical Commission for Aeronautics, which delivers specialised advice for government regulatory authorities. The Global Positioning System (GPS) has become an essential part of aviation infrastructure, superseding the traditional radio beams used to guide planes towards landing. GPS spoofing occurs when a deceptive radio signal overrides a legitimate GPS satellite signal, so the receiver gets false location information. Although GPS signal interference of this character has existed for over a decade, this is the first time civilian passenger flights have faced such a significant danger. According to Agence France-Presse (AFP), false GPS signals mislead onboard aircraft systems and complicate the job of airline pilots flying near conflict areas. GPS spoofing may also be the by-product of military electronic warfare systems deployed in zones of regional tension. It can further cause significant upheaval in commercial aviation, affecting arrivals, departures and, above all, passenger safety.
Spoofing might likewise involve one country’s military sending false GPS signals to an enemy plane or drone to impede its ability to operate, with collateral impact on airliners operating nearby. Such collateral damage to commercial aircraft can occur as confrontations escalate and militaries send faulty GPS signals to thwart drones and other aircraft. It could therefore precipitate a global crisis, with a civilian aircraft lost in an already high-risk zone close to an active battle area. Furthermore, GPS jamming is different from GPS spoofing: jamming merely blocks or obstructs GPS signals, whereas spoofing replaces them with false ones, making it far more threatening.
Global Reporting
An International Civil Aviation Organization (ICAO) assessment released in 2019 indicated that there had been 65 spoofing incidents across the Middle East in the preceding two years, according to the C4ADS report. At the beginning of 2018, Eurocontrol received more than 800 reports of Global Navigation Satellite System (GNSS) interference in Europe. GPS spoofing in Eastern Europe and the Middle East has resulted in divergences of up to 80 nautical miles (NM) from the flight route, and affected aircraft have had to depend on radar vectors from Air Traffic Control (ATC). According to Forbes, the flight data intelligence website OPSGROUP, comprising 8,000 members including pilots and controllers, has been reporting spoofing incidents since September 2023. Similarly, over 20 airliners and corporate jets flying over Iran diverted from their planned paths after being directed off course by misleading GPS signals transmitted from the ground, which overrode the aircraft’s navigation systems.
In this context, malicious hackers, though still at large, have lately worked out how to override an aircraft’s critical Inertial Reference System (IRS), an essential piece of technology described by manufacturers as the “brains” of an aircraft; the current IRS is not prepared to counter this kind of attack. The IRS uses accelerometers, gyroscopes and electronics to deliver accurate attitude, speed and navigation data, so that a plane can determine how it is moving through the airspace. GPS spoofing incidents can render the IRS ineffective, and in numerous cases all navigation capability is lost.
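One defence concept in this space is to continuously cross-check the GPS fix against an independently dead-reckoned position derived from inertial data, and alert the crew when the two diverge. The sketch below is a deliberately simplified, flat-earth toy of that idea; the function names, the simulated values, and the 2 NM threshold are illustrative assumptions, not real avionics logic.

```python
import math

# Toy illustration: flag possible GPS spoofing by comparing the GPS-reported
# position against a dead-reckoned estimate from inertial (IRS-style) data.
NM_PER_DEG = 60.0  # rough nautical miles per degree of latitude

def dead_reckon(lat, lon, ground_speed_kts, track_deg, dt_hours):
    """Advance a position estimate using speed and track (flat-earth approx.)."""
    dist_nm = ground_speed_kts * dt_hours
    d_lat = dist_nm * math.cos(math.radians(track_deg)) / NM_PER_DEG
    d_lon = dist_nm * math.sin(math.radians(track_deg)) / (
        NM_PER_DEG * math.cos(math.radians(lat))
    )
    return lat + d_lat, lon + d_lon

def divergence_nm(lat1, lon1, lat2, lon2):
    """Approximate separation in nautical miles between two nearby points."""
    d_lat = (lat2 - lat1) * NM_PER_DEG
    d_lon = (lon2 - lon1) * NM_PER_DEG * math.cos(math.radians(lat1))
    return math.hypot(d_lat, d_lon)

# One update cycle: inertial estimate vs. a (simulated) suspicious GPS fix
est_lat, est_lon = dead_reckon(lat=26.0, lon=56.0,
                               ground_speed_kts=450, track_deg=90, dt_hours=0.1)
gps_lat, gps_lon = 26.0, 57.9  # far-off GPS fix, simulated for illustration

if divergence_nm(est_lat, est_lon, gps_lat, gps_lon) > 2.0:
    print("ALERT: GPS fix diverges from inertial estimate - possible spoofing")
```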
Red Flag from Agencies
The European Union Aviation Safety Agency (EASA) and the International Air Transport Association (IATA) jointly hosted a workshop on incidents of spoofed and obstructed satellite navigation systems and concluded that these pose a considerable safety challenge. IATA and EASA have further taken measures to share information about GPS tampering so that crews and pilots can determine when it is occurring. EASA had already cautioned about an upsurge in reports of GPS spoofing and jamming in the Baltic Sea area, around the Black Sea, and in regions near Russia and Finland in 2022 and 2023. According to industry officials, fitting the latest technologies to civil aircraft can take several years, and with GPS spoofing incidents increasing, there is no time to dawdle. Experts have noted critical navigation failures on airplanes, with several recent reports of alarming cyber attacks that altered planes' in-flight GPS. In their view, GPS spoofing could affect commercial airlines and cause further disarray: pilots could be diverted from the flight route into a no-fly or otherwise unauthorised zone, putting the aircraft at risk.
OpsGroup, a global group of pilots and technicians, first brought awareness to the issue, and the Federal Aviation Administration (FAA) subsequently issued a warning on the flight-safety risk to civil aviation operations posed by the spate of attacks. In addition, India’s civil aviation regulator, the Directorate General of Civil Aviation (DGCA), issued an advisory circular on spoofing threats to planes' GPS signals when flying over parts of the Middle East. The DGCA advisory further notes that the aviation industry is grappling with uncertainty over these contemporary dangers and reports of GNSS jamming and spoofing.
Conclusion
As the aviation industry continues to grapple with GPS spoofing, it remains largely unprepared to combat it, and should therefore look to identify attainable technologies for preventing it. As international conflicts grow more convoluted, technological solutions can be pricey, intricate and not always efficacious, depending on the type of spoofing used.
As GPS interference attacks become more complex, specialised countermeasures must be continually updated. Measures that can help avert GPS spoofing include improving education and training (to raise awareness among pilots, air traffic controllers and other aviation experts), receiver technology (creating and deploying more state-of-the-art GPS receiver technology), monitoring and reporting (installing robust monitoring systems), cooperation (collaboration among stakeholders such as government bodies and aviation organisations), data and information sharing, and regulatory measures (regulations and guidelines from regulatory and government bodies).
References
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/false-gps-signal-surge-makes-life-hard-for-pilots/articleshow/108363076.cms?from=mdr
- https://nypost.com/2023/11/20/lifestyle/hackers-are-taking-over-planes-gps-experts-are-lost-on-how-to-fix-it/
- https://www.timesnownews.com/india/planes-losing-gps-signal-over-middle-east-dgca-flags-spoofing-threat-article-105475388
- https://www.firstpost.com/world/gps-spoofing-deceptive-gps-lead-over-20-planes-astray-in-iran-13190902.html
- https://www.forbes.com/sites/erictegler/2024/01/31/gps-spoofing-is-now-affecting-airplanes-in-parts-of-europe/?sh=48fbe725c550
- https://www.insurancejournal.com/news/international/2024/01/30/758635.htm
- https://airwaysmag.com/gps-spoofing-commercial-aviation/
- https://www.wsj.com/articles/aviation-industry-to-tackle-gps-security-concerns-c11a917f
- https://www.deccanherald.com/world/explained-what-is-gps-spoofing-that-has-misguided-around-20-planes-near-iran-iraq-border-and-how-dangerous-is-this-2708342