#FactCheck: An image shows Sunita Williams with Trump and Elon Musk post her space return.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally altered post designed to mislead viewers.
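For readers who want to try a similar check programmatically, the sketch below shows how an image file might be submitted to a hosted AI-content detector such as Hive Moderation. The endpoint, authentication header, and response fields shown here are assumptions made for illustration (the provider's documentation is authoritative), and any detection score should be treated as one signal alongside manual review, not conclusive proof on its own.

```python
# Illustrative sketch only: the endpoint, auth header, and response structure
# below are assumptions about how a hosted AI-content detector might be queried;
# consult the provider's documentation for the real API.
import requests

API_KEY = "YOUR_API_KEY"                               # hypothetical credential
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"   # assumed endpoint

def check_image(path: str) -> dict:
    """Submit an image file and return the raw classification response."""
    with open(path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},  # assumed auth scheme
            files={"media": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = check_image("viral_image.jpg")  # placeholder file name
    # The structure of `result` depends on the provider; typically it contains
    # per-class scores such as "ai_generated" vs "not_ai_generated".
    print(result)
```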

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital manipulation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation at scale. These technologies can create manipulated audio and video content, spread political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation includes expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of the technology often take a backseat, and content that already exists on the internet ends up being exploited. One of the clearest examples is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track, let alone determine what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly negative consequences here. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, or its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation relate to elections, public health, and communal issues. These issues share one common factor: they evoke strong emotions, so such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability, and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any society, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
As a technology, GAI has entered the realm of autonomous content production and language creation, which is directly linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and has highlighted the need to revisit traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate content anonymously, and the sheer volume of data produced, make it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds further layers to an already complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies therefore need to cater to this multilingual population.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that applies specifically to AI-generated content. It should include stricter penalties for those who originate and disseminate fake content, proportionate to the consequences, and establish clear, concise guidelines requiring social media platforms to take proactive measures to detect and remove AI-generated misinformation.
- Investing in AI-driven tools for customised detection and flagging of misinformation in real time. This can help identify deepfakes, manipulated images, and other forms of AI-generated content (a toy flagging sketch follows this list).
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
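As a loose illustration of the real-time flagging tools mentioned in the list above, the sketch below trains a toy text classifier on a handful of hand-labelled posts and routes high-scoring items to human reviewers. The example posts, labels, and threshold are invented for illustration; a production system would rely on far richer signals (image forensics, propagation patterns, multilingual models) and continuous retraining.

```python
# Toy sketch of AI-assisted flagging, not a production detector: it trains a
# simple text classifier on a few hand-labelled examples and flags new posts
# whose predicted probability of being misinformation crosses a threshold.
# The training posts, labels, and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (label 1 = previously fact-checked as false).
posts = [
    "Miracle cure eliminates virus overnight, doctors shocked",
    "Election commission announces revised polling dates on its website",
    "Leaked video proves votes were switched by machines",
    "Health ministry publishes updated vaccination schedule",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(text: str, threshold: float = 0.7) -> bool:
    """Return True if the post should be routed to a human fact-checker."""
    prob_false = model.predict_proba([text])[0][1]  # probability of label 1
    return prob_false >= threshold

print(flag_for_review("Shocking leaked video proves the election was rigged"))
```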
Conclusion
AI-generated misinformation presents a significant threat to India, and it is safe to say that the risks posed are at scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge of AI-generated misinformation and counteract it by developing comprehensive strategies that not only focus on regulation and technological innovation but also encourage public education. AI technologies are misused by bad actors to create hyper-realistic fake content including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defense frameworks and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
In the past few decades, technology has rapidly advanced, significantly impacting various aspects of life. Today, we live in a world shaped by technology, which continues to influence human progress and culture. While technology offers many benefits, it also presents certain challenges. It has increased dependence on machines, reduced physical activity, and encouraged more sedentary lifestyles. The excessive use of gadgets has contributed to social isolation. Different age groups experience the negative aspects of the digital world in distinct ways. For example, older adults often face difficulties with digital literacy and accessing information. This makes them more vulnerable to cyber fraud. A major concern is that many older individuals may not be familiar with identifying authentic versus fraudulent online transactions. The consequences of such cybercrimes go beyond financial loss. Victims may also experience emotional distress, reputational harm, and a loss of trust in digital platforms.
Why Senior Citizens Are A Vulnerable Target
Digital exploitation involves a variety of influencing tactics, such as coercion, undue influence, manipulation, and frequently some sort of deception, which makes senior citizens easy targets for scammers. Senior citizens have been largely neglected in research on this burgeoning type of digital crime. Many of our parents and grandparents grew up in an era when politeness and trust were very common, making it difficult for them to say “no” or recognise when someone was attempting to scam them. Seniors who struggle with financial stability may be more likely to fall for scams promising financial relief or security. They might encounter obstacles in learning to use new technologies, mainly due to unfamiliarity. It is important to note that these factors do not make seniors weak or incapable. Rather, it is the responsibility of the community to recognise and address the unique vulnerabilities of our senior population and work to prevent them from falling victim to scams.
Senior citizens are the most susceptible to social engineering attacks. Scammers may impersonate family members in distress or government officials to deceive seniors into sending money or sharing personal information. Some common scams include:
- The grandparent scam
- Tech support scam
- Government impersonation scams
- Romance scams
- Digital arrest
Protecting Senior Citizens from Digital Scams
As a society, we must focus on educating seniors about common cyber fraud techniques such as impersonation of family members or government officials, the use of fake emergencies, or offers that seem too good to be true. It is important to guide them on how to verify suspicious calls and emails, caution them against sharing personal information online, and use real-life examples to enhance their understanding.
Larger organisations and NGOs can play a key role in protecting senior citizens from digital scams by conducting fraud awareness training, engaging in one-on-one conversations, inviting seniors to share their experiences through podcasts, and organising seminars and workshops specifically for individuals aged 60 and above.
Safety Tips
In today's digital age, safeguarding oneself from cyber threats is crucial for people of all ages. Here are some essential steps everyone should take at a personal level to remain cyber secure:
- Ensuring that software and operating systems are regularly updated allows users to benefit from the latest security fixes, reducing their vulnerability to cyber threats.
- Avoiding the sharing of personal information online is also essential. Monitoring bank statements is equally important, as it helps in quickly identifying signs of potential cybercrime. Reviewing financial transactions and reporting any unusual activity to the bank can assist in detecting and preventing fraud.
- If suspicious activity is suspected, it is advisable to contact the company directly using a different phone line. This is because cybercriminals can sometimes keep the original line open, leading individuals to believe they are speaking with a legitimate representative. In such cases, attackers may impersonate trusted organisations to deceive users and gain sensitive information.
- If an individual becomes a victim of cybercrime, they should take immediate action to protect their personal information and seek professional guidance.
- Stay calm and respond swiftly and wisely. Begin by collecting and preserving all evidence—this includes screenshots, suspicious messages, emails, or any unusual activity. Report the incident immediately to the police or through an official platform like www.cybercrime.gov.in and the helpline number 1930.
- If financial information is compromised, the affected individual must alert their bank or financial institution without delay to secure their accounts. They should also update passwords and implement two-factor authentication as additional safeguards.
Conclusion: Collective Action for Cyber Dignity and Inclusion
Elder abuse in the digital age is an invisible crisis. It’s time we bring it into the spotlight and confront it with education, empathy, and collective action. Safeguarding senior citizens from cybercrime necessitates a comprehensive approach that combines education, vigilance, and technological safeguards. By fostering awareness and providing the necessary tools and support, we can empower senior citizens to navigate the digital world safely and confidently. Let us stand together to support these initiatives, to be the guardians our elders deserve, and to ensure that the digital world remains a place of opportunity, not exploitation.
References:
- https://portal.ct.gov/ag/consumer-issues/hot-scams/the-grandparents-scam
- https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/tech-support-scams
- https://consumer.ftc.gov/articles/how-avoid-government-impersonation-scam
- https://www.jpmorgan.com/insights/fraud/fraud-mitigation/helping-your-elderly-and-vulnerable-loved-ones-avoid-the-scammers
- https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/romance-scams

Executive Summary:
A post on X (formerly Twitter) features an image that has been widely shared with misleading captions, claiming to show men riding an elephant next to a tiger in Bihar, India. The post has sparked both fascination and skepticism on social media. However, our investigation reveals that the image is misleading: it is not a recent photograph but one from an incident in 2011. Always verify claims before sharing.

Claims:
An image purporting to depict men riding an elephant next to a tiger in Bihar has gone viral, implying that this astonishing event truly took place.

Fact Check:
An investigation of the viral image using reverse image search shows that it comes from an older video. The footage shows a tiger that was shot by forest guards after it turned man-eater, killing six people and causing panic in local villages in the Ramnagar division of Uttarakhand in January 2011.
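Reverse image search works by matching a suspect picture against earlier copies of it on the web. As a minimal sketch of the underlying idea, the snippet below uses perceptual hashing (via the Pillow and imagehash libraries) to measure how close a viral image is to an archived original; the file names are placeholders, and a small hash distance only suggests, rather than proves, that the images share a source.

```python
# Minimal sketch of comparing a viral image against an archived original using
# perceptual hashing. File paths are placeholders; a small Hamming distance
# between hashes suggests the images are near-duplicates (e.g. re-uploads or
# lightly edited copies), which is one signal used in reverse image search.
from PIL import Image
import imagehash

viral_hash = imagehash.phash(Image.open("viral_post.jpg"))       # hypothetical file
archive_hash = imagehash.phash(Image.open("archived_2011.jpg"))  # hypothetical file

distance = viral_hash - archive_hash  # Hamming distance between the two hashes
print(f"Hash distance: {distance}")
if distance <= 10:
    print("Likely the same underlying image (possible recirculated content).")
else:
    print("No near-duplicate match; further verification needed.")
```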

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that men rode an elephant alongside a tiger in Bihar is false. The photo presented as recent actually originates from the past and does not depict a current event. Social media users should exercise caution and verify sensational claims before sharing them.
- Claim: The video shows people casually interacting with a tiger in Bihar
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading