#FactCheck: An image shows Sunita Williams with Trump and Elon Musk post her space return.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally altered post designed to mislead viewers.

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital manipulation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Google’s search engine is widely known for its ability to tailor its search results based on user activity, enhancing the relevance of search outcomes. Recently, Google introduced the ‘Try Without Personalisation’ feature. This feature allows users to view results independent of their prior activity. This change marks a significant shift in platform experiences, offering users more control over their search experience while addressing privacy concerns.
However, even in this non-personalised mode, certain contextual factors, including location, language, and device type, continue to influence results, giving searches a baseline level of relevance. The feature carries significant policy implications, particularly in the areas of privacy, consumer rights, and market competition.
Understanding the Feature
When users engage the non-personalised search option, Google no longer shows them individual results that depend on personalisation and instead provides neutral search results. Essentially, this feature bypasses the user's activity data to deliver unbiased results.
This feature allows the following changes:
- Disables the user’s ability to find past searches in Autofill/Autocomplete.
- Does not pause or delete stored activity within a user’s Google account; users can still pause or delete stored activity through their data and privacy controls.
- Does not delete or disable app and website preferences such as language or search settings; these remain unaffected.
- It also does not disable or delete the material that users save.
- When a user is signed in, they can turn off personalisation by clicking the option at the bottom of the search results page. These changes in functionality have significant implications for privacy, competition, and user trust.
Policy Implications: An Analysis
This feature aligns with global privacy frameworks such as the GDPR in the EU and the DPDP Act in India. By adhering to principles like data minimisation and user consent, it offers users control over their data and the choice to enable or disable personalisation, thereby enhancing user autonomy and trust.
However, there is a trade-off between user expectations for relevance and the impartiality of non-personalised results. Additionally, the introduction of such features may align with emerging regulations on data usage, transparency, and consent. Policymakers play a crucial role in encouraging innovations like these while ensuring they safeguard user rights and maintain a competitive market.
Conclusion and Future Outlook
Google's 'Try Without Personalisation' feature represents a pivotal moment for innovation by balancing user privacy with search functionality. By aligning with global privacy frameworks such as the GDPR and the DPDP Act, it empowers users to control their data while navigating the complex interplay between relevance and neutrality. However, its success hinges on overcoming technical hurdles, fostering user understanding, and addressing competitive and regulatory scrutiny. As digital platforms increasingly prioritise transparency, such features could redefine user expectations and regulatory standards in the evolving tech ecosystem.
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice and produce realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). This technology works on an advanced algorithm that creates hyper-realistic videos by using a person’s face, voice or likeness utilising techniques such as machine learning. The utilisation and progression of deepfake technology holds vast potential, both benign and malicious.
For example, in 2019 the NGO Malaria No More used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita, 2023, whose Section 356 governs defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, aptly titled the ‘Digital India Act’, aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising the creation and distribution of harmful deepfakes would act as a deterrent.
- Mandating disclosure for synthetic media would inform viewers that the content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially through awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
Momentum to regulate deepfakes has been building globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the dangerous distribution of deepfake images in electoral contexts, and Virginia has enacted a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring the explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation by requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures reflect a global recognition of the risks that deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts highlight the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation will be essential to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation

Introduction
The digital expanse of the metaverse has recently come under scrutiny following a disturbing incident. In a digital realm crafted for connection and exploration, a 16-year-old girl’s avatar fell victim to an assault that has ignited ethico-legal and societal debate. The incident, in which the teenager was raped through her digital avatar by a group of users in the metaverse, is a stark reminder that the cyberverse, while offering endless possibilities and experiences, also presents glaring challenges that require serious consideration.
This incident has sparked a critical question: can virtual experiences inflict genuine psychological trauma? The case of the 16-year-old girl highlights the strong emotional repercussions of illicit virtual actions. While the physical body remains unharmed, a digital assault can leave lasting scars on the victim's psyche. This raises critical questions about the ethical implications of virtual interactions and the responsibility of service providers to protect users' well-being on their platforms.
The Judicial Quagmire
The digital nature of these assaults raises the complex jurisdictional questions that pervade cyber offences. We are still novices in navigating a digital labyrinth where avatars can transcend borders with the click of a mouse. The current legal structure is not equipped to tackle virtual crimes, calling for urgent reform. Policymakers and legal professionals must first define virtual offences, with clear jurisdictional boundaries ensuring that justice is not hampered by geographical restrictions.
Meta’s Accountability
Meta, the platform where this incident occurred, finds itself at the crossroads of an ethical dilemma. The company had implemented safeguards, yet they proved futile in preventing such a harrowing act. The incident raises broader questions about the role and responsibilities of tech juggernauts, chief among them: how can a company strike a balance between innovation and the protection of its users?
The Tightrope of Ethics
The metaverse is an epitome of innovation, yet this incident highlights a fundamental ethical tension: how to harness the power of virtual reality while addressing the risks of digital hostility. As society grapples with this conundrum, stakeholders must work in tandem to formulate robust and effective legal structures that protect the rights and well-being of users. Balancing technological development with ethical safeguards will require collective effort.
Reflections of Society
Beyond legal and ethical considerations, this act calls for wider societal reflection. It emphasises the pressing need for a cultural shift that fosters empathy, digital civility, and respect. As we tread deeper into the virtual realm, we must cultivate an ethos that upholds dignity in both the digital and real worlds. This shift is only possible through awareness campaigns, educational initiatives, and strong community engagement that foster a culture of respect and responsibility.
Safer and Ethical Way Forward
A multidimensional approach is essential to address the complicated challenges cyber violence poses. Several measures can pave the way for safer cyberspace for netizens.
- Legislative Reforms - There is an urgent need to revamp legislative frameworks to effectively address the complexities of new and emerging virtual offences. Tech companies must collaborate with governments to formulate best practices and develop standard security measures that prioritise user protection.
- Public Awareness and Engagement - Public awareness campaigns that educate users on issues such as cyber resilience, ethics, digital detox, and responsible online behaviour play a critical role in making netizens vigilant against cyber hostilities and able to help fellow netizens in distress. Civil society organisations and think tanks such as the CyberPeace Foundation are pioneers of cyber safety campaigns in the country, working in tandem with governments across the globe to curb cyber hostilities.
- Interdisciplinary Research - Policymakers should delve deeper into the ethical, psychological, and societal ramifications of digital interactions. A multidisciplinary research approach is crucial for formulating evidence-based policy.
Conclusion
This digital gang rape is a wake-up call demanding bold measures to confront the intricate legal, societal, and ethical pitfalls of the metaverse. As we navigate the digital labyrinth, our collective decisions will shape the metaverse's future. By nurturing a culture of empathy, responsibility, and innovation, we can forge a path that honours the dignity of netizens, upholds ethical principles, and fosters a vibrant and safe cyberverse. In this significant moment, ethical vigilance, diligence, and active collaboration are indispensable.
References:
- https://www.thehindu.com/sci-tech/technology/virtual-gang-rape-reported-in-the-metaverse-probe-underway/article67705164.ece
- https://thesouthfirst.com/news/teen-uk-girl-virtually-gang-raped-in-metaverse-are-indian-laws-equipped-to-handle-similar-cases/