#FactCheck - "Deepfake falsely claimed to be a photo of Arvind Kejriwal welcoming Elon Musk during a visit to India to discuss Delhi’s administrative policies"
Executive Summary:
A viral online image claims to show Arvind Kejriwal, Chief Minister of Delhi, welcoming Elon Musk during his visit to India to discuss Delhi’s administrative policies. However, the CyberPeace Research Team has confirmed that the image is a deepfake created using AI technology. The assertion that Elon Musk visited India to discuss Delhi’s administrative policies is false and misleading.


Claim
A viral image claims that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search using the InVID reverse image search tool. The search traced the image back to different unrelated sources featuring both Arvind Kejriwal and Elon Musk, but none of the sources depicted them together or involved any such event. The viral image displayed visible inconsistencies, such as lighting disparities and unnatural blending, which prompted further investigation.
Using advanced AI detection tools like TrueMedia.org and Hive AI Detection tool, we analyzed the image. The analysis confirmed with 97.5% confidence that the image was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the merging of facial features and the alignment of clothes and background, which were artificially generated.
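As a toy illustration of how reverse-image-search tools match near-duplicate images, the sketch below implements a minimal average hash (aHash) in plain Python: a lightly edited copy of an image hashes almost identically to the original, while an unrelated image does not. The tiny 4x4 "images" are invented for the example; production tools such as InVID rely on far more robust indexing and matching, and this is not how TrueMedia.org or Hive detect deepfakes.

```python
def average_hash(pixels):
    """Simple average hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny synthetic 4x4 grayscale "image" and a lightly edited copy.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
edited = [[12, 11, 198, 201],   # small pixel-level noise
          [10, 13, 199, 200],
          [11, 10, 202, 198],
          [10, 12, 200, 199]]
unrelated = [[200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200]]

d_edit = hamming(average_hash(original), average_hash(edited))
d_other = hamming(average_hash(original), average_hash(unrelated))
print(d_edit, d_other)  # near-duplicates hash close together; unrelated images do not
```

Because the hash depends only on each pixel's relation to the overall mean, small edits leave most bits unchanged, which is what lets a search engine find the original source of a manipulated image.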




Moreover, a review of official statements and credible reports revealed no record of Elon Musk visiting India to discuss Delhi’s administrative policies. Neither Arvind Kejriwal’s office nor Tesla nor SpaceX made any announcement regarding such an event, further debunking the viral claim.
Conclusion:
The viral image claiming that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies is a deepfake. Reverse image search and AI detection tools confirm that the image was manipulated using AI. Additionally, there is no supporting evidence from any credible sources. The CyberPeace Research Team confirms the claim is false and misleading.
- Claim: Arvind Kejriwal welcomed Elon Musk to India to discuss Delhi’s administrative policies, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
As technology develops, new scams emerge, and voice-cloning schemes are one such issue that has recently come to light. Scammers are adopting AI, and their methods for deceiving and defrauding people have changed accordingly. Deepfake technology can create realistic imitations of a person’s voice that are used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. This blog looks at the dangers and risks associated with AI voice-cloning fraud, how scammers operate, and how one might protect oneself.
What is Deepfake?
“Deepfake” refers to artificial intelligence (AI) techniques that produce fake or altered audio, video, and film that pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something they never did in audio or visual form; a widely cited example is deep voice impersonation of the American President. Deep voice impersonation technology can be used maliciously, such as in voice fraud or to disseminate false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfaked voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice scams are increasing in frequency as the underlying technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities in order to mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or organisations, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology is also employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, fabricated audio can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be aware of the risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people, giving rise to a new scam: the deepfake voice scam. The con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and avoid them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- Pay close attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right can be a sign of deep voice fraud.
- Verify the caller’s identity
- Verifying the caller’s identity is crucial to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information this way; if a caller does, it is a warning sign that they are a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you have fallen victim to deep voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim to it.
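Beyond these manual checks, automated detectors often examine statistical properties of the audio signal. As a rough, purely illustrative sketch (not a practical deepfake detector), the code below computes spectral flatness — the geometric mean over the arithmetic mean of the magnitude spectrum — using a naive DFT on synthetic signals: a pure tone concentrates energy in one frequency bin and scores near zero, while noise spreads energy evenly and scores closer to one. Real detection systems use far more sophisticated learned features.

```python
import cmath
import math
import random

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns the magnitude spectrum
    for the first half of the frequency bins."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectral_flatness(signal):
    """Geometric mean / arithmetic mean of the magnitude spectrum.
    Near 1 for noise-like signals, near 0 for tonal signals."""
    mags = [m + 1e-12 for m in dft_magnitudes(signal)]  # floor avoids log(0)
    geo = math.exp(sum(math.log(m) for m in mags) / len(mags))
    return geo / (sum(mags) / len(mags))

random.seed(0)
n = 64
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]  # tonal signal
noise = [random.uniform(-1, 1) for _ in range(n)]             # noise-like signal

print(spectral_flatness(tone) < spectral_flatness(noise))  # True
```

The point of the toy is only that measurable signal statistics can separate qualitatively different audio; distinguishing a cloned voice from a real one requires trained models, not a single hand-crafted feature.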
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used for harm, such as voice scams and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions to protect themselves as the technology develops and deepfake schemes become harder to detect and prevent. Ongoing research is also needed to develop efficient techniques for identifying and controlling the associated risks. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
In July 2025, Microsoft’s Digital Defence Report raised an alarm that India is among the top target countries for AI-powered nation-state cyberattacks, with malicious agents automating phishing, creating convincing deepfakes, and shaping opinion with the help of generative AI (Microsoft Digital Defence Report, 2025). Most global attention has remained on the United States and Europe, but the Asia-Pacific region, and especially India, has become a major target of AI-based cyber activities. This blog discusses the role of AI in espionage, how it is redefining India’s threat environment, the government’s response, and what India can learn from the example of cyber powers worldwide.
Understanding AI-Powered Cyber Espionage
Conventional cyber espionage aims to breach systems, steal information, or bring down networks. With the emergence of generative AI, these strategies have changed completely. It is now possible to automate reconnaissance, create fake voices and videos of authorities, and run highly advanced phishing campaigns that can pass as genuine even to a trained expert. According to Microsoft’s report, state-sponsored groups are using AI to expand their activities and target victims with greater precision (Microsoft Digital Defence Report, 2025). According to SQ Magazine, almost 42 percent of state-based cyber campaigns in 2025 involved AI components such as adaptive malware or intelligent vulnerability scanners (SQ Magazine, 2025).
AI is altering the power dynamics of cyberspace. Tools that previously required significant technical expertise or substantial investment have become ubiquitous, enabling smaller countries, as well as non-state actors, to conduct sophisticated cyber operations. The outcome is an accelerating arms race in which AI serves as both the weapon and the armour.
India’s Exposure and Response
India’s exposure stems from its rapidly growing online infrastructure and its geopolitical position. With the integration of platforms like DigiLocker and CoWIN, the attack surface now spans hundreds of millions of citizens. Financial institutions, government portals, and defence networks are increasingly targeted by ever more sophisticated cyberattacks. Faked videos of prominent figures, phishing letters using official templates, and manipulation of social media are now all part of disinformation campaigns (Microsoft Digital Defence Report, 2025).
The India Cyber Threat Report 2025 from the Data Security Council of India (DSCI) found that AI-enabled attacks are growing exponentially, particularly in the form of malware and social engineering (DSCI, 2025). India’s nodal cyber-response agency, CERT-In, has issued several warnings about AI-related scams and AI-generated fake content aimed at stealing personal information or deceiving the public. Meanwhile, enforcement and red-teaming efforts have intensified, but coordination between central agencies, state police, and private platforms remains uneven. India also faces an acute shortage of cybersecurity talent, with less than 20 percent of cyber defence jobs filled by qualified specialists (DSCI, 2025).
Government and Policy Evolution
The government’s response to AI-enabled threats is taking three forms: regulation, institutional strengthening, and capacity building. The Digital Personal Data Protection Act, 2023 marked a major step in defining digital responsibility (Government of India, 2023). Nonetheless, AI-specific threats such as data poisoning, model manipulation, and automated disinformation remain grey areas. The forthcoming National Cybersecurity Strategy is expected to address these gaps by establishing AI governance guidelines and accountability standards for major sectors.
At the institutional level, organisations such as the National Critical Information Infrastructure Protection Centre (NCIIPC) and the Defence Cyber Agency are incorporating AI-based monitoring into their processes. Public-private initiatives are also emerging: for example, the CyberPeace Foundation and national universities have signed a memorandum of understanding that now facilitates specialised training in AI-driven threat analysis and digital forensics (Times of India, August 2025). Despite these positive signs, India lacks a cohesive system for reporting AI incidents. A September 2025 publication on arXiv underlines that countries need legal frameworks for AI-failure reporting in order to handle AI-initiated failures in fields such as national security with accountability (arXiv, 2025).
Global Implications and Lessons for India
Major economies worldwide are moving rapidly to integrate AI innovation with cybersecurity preparedness. The United States and the United Kingdom are spending heavily on AI-enhanced military systems, deploying machine learning in security operations centres, and organising AI-based “red team” exercises (Microsoft Digital Defence Report, 2025). Japan is testing cross-ministry threat-sharing platforms that use AI analytics for real-time decision-making (Microsoft Digital Defence Report, 2025).
Four lessons stand out for India.
- First, cyber defence should shift from reactive investigation to proactive intelligence. With AI, adversary behaviour can be simulated in advance, not merely detected after an attack.
- Second, collaboration is essential. Cybersecurity cannot be left to government enforcement alone. The private sector, which maintains the majority of India’s digital infrastructure, must be actively involved in sharing information and expertise.
- Third, there is the issue of AI sovereignty. Building or hosting its own defensive AI tools will reduce India’s dependence on foreign vendors and minimise potential supply-chain vulnerabilities.
- Lastly, digital literacy is the first line of defence. Citizens should be trained to detect deepfakes, phishing, and other manipulated information. Building human awareness matters as much as technical defences (SQ Magazine, 2025).
Conclusion
AI has altered the logic of cyber warfare. Attacks are faster, harder to trace, and more scalable than ever before. For India, the task is no longer simply building better firewalls but developing the anticipatory intelligence to counter AI-powered threats. This requires a national approach that integrates technology, policy, and education.
With sustained investment, ethical AI governance, and healthy cooperation between government and business, India can transform this vulnerability into strength. The next step in cybersecurity is not about who possesses more firewalls but about who can learn and adapt more quickly and successfully in a world where machines already belong to the battlefield (Microsoft Digital Defence Report, 2025).
References:
- Microsoft Digital Defense Report 2025
- India Cyber Threat Report 2025, DSCI
- Lucknow-based organisations to help strengthen cybercrime research, training and policy ecosystem, Times of India
- AI Cyber Attacks Statistics 2025: How Attacks, Deepfakes & Ransomware Have Escalated, SQ Magazine
- Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India.
- The Digital Personal Data Protection Act, 2023

Introduction
The mysteries of the universe have been a subject of human curiosity for thousands of years. Astrophysicists work constantly to unravel them, and with today’s technology this is becoming increasingly achievable. Recently, with the help of Artificial Intelligence (AI), scientists have probed the depths of the cosmos: AI has revealed the secret equation that properly “weighs” galaxy clusters. This groundbreaking discovery not only sheds light on the formation and behavior of these clusters but also marks a turning point in the investigation of the cosmos. Scientists and AI have together uncovered an astounding 430,000 galaxies strewn throughout the universe. The large haul includes 30,000 ring galaxies, considered the most unusual of all galaxy forms. The discoveries are the first outcomes of the "GALAXY CRUISE" citizen science initiative, delivered by 10,000 volunteers who sifted through data from the Subaru Telescope. After training the AI on 20,000 human-classified galaxies, scientists set it loose on 700,000 galaxies from the Subaru data.
Brief Analysis
A group of astronomers from the National Astronomical Observatory of Japan (NAOJ) have successfully applied AI to ultra-wide field-of-view images captured by the Subaru Telescope. The researchers achieved a high accuracy rate in finding and classifying spiral galaxies, with the technique being used alongside citizen science for future discoveries.
Astronomers are increasingly using AI to analyse and clean raw astronomical images for scientific research. This involves feeding photos of galaxies into neural network algorithms, which can identify patterns in real data more quickly and with fewer errors than manual classification. These networks have numerous interconnected nodes and can recognise patterns, with algorithms now 98% accurate in categorising galaxies.
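To illustrate the pattern-recognition idea in miniature, the sketch below trains a single-neuron logistic classifier on synthetic two-feature "galaxy" data. The features (arm contrast, ellipticity) and their distributions are invented assumptions for this example; the actual survey pipelines use deep convolutional networks trained on real telescope images, not hand-picked features.

```python
import math
import random

random.seed(42)

# Synthetic training data: each galaxy is (arm_contrast, ellipticity).
# Feature names and class distributions are illustrative assumptions only.
spirals = [(random.gauss(0.8, 0.1), random.gauss(0.3, 0.1)) for _ in range(100)]
ellipticals = [(random.gauss(0.2, 0.1), random.gauss(0.7, 0.1)) for _ in range(100)]
data = [(x, 1) for x in spirals] + [(x, 0) for x in ellipticals]
random.shuffle(data)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1 / (1 + math.exp(-z))

# One-neuron "network": logistic regression trained by gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    for (x1, x2), label in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label           # gradient of the cross-entropy loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

correct = sum((sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (label == 1)
              for (x1, x2), label in data)
print(correct / len(data))  # well-separated classes classify almost perfectly
```

Real galaxy classifiers differ mainly in scale: many layers of learned features instead of two assumed ones, and millions of images instead of two hundred synthetic points, but the training loop follows the same gradient-descent principle.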
Another application of AI is exploring the nature of the universe, particularly dark matter and dark energy, which together make up over 95% of the energy content of the universe. The quantity and evolution of these components have significant implications for everything from the arrangement of galaxies to the expansion of the universe.
AI is capable of analysing massive amounts of data, and the training data for dark matter and dark energy studies comes from complex computer simulations. The neural network is fed these results to learn the changing parameters of the universe, after which cosmologists can apply the network to actual observational data.
These methods are becoming increasingly important as astronomical observatories generate enormous amounts of data. The Vera C. Rubin Observatory, for instance, will produce high-resolution images of the sky from over 60 petabytes of raw data, and AI-assisted computers are being utilized for this processing.
Data annotation techniques for training neural networks include simple tagging and more advanced types like image classification, which classify an image to understand it as a whole. More advanced data annotation methods, such as semantic segmentation, involve grouping an image into clusters and giving each cluster a label.
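The cluster-then-label idea behind segmentation can be sketched with a tiny one-dimensional k-means over pixel intensities: pixels are grouped into clusters, and each cluster is then given a label. The 4x4 "image" and the background/object label names are assumptions for illustration; real semantic segmentation models operate on full 2-D images with learned features rather than raw brightness.

```python
def kmeans_1d(values, iters=20):
    """Minimal two-cluster 1-D k-means: returns the final centroids
    and a cluster index for every value."""
    centroids = [min(values), max(values)]  # simple two-cluster initialisation
    assign = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        assign = [0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
                  for v in values]
        # Update step: each centroid moves to the mean of its members.
        for c in (0, 1):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

# Flattened 4x4 grayscale "image": dark sky background vs bright object pixels.
pixels = [12, 8, 15, 10,
          9, 200, 210, 11,
          14, 205, 198, 13,
          10, 9, 12, 11]

centroids, labels = kmeans_1d(pixels)
names = ["background", "object"]  # assumed semantic label per cluster
segmented = [names[a] for a in labels]
print(segmented[5], segmented[0])  # → object background
```

Annotation for training data works the same way in reverse: humans assign the per-cluster (or per-pixel) labels, and the model learns to reproduce that grouping on unseen images.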
In this way, AI is becoming a crucial tool for space exploration, enabling the processing and analysis of vast amounts of data and fostering our understanding of the universe. However, clear policy guidelines and the ethical use of technology should be prioritized while harnessing the true potential of contemporary technology.
Policy Recommendation
- Real-Time Data Sharing and Collaboration - Effective policies and frameworks should be established to promote real-time data sharing among astronomers, AI developers and research institutes. Open access to astronomical data should be encouraged to facilitate better innovation and bolster the application of AI in space exploration.
- Ethical AI Use - Proper guidelines and a well-structured ethical framework can facilitate judicious AI use in space exploration. The framework can play a critical role in addressing AI issues pertaining to data privacy, AI Algorithm bias and transparent decision-making processes involving AI-based tech.
- Investing in Research and Development (R&D) in the AI sector - Governments and corporate giants should prioritise investment in AI R&D for space technology and exploration, such as funding initiatives focused on developing AI algorithms for processing astronomical data, optimising telescope operations and detecting celestial bodies.
- Citizen Science and Public Engagement - Promoting citizen science initiatives can better leverage AI tools to involve the public in astronomical research. A prominent example is the SETI@home program (Search for Extraterrestrial Intelligence); such outreach educates and engages citizens in AI-enabled discovery programs such as the identification of exoplanets, the classification of galaxies and the search for life beyond Earth through detecting anomalies in radio waves.
- Education and Training - Training programs should be implemented to educate astronomers in AI techniques and the intricacies of data science. There is a need to foster collaboration between AI experts, data scientists and astronomers to harness the full potential of AI in space exploration.
- Bolster Computing Infrastructure - Authorities should ensure that proper computing infrastructure is in place to facilitate the application of AI in astronomy. This calls for greater investment in high-performance computing devices and systems to process large amounts of data and run AI models for analyzing astronomical data.
Conclusion
AI has seen expansive growth in the field of space exploration. As seen, its multifaceted use cases include discovering new galaxies and classifying celestial objects by analyzing the changing parameters of outer space. Nevertheless, to fully harness its potential, robust policy and regulatory initiatives are required to bolster real-time data sharing, not just within the scientific community but also between nations. Other policy considerations include investment in research, promotion of citizen science initiatives, and education and funding for astronomers. A critical aspect is improving key computing infrastructure, which is crucial for processing the vast amounts of data generated by astronomical observatories.
References
- https://mindy-support.com/news-post/astronomers-are-using-ai-to-make-discoveries/
- https://www.space.com/citizen-scientists-artificial-intelligence-galaxy-discovery
- https://www.sciencedaily.com/releases/2024/03/240325114118.htm
- https://phys.org/news/2023-03-artificial-intelligence-secret-equation-galaxy.html
- https://www.space.com/astronomy-research-ai-future