#FactCheck - AI-Generated Video Falsely Claims Salman Khan Is Joining AIMIM
A video of Bollywood actor Salman Khan is being widely circulated on social media, in which he can allegedly be heard saying that he will soon join Asaduddin Owaisi’s party, the All India Majlis-e-Ittehadul Muslimeen (AIMIM). Along with the video, a purported image of Salman Khan with Asaduddin Owaisi is also being shared. Social media users are claiming that Salman Khan is set to join the AIMIM party.
CyberPeace research found the viral claim to be false. Our research revealed that Salman Khan has not made any such statement, and that both the viral video and the accompanying image are AI-generated.
Claim
Social media users claim that Salman Khan has announced his decision to join AIMIM. On 19 January 2026, a Facebook user shared the viral video with the caption, “What did Salman say about Owaisi?” In the video, Salman Khan can allegedly be heard saying that he is going to join Owaisi’s party. (The link to the post, its archived version, and screenshots are available.)

Fact Check
To verify the claim, we first searched Google using relevant keywords. However, no credible or reliable media reports were found supporting the claim that Salman Khan is joining AIMIM.

In the next step of verification, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. This led us to a video posted on Salman Khan’s official Instagram account on 21 April 2023. In the original video, Salman Khan talks about an event scheduled to take place in Dubai. A careful review of the full video confirmed that no statement related to AIMIM or Asaduddin Owaisi was made.
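For readers who want to replicate the key-frame extraction step before running a reverse image search, here is a minimal Python sketch using OpenCV. The file name and the two-second sampling interval are illustrative assumptions, not details of our actual workflow.

```python
# Minimal sketch: extract frames from a video at a fixed interval so they can be
# reverse-image-searched (e.g., with Google Lens). Assumes OpenCV is installed
# (pip install opencv-python); "viral_clip.mp4" is a placeholder file name.
import cv2

def extract_key_frames(video_path: str, every_n_seconds: float = 2.0, out_prefix: str = "frame") -> int:
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise FileNotFoundError(f"Could not open {video_path}")
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0           # fall back to 25 fps if metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))  # sample one frame every N seconds
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_key_frames("viral_clip.mp4"), "frames saved")
```

Each saved frame can then be uploaded to a reverse image search tool to trace the original footage.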

Further analysis of the viral clip revealed that Salman Khan’s voice sounds unnatural and robotic. To verify this, we scanned the video using AURGIN AI, an AI-generated content detection tool. According to the tool’s analysis, the viral video was generated using artificial intelligence.

Conclusion
Salman Khan has not announced that he is joining the AIMIM party. The viral video and the image circulating on social media are AI-generated and manipulated.
Related Blogs
Introduction
MeitY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global leader in forensics-driven cybersecurity, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. The initiative marks the first ANAB-accredited AI security certification of its kind. The CSPAI also complements global AI governance efforts: international efforts such as the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies to ensure fairness, transparency, and accountability in AI systems, serve as the sounding board for this initiative.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program focused on cybersecurity for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies. Such public-private partnerships bridge the gap between government regulatory needs and private-sector technological expertise, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures to meet the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models in future AI deployments and highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI becomes an intrinsic part of business operations, the need for AI security expertise across industries is growing. With this in focus, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. The CSPAI program aims to build a safer digital future while creating an environment that fosters innovation and responsibility in the evolving cybersecurity landscape, with a focus on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This public-private partnership between CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI), represents a groundbreaking initiative towards AI and its responsible usage. CSPAI addresses the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. By aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally, CSPAI aims to promote trustworthy and ethical AI usage. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. The certification will be especially significant in the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications in these sectors meet security and ethical standards, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms

Introduction
In the digital age, net neutrality has become increasingly crucial for preserving the equity and openness of the internet. Net neutrality requires that all internet traffic be treated equally, without discrimination or preferential treatment. This principle allows users to freely access and distribute content, which promotes innovation, competition, and the democratisation of knowledge. India has seen controversy over net neutrality, which has led to a legal battle to protect an open internet. In this blog post, we’ll look at the legal challenges and the efforts made to safeguard net neutrality in India.
Background on Net Neutrality in India
Net neutrality became a hot topic in India after a major telecom service provider suggested charging different fees for accessing different parts of the internet. Internet users, activists, and organisations in favour of an open internet raised concerns over this. A consultation paper published by the Telecom Regulatory Authority of India (TRAI) in 2015 drew millions of comments, highlighting the significance of net neutrality for the country’s internet users.
Legal Battle and Regulatory Interventions
The battle for net neutrality in India gained prominence when TRAI released the “Prohibition of Discriminatory Tariffs for Data Services Regulations” in 2016. These regulations, often known as the “Free Basics” prohibition, were created to put an end to zero-rating platforms, which exempt specific websites or services from data charges. The regulations ensured that all data on the internet would be handled uniformly, regardless of where it originated.
But the legal conflict did not end there. The telecom industry challenged TRAI’s regulations, resulting in a flurry of cases in courts around the country. At the heart of the dispute were the Telecom Regulatory Authority of India Act and the provisions that govern TRAI’s ability to regulate internet services.
The Indian judicial system played a significant role in protecting net neutrality. The importance of non-discriminatory internet access was highlighted in 2018, when the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) upheld the TRAI regulations and ruled in favour of net neutrality, creating a crucial precedent for net neutrality in India. In 2019, after several rounds of litigation, the Supreme Court of India backed the principles of net neutrality, declaring it a fundamental idea that must be protected. The ruling by the top court bolstered the nation’s legislative framework for preserving a free and open internet.
Ongoing Challenges and the Way Forward
Even though India has made great strides towards upholding net neutrality, challenges persist. Because of the rapid advancement of technology and the emergence of new services and platforms, net neutrality must be continually safeguarded. Some practices, such as “zero-rating” schemes and service-specific data plans, continue to raise questions about potential violations of net neutrality principles. Regulatory efforts must be proactive and vigilant to allay these concerns. TRAI, as the regulator, is responsible for monitoring and responding to breaches of net neutrality principles. It is crucial to strike a balance between promoting innovation and competition and maintaining a free and open internet.
Additionally, public awareness and education on the issue are crucial for the continuation of net neutrality. Informing users of their rights and encouraging their involvement in the conversation ensures a more inclusive and democratic decision-making process. Civil society organisations and advocacy groups can play a key role in educating the public about net neutrality and building support for it.
Conclusion
The legal battle for net neutrality in India has been a significant turning point in the campaign to preserve an open and neutral internet. Legislative initiatives and judicial decisions have established a robust framework for net neutrality in the country. However, due to ongoing challenges and the dynamic nature of technology, maintaining net neutrality calls for vigilant oversight and strong action. An open and impartial internet is crucial for fostering innovation, promoting free speech, and providing equal access to information. India’s attempts to uphold net neutrality should motivate other nations dealing with similar issues. All parties, including policymakers, must work together to protect the principles of net neutrality and ensure that the internet remains accessible to everyone.

Introduction
A Reuters investigation has exposed serious gaps in Meta Platforms' internal measures to address online fraud and illicit advertising. Confidential documents reviewed by Reuters disclosed that Meta projected approximately 10% of its 2024 revenue, i.e., USD 16 billion, would come from ads related to scams and prohibited goods. The findings point to a disturbing paradox: on the one hand, Meta is a vocal advocate for digital safety and platform integrity; on the other, the company's internal records indicate a broad tolerance of fraudulent advertising that exploits users throughout the world.
The Scale of the Problem
Internal Meta projections show that its platforms, Facebook, Instagram, and WhatsApp, collectively display a staggering 15 billion scam ads per day. The advertisements include deceitful e-commerce promotions, fake investment schemes, counterfeit medical products, and unlicensed gambling platforms.
Meta has developed sophisticated detection tools, yet the system does not ban an advertiser unless it is at least 95% certain that the advertiser is a fraudster. Keeping the removal threshold that high means the company is unlikely to lose much money. As a result, instead of turning fraud-adjacent advertisers away, it charges them higher ad rates, a strategy it internally calls “penalty bids.”
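To make the reported enforcement logic concrete, here is a minimal Python sketch of a threshold-based decision rule of the kind described above. The 95% cutoff comes from the reporting; the function name, the penalty multiplier, and the "fraud-adjacent" band are purely illustrative assumptions, not Meta's actual system.

```python
# Illustrative sketch (not Meta's actual code): an advertiser is removed only when
# the fraud-confidence score clears a high threshold; below it, the reported
# "penalty bids" approach charges the suspect advertiser more instead of banning it.
REMOVAL_THRESHOLD = 0.95      # reported confidence bar for removal
PENALTY_MULTIPLIER = 2.0      # assumed value for illustration only

def enforcement_decision(fraud_score: float, base_bid: float) -> dict:
    """Return the action taken for an advertiser with a given fraud-confidence score."""
    if fraud_score >= REMOVAL_THRESHOLD:
        return {"action": "remove", "charged_bid": 0.0}
    if fraud_score > 0.5:  # assumed band for "fraud-adjacent" advertisers
        return {"action": "penalty_bid", "charged_bid": base_bid * PENALTY_MULTIPLIER}
    return {"action": "serve", "charged_bid": base_bid}

print(enforcement_decision(0.97, 1.00))  # removed: score clears the 95% bar
print(enforcement_decision(0.80, 1.00))  # kept, but charged a higher "penalty bid"
```

The sketch shows why such a rule is lucrative: every advertiser below the bar keeps spending, and the suspicious ones spend more.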
Internal Acknowledgements & Business Dependence
Internal documents dated between 2021 and 2025 reveal that Meta's finance, safety, and lobbying divisions were aware of the scale of revenue generated from scams. One 2025 strategy paper even describes this revenue source as "violating revenue," meaning it comes from ads that breach Meta's policies on scams, gambling, sexual services, and misleading healthcare products.
The company's top executives weighed the cost-benefit of stricter enforcement. According to a 2024 internal projection, Meta's half-yearly earnings from high-risk scam ads were estimated at USD 3.5 billion, whereas regulatory fines for such violations would not exceed USD 1 billion, making it a tolerable trade-off from a commercial viewpoint. At the same time, the company intends to scale down scam ad revenue gradually, from 10.1% in 2024 to 7.3% by 2025 and 6% by 2026; however, the documents also reveal a planned slowdown in enforcement to avoid "abrupt reductions" that could affect business forecasts.
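The scale of that trade-off can be sanity-checked with a few lines of arithmetic based on the figures above; the annualisation of the half-yearly estimate is our own extrapolation, not a figure from the documents.

```python
# Back-of-the-envelope arithmetic using the reported figures.
scam_ad_revenue_2024 = 16e9        # ~USD 16 billion, reported as ~10% of 2024 revenue
implied_total_revenue = scam_ad_revenue_2024 / 0.101    # using the 10.1% share cited above
high_risk_half_year = 3.5e9        # reported half-yearly earnings from high-risk scam ads
high_risk_annualised = high_risk_half_year * 2          # simple extrapolation (assumption)
max_expected_fines = 1e9           # reported ceiling on regulatory fines

print(f"Implied total 2024 revenue: ~${implied_total_revenue / 1e9:.0f}B")
print(f"High-risk scam ads, annualised: ~${high_risk_annualised / 1e9:.1f}B "
      f"vs fines capped at ~${max_expected_fines / 1e9:.0f}B")
```

On these numbers, the fines amount to a fraction of the revenue at stake, which is what makes the trade-off "tolerable" commercially.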
Algorithmic Amplification of Scams
One of the most alarming findings is that Meta's own advertising algorithms amplify scam content. Users who click on fraudulent ads are reportedly shown more similar ads, as the platform's personalisation engine interprets the click as user "interest."
This creates a self-reinforcing feedback loop in which user engagement with scam content dictates how much of that content is displayed. The result is a digital environment that rewards deceptive engagement, eroding user trust and amplifying systemic risk.
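The feedback loop can be illustrated with a toy model: each click on a scam ad nudges a hypothetical personalisation weight toward showing more scam ads, so exposure compounds over time. Every parameter below is invented for illustration; this is not Meta's recommender.

```python
# Toy model of the click-driven amplification loop described above (illustrative
# only). Each round, the expected fraction of scam ads grows because clicks on
# scam ads feed back into the personalisation weights.

def expected_scam_share(rounds: int = 10,
                        scam_share: float = 0.05,   # initial fraction of ads that are scams (assumed)
                        click_rate: float = 0.3,    # chance a shown scam ad gets clicked (assumed)
                        boost: float = 0.15) -> list[float]:
    """Return the expected scam-ad share after each round of the feedback loop."""
    history = []
    for _ in range(rounds):
        # Expected increase per round = P(scam shown) * P(clicked) * personalisation boost.
        scam_share = min(1.0, scam_share * (1 + click_rate * boost))
        history.append(round(scam_share, 4))
    return history

print(expected_scam_share())
# The share rises monotonically: engagement with scam content begets more scam content.
```

Because the loop only ever adds weight after engagement, exposure grows exponentially until something external, such as enforcement, breaks the cycle.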
An internal presentation from May 2025 reportedly quantified how deeply the platform's ad ecosystem was intertwined with the global fraud economy, estimating that one-third of successful scams in the U.S. involved advertising on Meta's platforms.
Regulatory & Legal Implications
The disclosures come as regulators in the US and UK scrutinise the company's activities more closely than ever before.
- The U.S. Securities and Exchange Commission (SEC) is said to be looking into whether Meta has had any part in the promotion of fraudulent financial ads.
- The UK’s Financial Conduct Authority (FCA) found that Meta’s platforms were the main source of online payment scams and were linked to more losses in 2023 than all other social platforms combined.
Meta’s spokesperson, Andy Stone, initially disputed the accusations, stating that the figures cited in the leak were “rough and overly-inclusive”; nevertheless, he conceded that the company’s ongoing enforcement efforts had negatively impacted revenue and would continue to do so.
Operational Challenges & Policy Gaps
The internal documents also reveal the weaknesses in Meta's day-to-day operations when it comes to the implementation of its own policies.
- Following large-scale layoffs in 2023, the entire team that handled advertiser-brand impersonation was reportedly dissolved.
- Scam ads were categorised as a "low severity" issue, treated more as a "bad user experience" than a critical security risk.
- At the end of 2023, users were submitting around 100,000 legitimate scam reports per week, of which Meta dismissed or rejected 96%.
Human Impact: When Fraud Becomes Personal
The financial and ethical issues have tangible human consequences. The Reuters investigation documented multiple cases of individuals defrauded through hijacked Meta accounts.
One striking example involves a Canadian Air Force recruiter, whose hacked Facebook account was used to promote fake cryptocurrency schemes. Despite over a hundred user reports, Meta failed to act for weeks, during which several victims, including military colleagues, lost tens of thousands of dollars.
The case underscores not just platform negligence, but also the difficulty of law enforcement collaboration. Canadian authorities confirmed that funds traced to Nigerian accounts could not be recovered due to jurisdictional barriers, a recurring issue in transnational cyber fraud.
Ethical and Cybersecurity Implications
The investigation raises critical questions from a cyber policy perspective:
- Platform Accountability: By prioritising revenue over integrity, Meta's practices run counter to the principles of responsible digital governance.
- Transparency in Ad Ecosystems: The opacity of digital advertising systems allows dishonest actors to exploit automated processes with minimal oversight.
- Algorithmic Responsibility: When algorithms determine the visibility and targeting of misleading content, they become directly implicated in the fraud.
- Regulatory Harmonisation: Fragmented and disconnected enforcement frameworks across jurisdictions hamper efforts to tackle cross-border cybercrime.
- Public Trust: Users’ trust in the digital ecosystem depends largely on the safety they experience and the accountability that platforms demonstrate.
Conclusion
Meta’s internal records reveal a troubling mix of profit-seeking, lax enforcement, and policy failure around scam-related ads. The platform’s willingness to accept and even profit from fraudulent actors, while acknowledging the damage they cause, calls for an immediate global rethinking of advertising ethics, regulatory enforcement, and algorithmic transparency.
As Meta expands its AI-driven operations and advertising networks, protecting users must evolve from a public relations goal into a core business necessity, backed by verifiable accountability measures, independent audits, and regulatory oversight. Billions of users depend on Meta’s platforms, and their right to digital safety must be respected and enforced rather than treated as optional.
References
- https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/
- https://www.indiatoday.in/technology/news/story/leaked-docs-claim-meta-made-16-billion-from-scam-ads-even-after-deleting-134-million-of-them-2815183-2025-11-07