#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project"
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
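The keyframe approach described above can be reproduced with off-the-shelf tools. The sketch below is a minimal illustration in Python using OpenCV: it samples frames from a video at a fixed interval and saves them as images that can then be uploaded manually to a reverse-image search such as Google Lens. The file paths and sampling interval are illustrative assumptions, not details of the original investigation.

```python
# Minimal sketch: extract periodic keyframes from a video for reverse-image search.
# Assumes OpenCV (pip install opencv-python); paths and interval are illustrative.
import cv2

def extract_keyframes(video_path: str, out_prefix: str, every_n_seconds: float = 2.0) -> list[str]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))    # frames to skip between samples
    saved, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            path = f"{out_prefix}_{frame_idx:06d}.jpg"
            cv2.imwrite(path, frame)             # save frame for manual upload to a reverse-image search
            saved.append(path)
        frame_idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    for p in extract_keyframes("viral_video.mp4", "keyframe"):
        print("Saved", p)
```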

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such endorsement. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A video viral on social media shows Justin Trudeau promoting an investment project.
- Claimed on: Facebook
- Fact Check: False & Misleading

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit images and videos, and there has been an alarming increase in the use of Artificial Intelligence for sextortion.
What Are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers and the extortion demands made of them are a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion because of their heavy use of social media platforms for sharing personal information and images. Perpetrators exploit this readily available content to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
Securing Digital Banking: RBI Mandates Migration to [.]bank[.]in Domains
Introduction
The Reserve Bank of India (RBI) has mandated banks to migrate their digital banking domains to '.bank.in' by October 31, 2025, as part of a strategy to modernise the sector and maintain consumer confidence. The move is expected to provide a consistent and secure interface for online banking in response to the increasing threats posed by cybercriminals who exploit vulnerabilities in online platforms. The RBI's directive is seen as a proactive measure to address growing concerns over cybersecurity in the banking sector.
RBI Circular - Migration to '.bank.in' domain
The official circular released by the RBI dated April 22, 2025, read as follows:
“It has now been decided to operationalise the ‘.bank.in’ domain for banks through the Institute for Development and Research in Banking Technology (IDRBT), which has been authorised by National Internet Exchange of India (NIXI), under the aegis of the Ministry of Electronics and Information Technology (MeitY), to serve as the exclusive registrar for this domain. Banks may contact IDRBT at sahyog@idrbt.ac.in to initiate the registration process. IDRBT shall guide the banks on various aspects related to application process and migration to new domain.”
“All banks are advised to commence the migration of their existing domains to the ‘.bank.in’ domain and complete the process at the earliest and in any case, not later than October 31, 2025.”
CyberPeace Outlook
The Reserve Bank of India's directive mandating banks to shift to the '.bank.in' domain by October 31, 2025, represents a strategic and forward-looking measure to modernise the nation’s digital banking infrastructure. With this initiative, the RBI is setting a new benchmark in cybersecurity by creating a trusted, exclusive domain that banks must adopt. This move will drastically reduce cyber threats, phishing attacks, and fake banking websites, which have been major sources of financial fraud. A single, fixed domain will simplify verification, making it easier for consumers and technology platforms to identify legitimate banking websites and apps. Furthermore, the order is expected to have a long-term effect in the form of a strong drop in online financial fraud: since phishing and domain spoofing are two of the most prevalent forms of cybercrime, a shift to a strictly regulated domain name system will remove the potential for lookalike URLs and fraudulent websites that mimic banks. As India’s digital economy grows, the RBI’s move is timely, essential, and future-ready.

Introduction
In the evolving landscape of cybercrime, attackers are not only becoming more sophisticated in their approach but also more adept in their infrastructure. The Indian Cybercrime Coordination Centre (I4C) has issued a warning about the use of ‘disposable domains’ by cybercriminals. These are short-lived websites designed to mimic legitimate platforms, deceive users, and then disappear quickly to avoid detection and legal repercussions.
Although they may appear harmless at first glance, disposable domains form the backbone of countless online scams, phishing campaigns, malware distribution schemes, and disinformation networks. Cybercriminals use them to host fake websites, distribute malicious files, send deceptive emails, and mislead unsuspecting users, all while evading detection and takedown efforts.
As India’s digital economy grows and more citizens, businesses, and public services move online, it is crucial to understand this hidden layer of cybercrime infrastructure. Greater awareness among individuals, enterprises, and policymakers is essential to strengthen defences against fraud, protect users from harm, and build trust in the digital ecosystem.
What Are Disposable Domains?
A disposable domain is a website domain that is registered to be used temporarily, usually for hours or days, typically to evade detection or accountability.
These domains are inexpensive, easy to obtain, and can be set up with minimal information. They are often bought in bulk through domain registrars that do not strictly verify ownership information, sometimes using stolen credit cards or cryptocurrencies to remain anonymous. They differ from legitimate temporary domains used for testing or development in one significant aspect, which is ‘purpose’. Cybercriminals use disposable domains to carry out malicious activities such as phishing, sextortion, malware distribution, fake e-commerce sites, spam email campaigns, and disinformation operations.
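One practical signal that a domain may be disposable is its age: scam domains are often registered only days before they are used. The sketch below is a minimal illustration assuming the third-party python-whois package; it looks up a domain's creation date and flags very young domains. The threshold and the test domain are illustrative assumptions.

```python
# Minimal sketch: flag recently registered ("young") domains, a common trait of disposable domains.
# Assumes the third-party python-whois package (pip install python-whois); threshold is illustrative.
from datetime import datetime, timezone
import whois  # python-whois

def domain_age_days(domain: str) -> float | None:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):          # some registries return multiple creation dates
        created = created[0]
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).total_seconds() / 86400

if __name__ == "__main__":
    age = domain_age_days("example.com")   # hypothetical target domain
    if age is not None and age < 30:
        print("Warning: domain registered less than 30 days ago")
    else:
        print("Domain age (days):", age)
```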
How Cybercriminals Utilise Disposable Domains
1. Phishing & Credential Stealing: Attackers register lookalike domains resembling legitimate websites (e.g., go0gle-login[.]com or sbi-verification[.]online) and trick victims into entering their login credentials. These domains stay active only long enough to deceive, then disappear (a minimal detection sketch follows this list).
2. Malware Distribution: Disposable domains are widely used for ransomware and spyware operations for hosting malicious files. Because the domains are temporary, threat intelligence systems tend to notice them too late.
3. Fake E-Commerce & Investment Scams: Cyber crooks clone legitimate e-commerce or investment sites, run ad campaigns, and trick victims into "purchasing" goods or investing in fraudulent schemes. The domain vanishes once the scam runs its course.
4. Spam and Botnets: Disposable domains assist in botnet command-and-control activities. They make it more difficult for defenders to block static IPs or trace the attacker's infrastructure.
5. Disinformation and Influence Campaigns: State-sponsored actors and coordinated troll networks use disposable domains to host fabricated news articles, fake government documents, and manipulated videos. When these sites are detected and taken down, they are quickly replaced with new domains, allowing the disinformation cycle to continue uninterrupted.
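As a rough illustration of how the lookalike domains mentioned in point 1 can be flagged automatically, the sketch below compares a candidate domain against a small watch-list of legitimate brand domains using edit distance and simple character substitutions (such as "0" for "o"). The watch-list, threshold, and test domains are illustrative assumptions, not a definitive detection method.

```python
# Minimal sketch: flag domains that closely resemble known brand domains (lookalike detection).
# Uses only the standard library; watch-list, threshold, and test domains are illustrative.
from difflib import SequenceMatcher

WATCHLIST = ["google.com", "sbi.co.in", "facebook.com"]                  # hypothetical brands to protect
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})  # common digit-for-letter swaps

def looks_like(candidate: str, threshold: float = 0.8) -> str | None:
    # Normalise obvious character substitutions and hyphen padding before comparing.
    core = candidate.lower().split("/")[0].translate(SUBSTITUTIONS).replace("-", "")
    for brand in WATCHLIST:
        brand_label = brand.split(".")[0]
        ratio = SequenceMatcher(None, core, brand_label).ratio()
        if brand_label in core or ratio >= threshold:
            return brand                    # candidate resembles this protected brand
    return None

if __name__ == "__main__":
    for d in ["go0gle-login.com", "sbi-verification.online", "cyberpeace.org"]:
        print(d, "->", looks_like(d) or "no match")
```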
Why Are They Hard to Stop?
Registering a domain is inexpensive and quick, often requiring no more than an email address and payment. The difficulty lies in the ease of domain registration and the absence of worldwide enforcement. Domain registrars differ in how stringently they enforce Know-Your-Customer (KYC) standards. ICANN (the Internet Corporation for Assigned Names and Numbers) has certain regulations in place, but enforcement is inconsistent. ICANN does require registrars to maintain accurate WHOIS information (the “Registrant Data Accuracy Policy”) and to act on abuse complaints. However, ICANN is not an enforcement agency: it oversees contracts with registrars but cannot directly police every registration. Cybercriminals exploit services such as:
- Privacy protection shields that conceal actual WHOIS information.
- Bulletproof hosting that evades takedown notices.
- Fast-flux DNS methods to rapidly alter IP addresses.
Additionally, the use of IDNs (Internationalised Domain Names) and homoglyph attacks enables attackers to register domains visually similar to legitimate ones (e.g., using Cyrillic characters to represent Latin ones).
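To illustrate the homoglyph problem described above, the short sketch below decodes an internationalised domain name and checks whether any label mixes Unicode scripts, for example Cyrillic letters masquerading as Latin ones. It uses only the Python standard library, and the sample domains are illustrative assumptions.

```python
# Minimal sketch: detect mixed-script (homoglyph) domain labels using the standard library.
# Sample domains are illustrative assumptions.
import unicodedata

def scripts_in(label: str) -> set[str]:
    scripts = set()
    for ch in label:
        if ch.isalpha():
            # Unicode character names start with the script name, e.g. "CYRILLIC SMALL LETTER A".
            scripts.add(unicodedata.name(ch, "UNKNOWN").split(" ")[0])
    return scripts

def is_suspicious(domain: str) -> bool:
    # Punycode labels ("xn--") decode to non-ASCII text; decode before inspection.
    if domain.startswith("xn--") or ".xn--" in domain:
        domain = domain.encode("ascii").decode("idna")
    for label in domain.split("."):
        if len(scripts_in(label)) > 1:      # e.g. Latin mixed with Cyrillic in one label
            return True
    return False

if __name__ == "__main__":
    print(is_suspicious("example.com"))     # False: single script throughout
    print(is_suspicious("pаypal.com"))      # True: the second "а" is Cyrillic
```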
Real-World Example: India and the Rise of Fake Investment Sites
India has witnessed a wave of financial scams connected with disposable domains. Hundreds of fake websites impersonating government loan schemes, banks, investment platforms, and crypto-exchanges were found on disposable domains such as gov-loans-apply[.]xyz, indiabonds-secure[.]top, or rbi-invest[.]store. Most of them placed paid advertisements on platforms such as Facebook or Google and harvested user information and payments, only to vanish within 48–72 hours. Victims had no avenue of proper recourse, and the authorities were left with a digital ghost trail.
How Disposable Domains Undermine Cybersecurity
- Bypass Blacklists: Constantly shifting dynamic domains evade static blacklists (a minimal blocklist-lookup sketch follows this list).
- Delay Attribution: Time is wasted pursuing non-existent owners or takedowns.
- Mass Targeting: One actor can register thousands of domains and attack at scale.
- Undermine Trust: Even regular users become targets when genuine sites are convincingly duplicated.
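The first point above is easier to see with a concrete check: defenders commonly query domain blocklists over DNS, and a freshly registered disposable domain will often not be listed yet. The sketch below assumes the third-party dnspython package and a public domain blocklist zone (Spamhaus DBL is used here as an assumed example); the blocklist zone and the test domain are illustrative, and real deployments should follow the blocklist operator's usage terms.

```python
# Minimal sketch: query a DNS-based domain blocklist (DNSBL) for a suspicious domain.
# Assumes the third-party dnspython package (pip install dnspython); the blocklist zone
# and the test domain are illustrative assumptions.
import dns.exception
import dns.resolver

BLOCKLIST_ZONE = "dbl.spamhaus.org"        # assumed public domain blocklist zone

def is_listed(domain: str) -> bool:
    query = f"{domain}.{BLOCKLIST_ZONE}"
    try:
        dns.resolver.resolve(query, "A")   # any answer means the domain is listed
        return True
    except dns.resolver.NXDOMAIN:
        return False                       # not listed, or too new to be listed yet
    except dns.resolver.NoAnswer:
        return False
    except dns.exception.DNSException:
        return False                       # treat resolver errors as "unknown / not listed"

if __name__ == "__main__":
    print(is_listed("example.com"))        # expected False for a benign domain
```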
Recommendations Addressing Legal and Policy Gaps in India
1. There is a need to establish a formal coordination mechanism between domain registrars and national CERTs such as CERT-In to enable effective communication and timely response to domain-based threats.
2. There is a need to strengthen the investigative and enforcement capabilities of law enforcement agencies through dedicated resources, training, and technical support to effectively tackle domain-based scams.
3. There is a need to leverage the provisions of the Digital Personal Data Protection Act, 2023 to take action against phishing websites and malicious domains that collect personal data without consent.
4. There is a need to draft and implement specific regulations or guidelines to address the misuse of digital infrastructure, particularly disposable and fraudulent domains, and close existing regulatory gaps.
What Can Be Done: CyberPeace View
1. Stronger KYC for Domain Registrations: Registrars selling domains to Indian users or based in India should conduct verified KYC processes, with legal repercussions for carelessness.
2. Real-Time Domain Blacklists: CERT-In, along with ISPs and hosting companies, should operate and enforce a real-time blacklist of known scam domains.
3. Public Reporting Tools: Observers or victims should be able to report suspicious domains through a simple interface (tied to cybercrime.gov.in).
4. Collaboration with Tech Platforms: Social media services and online ad platforms should filter out ads associated with disposable or spurious domains and report abuse data to CERT-In.
5. User Awareness: Netizens should be educated to check URLs thoroughly, avoid clicking on unsolicited links, and verify the authenticity of websites.
Conclusion
Disposable domains have silently become the foundation of contemporary cybercrime. They are inexpensive, highly anonymous, and short-lived, which makes them a favoured weapon for cybercriminals ranging from solo spammers to nation-state operators. In an increasingly connected India, where internet penetration is high, this poses an expanding threat to economic security, public confidence, and national resilience. Combating this problem will require a combination of technical defences, policy changes, public-private alliances, and end-user sensitisation. As India builds a Cyber Secure Bharat, monitoring and addressing disposable domain abuse must be an utmost priority.
References
- https://www.bitcot.com/disposable-domains
- https://atdata.com/blog/evolution-of-email-fraud-rise-of-hyper-disposable-domains/
- https://www.cyfirma.com/research/scamonomics-the-dark-side-of-stock-crypto-investments-in-india/
- https://knowledgebase.constantcontact.com/lead-gen-crm/articles/KnowledgeBase/50330-Understanding-Blocked-Forbidden-and-Disposable-Domains?lang=en_US
- https://www.meity.gov.in/
- https://intel471.com/blog/bulletproof-hosting-fast-flux-dns-double-flux-vps