# FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project"
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claim:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
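The keyframe step described above can be sketched in code. The snippet below is an illustrative sketch only, not the team's actual tooling: it picks evenly spaced timestamps from a clip and builds an ffmpeg command to export each frame as a still image, which could then be uploaded to a reverse image search such as Google Lens. The 2.5-second interval, the filenames, and the use of ffmpeg are all assumptions for illustration.

```python
# Sketch: choose evenly spaced timestamps from a video so each one can be
# exported as a still frame and fed to a reverse image search.
# Interval, filenames, and the ffmpeg invocation are illustrative assumptions.

def sample_timestamps(duration_s: float, interval_s: float = 2.5) -> list[float]:
    """Return timestamps (in seconds) at which to grab keyframes."""
    if duration_s <= 0 or interval_s <= 0:
        return []
    t, out = 0.0, []
    while t < duration_s:
        out.append(round(t, 2))
        t += interval_s
    return out

def ffmpeg_cmd(video: str, ts: float, out_png: str) -> list[str]:
    """Build an ffmpeg command that exports a single frame at `ts` seconds."""
    return ["ffmpeg", "-ss", str(ts), "-i", video, "-frames:v", "1", out_png]

if __name__ == "__main__":
    # Hypothetical 10-second clip; print the extraction commands.
    for ts in sample_timestamps(10.0):
        print(" ".join(ffmpeg_cmd("viral_clip.mp4", ts, f"frame_{ts}.png")))
```

Each exported frame can then be checked individually, since reverse image search operates on stills rather than video.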

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using tools such as Google Lens and AI detection software confirms that the video was manipulated using AI technology, and no official source corroborates the claim. Thus, the CyberPeace Research Team confirms that the video was manipulated, making the claim false and misleading.
- Claim: A video viral on social media shows Justin Trudeau promoting an investment project.
- Claimed on: Facebook
- Fact Check: False & Misleading

Introduction
The constantly changing technological world has brought an age of unprecedented problems, and the misuse of deepfake technology has become a cause for concern that the Indian judiciary has also taken up. The Supreme Court has expressed concern about the consequences of this rapidly developing technology, citing issues ranging from security hazards and privacy violations to the spread of disinformation. The misuse of deepfake technology is particularly dangerous because deepfakes are almost identical to the real thing and can fool even the sharpest eye.
Supreme Court Judge Expresses Concerns: A Complex Issue
During a recent speech, Supreme Court Justice Hima Kohli emphasized the various issues that deepfakes present. She conveyed grave concerns about the possibility of invasions of privacy, the dissemination of false information, and the emergence of security threats. The ability of deepfakes to be created so convincingly that they seem to come from reliable sources is especially concerning as it increases the potential harm that may be done by misleading information.
Gender-Based Harassment Enhanced
Justice Kohli noted that in this internet era there is a concerning chance that gender-based harassment will become more severe. She pointed out that internet platforms can develop into epicentres for the rapid spread of false information by anonymous offenders who act freely and with worrying impunity. Because virtual harassment is invisible, it may be difficult to lessen the negative effects of toxic online posts. In response, she advocated developing a comprehensive policy framework that adapts current legal frameworks, such as laws prohibiting online sexual harassment, to adequately address the challenges brought on by technological breakthroughs.
Judicial Stance on Regulating Deepfake Content
In a separate development, the Delhi High Court voiced concerns about the misuse of deepfakes and exercised judicial intervention to limit the use of artificial intelligence (AI)-generated deepfake content. A division bench highlighted the intricacy of the matter and proposed that the government, with its wider outlook, may be better placed to handle the situation and arrive at a fair resolution. This position reflects the court's acknowledgement of the technology's global and borderless character and highlights the necessity for an all-encompassing strategy.
PIL on Deepfake
In light of these worries, an advocate from Delhi has taken it upon himself to address the unchecked use of AI, with a particular emphasis on deepfake material. His Public Interest Litigation (PIL), filed in the Delhi High Court, emphasises the necessity of either strict limits on AI or an outright prohibition should regulatory measures not be taken. At the centre of the case is the need to discern real from fake content. The advocate suggests using distinguishable indicators, such as watermarks, to identify AI-generated work, reiterating the demand for transparency and accountability in the digital sphere.
The Way Ahead:
Finding a Balance
- The authorities must strike a careful balance between protecting privacy, promoting innovation, and safeguarding individual rights as they negotiate the complex world of deepfakes. The Delhi High Court's cautious stance and Justice Kohli's concerns highlight the necessity for a nuanced response that takes into account the complexity of deepfake technology.
- Because information can be manipulated with increasing sophistication in this digital era, the courts play a critical role in preserving the integrity of truth and shielding people from the potential dangers of misleading technology. These legal actions will surely influence how the Indian judiciary and legislature respond to deepfakes and establish guidelines for the regulation of AI in the country. The legal environment needs to evolve alongside technology so that innovation and accountability can coexist.
Collaborative Frameworks:
- Misuse of deepfake technology poses an international problem that cuts beyond national boundaries. International collaborative frameworks might make it easier to share technical innovations, legal insights, and best practices. A coordinated response to this digital threat may be ensured by starting a worldwide conversation on deepfake regulation.
Legislative Flexibility:
- Given the speed at which technology is advancing, the legislative system must continue to adapt. It will be required to introduce new legislation expressly addressing developing technology and to regularly evaluate and update current laws. This guarantees that the judicial system can adapt to the changing difficulties brought forth by the misuse of deepfakes.
AI Development Ethics:
- Promoting moral behaviour in AI development is crucial. Tech businesses should abide by moral or ethical standards that place a premium on user privacy, responsibility, and openness. As a preventive strategy, ethical AI practices can lessen the possibility that AI technology will be misused for malevolent purposes.
Government-Industry Cooperation:
- It is essential that the public and commercial sectors work closely together. Governments and IT corporations should collaborate to develop and implement legislation. A thorough and equitable approach to the regulation of deepfakes may be ensured by establishing regulatory organizations with representation from both sectors.
Conclusion
A comprehensive strategy integrating technical, legal, and social interventions is necessary to navigate the path ahead. Governments, IT corporations, the courts, and the general public must all actively participate in the collective effort to combat the misuse of deepfakes, which goes beyond legal measures alone. By encouraging a shared commitment to tackling the issues raised by deepfakes, we can create a future where the digital ecosystem is both safe and innovative. The Government is on its way to introducing dedicated legislation to tackle deepfakes, following its recently issued advisory on misinformation and deepfakes.
Over the last decade, battlefields have shifted from mountains, deserts, jungles, seas, and skies into the invisible networks of code and cables. Cyberwarfare is no longer a distant possibility but today's reality. The cyberattacks on Estonia in 2007, the crippling of Iran's nuclear program by the Stuxnet virus, and the SolarWinds and Colonial Pipeline breaches of recent years have proved one thing: nations can now paralyse economies and infrastructure without firing a bullet. Cyber operations often fall below the traditional threshold of war, allowing aggressors to exploit a grey zone where full-scale retaliation is unlikely.
At the same time, this ambiguity has also given rise to the concept of cyber deterrence. It is a concept that has been borrowed from the nuclear strategies during the Cold War era and has been adapted to the digital age. At the core, cyber deterrence seeks to alter the adversary’s cost-benefit calculation that makes attacks either too costly or pointless to pursue. While power blocs like the US, Russia, and China continue to build up their cyber arsenals, smaller nations can hold unique advantages, most importantly in terms of their resilience, if not firepower.
Understanding the concept of Cyber Deterrence
Deterrence, in its classic sense, is about preventing action through the fear of consequences. It usually manifests through four mechanisms:
- Punishment: threatening to impose costs on attackers, whether through counter-attacks, economic sanctions, or even conventional force.
- Denial: making attacks futile through hardened defences, ensuring systems can resist, recover, and continue to function.
- Entanglement: leveraging interdependence in trade, finance, and technology so that attacks become costly for attackers as well as defenders.
- Norms: stigmatizing reckless cyber actions and imposing reputational costs that can exceed any gains.
However, great powers have typically emphasized punishment, employing offensive cyber arsenals to exert psychological pressure on rivals and showcase their power. Yet in cyberspace, punishment has inherent flaws.
The Advantage of Asymmetry
For small states, a smaller geographical footprint can itself be a benefit, offering three advantages:
- With fewer critical infrastructures to protect, resources can be concentrated. For example, Denmark, with a modest cyber budget of roughly $40 million, is considered among the most cyber-secure nations, even as the US spends billions.
- Smaller bureaucracies enable faster response. Singapore's centralised cyber command allows rapid coordination between the government and the private sector.
- Smaller populations make it easier to foster public awareness of and participation in cyber hygiene, amplifying national resilience.
In short, defending a small digital fortress can be easier than securing a sprawling empire of interconnected systems.
Lessons from Estonia and Singapore
The 2007 crisis in Estonia remains a case study in cyber resilience. Although its government, banking, and media systems were knocked offline, Estonia emerged stronger by investing heavily in cyber defence. It went on to host NATO's Cooperative Cyber Defence Centre of Excellence and to build one of the world's most resilient e-governance models.
Singapore is another case: recognising its vulnerability as a global financial hub, it has adopted a defence-centric deterrence strategy focused on redundancy, cyber education, and international partnership rather than offensive capacity. These approaches show that deterrence is not always about scaring attackers with retaliation; it is about making attacks meaningless.
Cyber deterrence and Asymmetric Warfare
Cyber conflict is understood through the lens of asymmetric warfare, where weaker actors use unconventional means to exploit stronger foes. Just as guerrillas outmanoeuvred superpowers in Vietnam and Afghanistan, small states can frustrate cyber giants by turning their size into a shield. The essence of asymmetric cyber defence lies in three principles:
- Resilience over retaliation: rapid recovery neutralises attackers' goals.
- Smart investment: focusing limited budgets on critical assets, not sprawling infrastructure.
- Leveraging norms: shaping international opinion to stigmatize aggressors and raise their reputational costs.
This transforms cyber deterrence from a game of escalation into a game of endurance, a domain where small states can excel.
Challenges remain as well: attribution problems persist; smaller nations still depend on foreign technology, which adversaries seek to exploit; and talent shortages plague small states as cyber professionals migrate abroad for lucrative jobs. Moreover, building deterrence through norms requires active multilateral cooperation, which not all small nations can sustain.
Conclusion
Cyberwarfare represents a new frontier of asymmetric conflict where size guarantees neither safety nor supremacy. Great powers may dominate offensive cyber arsenals, but small states have carved their own path to security by focusing on defence, resilience, and international collaboration. The examples of Singapore and Estonia demonstrate that a state's small size can be a hidden strength in a domain like cyberspace, allowing nimbleness, concentration of resources, and societal cohesion. In the long run, cyber deterrence for small states will rest not on fearsome retaliation but on making attacks futile and recovery inevitable.
References
- https://bluegoatcyber.com/blog/asymmetric-warfare/
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2268&context=jss
- https://www.linkedin.com/pulse/rising-tide-cyberwarfare-battle-between-superpowers-hussain/
- https://digitalcommons.odu.edu/cgi/viewcontent.cgi?article=1243&context=gpis_etds
- https://www.scirp.org/journal/paperinformation?paperid=141708

Introduction
Global cybersecurity spending is expected to exceed USD 210 billion in 2025, a ~10% increase from 2024 (Gartner). This is a result of an evolving and increasingly critical threat landscape, enabled by factors such as the proliferation of IoT devices, the adoption of cloud networks, and the growing size of the internet itself. Yet breaches, misuse, and resistance persist. In 2025, global attack pressure rose ~21% year-over-year (Q2 averages) (Check Point) and confirmed breaches climbed ~15% (Verizon DBIR). Rising investment in cybersecurity, in other words, may not be yielding proportionate reductions in risk. And while mechanisms to strengthen technical defences and regulatory frameworks are constantly evolving, the social element of trust, and how to embed it into cybersecurity systems, remains largely overlooked.
Human Error and Digital Trust (Individual Trust)
Human error is consistently recognised as the weakest link in cybersecurity. While campaigns focusing on phishing prevention, urging password updates, and promoting two-factor authentication (2FA) exist, relying solely on awareness measures to address human error in cyberspace is like putting a Band-Aid on a bullet wound. Rather, the problem needs to be examined through the lens of digital trust. As Chui (2022) notes, digital trust rests on security, dependability, integrity, and authenticity. These factors determine whether users comply with cybersecurity protocols. When people view rules as opaque, inconvenient, or imposed without accountability, they are more likely to cut corners, which creates vulnerabilities. Building digital trust therefore means shifting the focus from blaming people to improving design: embedding transparency, usability, and shared responsibility into a culture of cybersecurity so that users are incentivised to make secure choices.
Organisational Trust and Insider Threats (Institutional Trust)
At the organisational level, compliance with cybersecurity protocols is significantly tied to whether employees trust employers/platforms to safeguard their data and treat them with integrity. Insider threats, stemming from both malicious and non-malicious actors, account for nearly 60% of all corporate breaches (Verizon DBIR 2024). A lack of trust in leadership may cause employees to feel disengaged or even act maliciously. Further, a 2022 study by Harvard Business Review finds that adhering to cybersecurity protocols adds to employee workload. When they are perceived as hindering productivity, employees are more likely to intentionally violate these protocols. The stress of working under surveillance systems that feel cumbersome or unreasonable, especially when working remotely, also reduces employee trust and, hence, compliance.
Trust, Inequality, and Vulnerability (Structural Trust)
Cyberspace encompasses a social system of its own, involving patterned interactions and relationships between human beings, and it reproduces the social structures and resulting vulnerabilities of the physical world. Different sections of society therefore place varying levels of trust in digital systems. Women, rural, and marginalised groups often distrust existing digital security provisions more, and with good reason: they are targeted disproportionately by cyber attackers yet remain underprotected by systems designed with urban, male, and elite users in mind. This leads citizens to adopt workarounds like password sharing for “safety” and to disengage from cyber safety discourse, finding existing systems inaccessible or irrelevant to their realities. Cybersecurity governance that ignores these divides deepens exclusion and mistrust.
Laws and Compliances (Regulatory Trust)
Cybersecurity governance is operationalised through laws, rules, and guidelines. However, these may backfire when inadequately designed, reducing overall trust in governance mechanisms. For example, CERT-In's mandate to report breaches within six hours of “noticing” them has been criticised on the grounds that so steep a timeframe is insufficient to generate an effective breach analysis report. Further, the multiplicity of regulatory frameworks governing cross-border interactions can be costly and lead to compliance fatigue for organisations. Such factors can undermine organisational and user trust in regulation's ability to protect against cyber attacks, fuelling a box-ticking culture in cybersecurity.
Conclusion
Cybersecurity is addressed today primarily through code, firewalls, and compliance. But evidence suggests that technological and regulatory fixes, while essential, are insufficient to guarantee secure behaviour and resilient systems. Without trust in institutions, technologies, laws, or each other, cybersecurity governance will remain a cat-and-mouse game. Building a trust-based architecture requires mechanisms that improve accountability, reliability, and transparency; participatory design of security systems; and recognition of unequal vulnerabilities. Unless cybersecurity governance acknowledges that cyberspace is deeply social, investment may not prevent the harms it seeks to curb.
References
- https://www.gartner.com/en/newsroom/press-releases/2025-07-29
- https://blog.checkpoint.com/research/global-cyber-attacks-surge-21-in-q2-2025
- https://www.verizon.com/business/resources/reports/2024-dbir-executive-summary.pdf
- https://www.verizon.com/business/resources/reports/2025-dbir-executive-summary.pdf
- https://insights2techinfo.com/wp-content/uploads/2023/08/Building-Digital-Trust-Challenges-and-Strategies-in-Cybersecurity.pdf
- https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women
- https://www.upguard.com/blog/indias-6-hour-data-breach-reporting-rule