#FactCheck: Viral Video of Chandra Arya Speaking Kannada Unrelated to Canadian PM Nomination
Executive Summary:
Recently, our team encountered a post on X (formerly Twitter) claiming that Chandra Arya, a Canadian Member of Parliament, was speaking in Kannada in a video that surfaced after he filed his nomination for the position of Prime Minister of Canada. The video has spread widely and generated considerable discussion online. In this report, we examine the legitimacy of the claim by analysing the video's content and timing and by verifying information against reliable sources.

Claim:
The viral video claims Chandra Arya spoke Kannada after filing his nomination for the Canadian Prime Minister position in 2025, after the resignation of Justin Trudeau.

Fact Check:
Upon receiving the video, we performed a reverse image search on key frames extracted from it and found that the video has no connection to any nomination for the Canadian Prime Minister's position. Instead, it is an old video of a speech Mr. Arya delivered in the Canadian Parliament in 2022. A post from Mr. Arya's own X (Twitter) handle, dated 12:19 AM, May 20, 2022, further confirms that the speech has no link to any candidacy for the Prime Minister's post.
Further, our research led us to a YouTube video on the verified channel of Hindustan Times, dated 20 May 2022, with the caption:
“India-born Canadian MP Chandra Arya is winning hearts online after a video of his speech at the Canadian Parliament in Kannada went viral. Arya delivered a speech in his mother tongue - Kannada. Arya, who represents the electoral district of Nepean, Ontario, in the House of Commons, the lower house of Canada, tweeted a video of his address, saying Kannada is a beautiful language spoken by about five crore people. He said that this is the first time when Kannada is spoken in any Parliament outside India. Netizens including politicians have lauded Arya for the video.”
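The reverse image search used above typically works by computing a perceptual hash of each key frame and matching it against indexed images. A minimal sketch of the underlying idea, using an average hash over toy 8x8 "frames" (the frames and values here are hypothetical, purely for illustration):

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image (list of 64 values 0-255).
    Reverse-image-search engines match near-duplicate frames via similar
    perceptual hashes; this is only a minimal illustration of the idea."""
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy "frames": a gradient, the same gradient slightly brightened, and its negative.
frame_a = [(i * 4) % 256 for i in range(64)]
frame_b = [min(255, p + 3) for p in frame_a]   # near-duplicate of frame_a
frame_c = [255 - p for p in frame_a]           # completely different image

print(hamming(average_hash(frame_a), average_hash(frame_b)))  # → 0 (match)
print(hamming(average_hash(frame_a), average_hash(frame_c)))  # → 64 (no match)
```

In practice, a search engine compares such hashes against billions of indexed images and returns anything below a small Hamming-distance threshold, which is how an old 2022 frame resurfaces even when reposted with new framing.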

Conclusion:
The viral video claiming that Chandra Arya spoke in Kannada after filing his nomination for the Canadian Prime Minister position in 2025 is completely false. The video, dated May 2022, shows Chandra Arya delivering an address in Kannada in the Canadian Parliament, unrelated to any political nominations or events concerning the Prime Minister's post. This incident highlights the need for thorough fact-checking and verifying information from credible sources before sharing.
- Claim: Misleading Claim About Chandra Arya’s PM Candidacy
- Claimed on: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain credibility, it has become important to distinguish genuine online content from manipulated or fabricated material that is widely shared on social media platforms. AI-generated voice clones and fake videos are proliferating on the Internet, produced by sophisticated AI algorithms that manipulate or synthesise multimedia content such as audio, video and images. As a result, it has become increasingly difficult to tell genuine, altered and fake content apart. McAfee Corp., a global leader in online protection, has recently launched an AI-powered deepfake audio detection technology under Project "Mockingbird", intended to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation. McAfee announced Project Mockingbird at the Consumer Electronics Show 2024.
What is voice cloning?
Voice cloning uses deepfake technology to produce a synthetic audio track that closely resembles a real person's voice but is, in actuality, entirely fabricated.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used in everything from smart devices to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning to commit cyber fraud. AI can manipulate an individual's lip movements so they appear to say something they never said; it can enable identity fraud by impersonating someone during remote verification with a bank; and it makes traditional hacking more convenient. Cybercriminals' misuse of such advanced technologies has increased both the speed and the volume of cyber attacks in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere.
- Context-Based Recognition: The model assesses audio components within the overall context in which they appear. Evaluating this surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Analysis: Behaviour-based detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human speech. Examining speech patterns, tempo and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human communication. By comparing samples against an extensive library of legitimate human speech features, the technology differentiates real voices from AI-synthesised ones.
- Accuracy Outcomes: McAfee Labs' deepfake voice detection solution, which boasts a success rate of around ninety per cent, combines the behavioural, context-based and classification models above. By examining audio in the wider video context and analysing speech characteristics such as intonation, rhythm and pronunciation, the system can identify discrepancies that may signal AI-produced audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technology protects against a range of harmful uses, such as celebrity voice-cloning scams and misleading content on important subjects.
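McAfee has not published its model internals, but the behavioural analysis described above can be illustrated with a deliberately simple sketch: flag audio whose frame-to-frame energy contour is unnaturally flat, since human speech normally shows large energy swings. Everything here (the signals, frame size and threshold) is a hypothetical toy, not the real detector:

```python
import math
import statistics

def frame_energies(samples, frame_size=256):
    """Split a signal into fixed-size frames and return per-frame RMS energy."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size, frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def looks_synthetic(samples, variance_threshold=1e-3):
    """Flag audio whose energy contour is unnaturally steady.

    Human speech shows big energy swings (pauses, stressed syllables);
    a near-constant contour is one crude hint of synthesis. The threshold
    is arbitrary and for illustration only.
    """
    return statistics.variance(frame_energies(samples)) < variance_threshold

# Toy signals: a perfectly steady tone vs. a tone with speech-like bursts.
steady = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
bursty = [s * (1.0 if (t // 1000) % 2 else 0.1) for t, s in enumerate(steady)]

print(looks_synthetic(steady))  # True: flat energy contour gets flagged
print(looks_synthetic(bursty))  # False: natural-looking variation passes
```

A production detector would combine many such behavioural features with learned classification models and the surrounding video context, as the bullets above describe, rather than relying on any single hand-set threshold.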
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about AI's appropriate role and boundaries. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to counter cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724

Introduction
The Government of India has initiated a cybercrime crackdown that, as of February 2025, has resulted in the blocking of 781,000 SIM cards and 208,469 IMEI (International Mobile Equipment Identity) numbers associated with digital fraud. The data was released in a written response by the Union Minister of State for Home Affairs, Bandi Sanjay Kumar, to a query presented in the Lok Sabha. The figure marks a significant jump from the 669,000 SIM cards blocked the previous year, showing that efforts to combat digital fraud are in full swing as cases continue to rise. For its part, the Indian Cyber Crime Coordination Centre (I4C) is proactively blocking suspicious accounts on other platforms, including 83,668 WhatsApp accounts and 3,962 Skype IDs, helping to eliminate identified threat actors.
Increasing Digital Fraud And The Current Combative Measures
According to data tabled by the Ministry of Finance in the Rajya Sabha, the first 10 months of financial year 2024-25 recorded around 2.4 million incidents of digital financial fraud, amounting to Rs. 4,245 crore. Beyond the evident financial loss, such incidents also take an emotional toll: people are targeted regardless of background and age, leaving everyone equally vulnerable. To address this growing problem, various government departments have taken dedicated measures to combat and reduce such incidents. Some notable initiatives are as follows:
- The Citizen Financial Cyber Fraud Reporting and Management System: This includes reporting cybercrimes through the nationwide toll-free number 1930 and registration on the National Cyber Crime Reporting Portal. A victim of digital fraud can call the toll-free number and describe the incident, which assists the investigation; after reporting, the complainant receives a generated login ID/acknowledgement number for further reference.
- International Incoming Spoofed Calls Prevention System: A mechanism developed to counter fraudulent calls that appear to originate from within India but are actually made from international locations. Such calls manipulate the Calling Line Identity (CLI) to deceive recipients and carry out financial crimes such as "digital arrests". The Department of Telecommunications (DoT) is coordinating with private telecommunication service providers (TSPs), who are being encouraged to screen traffic on their ILD (International Long-Distance) networks; Airtel has recently begun labelling such numbers as international calls.
- Chakshu Facility on the Sanchar Saathi platform: A citizen-centric initiative created by the Department of Telecommunications to empower mobile subscribers. It focuses on reporting unsolicited commercial communication (spam) and suspected fraudulent communication (https://sancharsaathi.gov.in/).
- Aadhaar-based verification of SIM cards: A directive issued by the Prime Minister's Office to the Department of Telecommunications mandates Aadhaar-based biometric verification for the issuance of new SIM cards, in an effort to prevent fraud and cybercrime through mobile connections obtained using fake documents. Legal action, including FIRs, is also being taken against non-compliant retailers.
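The spoofed-call prevention system described above boils down to a consistency check: a caller ID claiming an Indian number should not be arriving over an international gateway. A hypothetical sketch of that rule (the route labels and phone numbers are made up for illustration; real TSP systems involve far more signalling detail):

```python
def flag_spoofed_call(cli_number: str, ingress_route: str) -> bool:
    """Flag a call whose Calling Line Identity (CLI) claims a domestic
    Indian number (+91) but which arrived over an international
    long-distance (ILD) gateway -- a strong sign the CLI is spoofed."""
    claims_domestic = cli_number.startswith("+91")
    return claims_domestic and ingress_route == "ILD"

# A "+91" CLI arriving on the ILD network is inconsistent, so it is flagged.
print(flag_spoofed_call("+911234567890", "ILD"))       # → True (block or label)
print(flag_spoofed_call("+911234567890", "domestic"))  # → False (genuine local call)
print(flag_spoofed_call("+441234567890", "ILD"))       # → False (honest foreign CLI)
```

Airtel's practice of labelling such traffic as "international" follows the same logic: rather than dropping the call outright, the network surfaces the inconsistency to the recipient.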
On the public's part, awareness of the following could help people deal with such situations:
- Recognising the tell-tale signs of a criminal's modus operandi: General awareness of how such crimes unfold helps people prepare for and respond to malicious scams. Warning signs include pressuring the victim into immediate action, insistence on video calls, and threats of arrest for non-compliance. It is also important to note that no official authority, in any legal capacity, can carry out a "digital" or online arrest.
- Knowing the support channels: Awareness regarding reporting mechanisms and cyber safety hygiene tips can help in building cyber resilience amongst netizens.
Conclusion
As cybercrooks continue to find new ways of duping people out of their hard-earned money, both the government and netizens must work to combat such crimes and raise awareness at both the systemic and public levels. Rapid developments in AI, deepfakes and related technologies often make it difficult for the public to assess the veracity of a source, leaving them susceptible to such crime. A cautious yet proactive approach is the need of the hour.
References
- https://mobileidworld.com/india-blocks-781000-sim-cards-in-major-cybercrime-crackdown/
- https://www.storyboard18.com/how-it-works/over-83k-whatsapp-accounts-used-for-digital-arrest-blocked-home-ministry-60292.htm
- https://www.business-standard.com/finance/news/digital-financial-frauds-touch-rs-4-245-crore-in-the-apr-jan-period-of-fy25-125032001214_1.html
- https://www.business-standard.com/india-news/govt-blocked-781k-sims-3k-skype-ids-83k-whatsapp-accounts-till-feb-125032500965_1.html
- https://pib.gov.in/PressReleasePage.aspx?PRID=2042130
- https://mobileidworld.com/india-mandates-aadhaar-biometric-verification-for-new-sim-cards-to-combat-fraud/
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2067113

Introduction:
The G7 Summit is an international forum comprising France, the United States, the United Kingdom, Germany, Japan, Italy, Canada and the European Union (EU). The annual meeting was hosted by Japan in Hiroshima in May 2023, with Artificial Intelligence (AI) as its major theme. Key takeaways show that leaders focused on accelerating the adoption of AI for beneficial use cases across the economy and government, and on improving governance structures to mitigate AI's potential risks.
Need for fair and responsible use of AI:
The G7 recognises the need to work together to ensure the responsible and fair use of AI and to help establish technical standards for it. Member countries agreed to foster an open and enabling environment for the development of AI technologies, and emphasised that AI regulation should be based on democratic values. The summit called for the responsible use of AI: ministers discussed the risks posed by AI programs such as ChatGPT and drew up an action plan for promoting responsible use, with human beings leading the effort.
Further, ministers from the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK, the US and the EU) met virtually on 7 September 2023 and committed to creating 'international guiding principles applicable for all AI actors' and a code of conduct for organisations developing 'advanced' AI systems.
What is the Hiroshima AI Process (HAP)?
The Hiroshima AI Process (HAP) is the G7's effort to chart a way forward on regulating AI, aiming to establish trustworthy AI technical standards at the international level. The G7 agreed to create a ministerial forum to promote the fair use of AI, and the HAP provides a venue for international discussions on inclusive AI governance and interoperability, working towards a common vision and goal of trustworthy AI at the global level.
The HAP will operate in close connection with organisations including the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
Initiated at the annual G7 Summit in Hiroshima, Japan, the HAP is a significant step towards regulating AI and is expected to conclude by December 2023.
G7 leaders emphasised fostering an environment in which trustworthy AI systems are designed, developed and deployed for the common good worldwide. They advocated international standards and interoperable tools for trustworthy AI that enable innovation, through a comprehensive policy framework including overall guiding principles for all actors in the AI ecosystem.
Stressing upon fair use of advanced technologies:
G7 leaders also discussed the impact and misuse of generative AI, stressing the risks of misinformation and disinformation, since generative models are capable of creating synthetic content such as deepfakes. In particular, they noted that the next generation of interactive generative media will leverage targeted influence content that is highly personalised, localised and conversational.
In the digital landscape, technologies such as generative Artificial Intelligence (AI), deepfakes and machine learning are advancing rapidly. They offer users convenience in performing many tasks and can assist individuals and business entities alike. But because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities; regulatory mechanisms at the global level would therefore help ensure and advocate the ethical, reasonable and fair use of such advanced technologies.
Conclusion:
The G7 summit held in May 2023 advanced international discussions on inclusive AI governance and interoperability towards a common vision and goal of trustworthy AI, in line with shared democratic values. AI governance has become a global issue: countries around the world are coming forward to advocate the responsible and fair use of AI and to influence global AI governance and standards. It is important to establish a regulatory framework that defines AI capabilities, identifies areas prone to misuse, and sets reasonable technical standards while fostering innovation, prioritising data privacy, integrity and security as these technologies evolve.
References:
- https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
- https://www.g7hiroshima.go.jp/en/summit/about/
- https://www.drishtiias.com/daily-updates/daily-news-analysis/the-hiroshima-ai-process-for-global-ai-governance
- https://www.businesstoday.in/technology/news/story/hiroshima-ai-process-g7-calls-for-adoption-of-international-technical-standards-for-ai-382121-2023-05-20