#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online claims that attendees chanted "India India" as Ohio Senator J.D. Vance greeted them at the Republican National Convention (RNC). This claim is false. The CyberPeace Research team’s investigation found that the video was digitally altered to add the chanting. The unaltered footage, published by The Wall Street Journal and corroborated by the YouTube channel of Forbes Breaking News, features different music playing as J.D. Vance and his wife, Usha Vance, greeted those present at the gathering. The claim that participants chanted "India India" is therefore false.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we performed a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC." At timestamp 0:49, no "India-India" chants can be heard, whereas they are clearly audible in the viral video.
We also found the footage on the YouTube channel of Forbes Breaking News. At timestamp 3:00:58, the same clip as the viral video appears, but no "India-India" chant can be heard.

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features different music without any such chants. Therefore, the claim is false and misleading.
Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading
Related Blogs
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images appeared on the dark web. The UK’s National Crime Agency records 800 monthly arrests for online child threats and estimates 840,000 adults as potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will be a part of the Crime and Policing Bill when it comes to parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report

Introduction
The Pahalgam terror attack, which took place on April 22, 2025, was a tragic incident that shook the nation. The National Investigation Agency (NIA) formally took over the Pahalgam terrorist attack case on Sunday, April 27, 2025. Following India's strikes on Pakistan, tensions between the two countries have heightened, leading to concerns about potential escalation, including the risk of cyber attacks and the spread of misinformation that could further complicate the situation. It is crucial for corporations, critical sectors, and all netizens in India to stay proactive and vigilant against cyber attacks, while also being cautious of the risks of misinformation. This includes protecting themselves from being affected and avoiding the inadvertent or deliberate spread of false information.
Be Careful with the Information You Consume and Share
It is crucial to note that the Press Information Bureau (PIB) has alerted citizens to stay cautious of fake narratives being circulated by Pakistani handles. Through an official fact check, PIB debunked several misleading claims aimed at undermining India’s internal stability and security forces. Citizens are urged to verify any suspicious content via PIB Fact Check before sharing it further. As social media becomes a hub for viral content, netizens must be cautious about the information they consume and share. Misleading claims, old videos, and other misinformation flood these platforms, and spreading unverified content can have severe consequences.
CyberPeace Recommends Following Crucial Cyber Safety Tips to Stay Vigilant Against Potential Digital Threats:
- Do not open/download any video file you receive in social media groups or from unknown sources.
- As per several media reports, a video file named "Dance of the Hilary" is being circulated, which may be intended for a cyber attack on India. Please refrain from clicking, downloading, or sharing any such file. Additionally, there are reports of suspicious files circulating on WhatsApp, including tasksche.exe, OperationSindoor.ppt, and OperationSindhu.pptx. Do not download or open any of these files, as they may pose a serious cyber threat.
- To receive accurate alerts, enable government notifications on your iPhone. Go to Settings > Notifications, scroll down to Government Alerts, and make sure all the toggles under Government Alerts are turned on. This allows your device to display timely, critical notifications from government agencies and keep you informed and safe.
- Turn off automatic media download in WhatsApp to reduce the risk of downloading potentially harmful files.
- To protect your privacy, disable location services on apps like WhatsApp, Instagram, Snapchat, and X unless absolutely necessary.
- Refrain from sharing sensitive information like government data, confidential details, or personal records on unsecured devices or networks.
- To avoid misinformation and manipulation during a conflict, it is crucial to verify and cross-check news before sharing it with anyone. Stay updated with official news sources, and be cautious while sharing information.
Conclusion
In times of heightened tensions, all of us need to stay vigilant, protect our digital spaces, and verify the information we encounter. Together, we can safeguard ourselves from cyber threats and misinformation, ensuring the safety, stability, and digital security of our nation. As proud citizens, let us unite to protect both our physical and digital well-being.
References
- https://www.thehindu.com/news/national/pakistan-has-unleashed-propaganda-machine-in-response-to-successful-operation-sindoor-ib-ministry/article69549084.ece
- https://sambadenglish.com/national-international-news/india/centre-asks-people-to-stay-alert-against-misinformation-in-social-media-9048169
- https://www.youtube.com/watch?v=gLHo_Vd1_H0&t=19s

Introduction
In the ever-evolving world of technological innovation, a new chapter is being inscribed by the bold visionaries at Figure AI, a startup that is not merely capitalising on the artificial intelligence craze but seeking to crest its very pinnacle. With the recent influx of a staggering $675 million in funding, this Sunnyvale, California-based enterprise has captured the imagination of industry giants and venture capitalists alike, all betting on a future where humanoid robots transcend the realm of science fiction to become an integral part of our daily lives.
The narrative of Figure AI's ascent is punctuated by the names of tech luminaries and corporate giants. Jeff Bezos, through his firm Explore Investments LLC, has infused a hefty $100 million into the venture. Microsoft, not to be outdone, has contributed a cool $95 million. Nvidia and an Amazon-affiliated fund have each bestowed $50 million upon Figure AI's ambitious endeavours. This surge of capital is a testament to the potential seen in the company's mission to develop general-purpose humanoid robots that promise to revolutionise industries and redefine human labour.
The Catalyst for Change
This investment craze can be traced back to the emergence of OpenAI's ChatGPT, a chatbot that caught the public eye in November 2022. Its success has not only ushered in a new era for AI but has also sparked a race among investors eager to stake their claim in startups determined to outshine their more established counterparts. OpenAI itself, once mulling over the acquisition of Figure AI, has now joined the ranks of its benefactors with a $5 million investment.
The roster of backers reads like a who's who of the tech and venture capital world. Intel's venture capital arm, LG Innotek, Samsung's investment group, Parkway Venture Capital, Align Ventures, ARK Venture Fund, Aliya Capital Partners, and Tamarack have all cast their lot with Figure AI, signalling a broad consensus on the startup's potential to disrupt and innovate.
Yet, when probed for insights, these major players—Amazon, Nvidia, Microsoft, and Intel—have maintained a Sphinx-like silence, while Figure AI and other entities mentioned in the report have refrained from immediate responses to inquiries. This veil of secrecy only adds to the intrigue surrounding the company's prospects and the transformative impact its technology may have on society.
Need For AI Robots
Figure AI's robots are not mere assemblages of metal and circuitry; they are envisioned as versatile beings capable of navigating a multitude of environments and executing a diverse array of tasks. From the aisles of warehouses to the bustling corridors of retail spaces, these humanoid automatons are being designed to fill the void of millions of jobs projected to remain vacant due to a shrinking human labour force.
The company's long-term mission statement is as audacious as it is altruistic: 'to develop general-purpose humanoids that make a positive impact on humanity and create a better life for future generations.' This noble pursuit is not just about engineering efficiency; it is about reshaping the very fabric of work, liberating humans from hazardous and menial tasks, and propelling us towards a future where our lives are enriched with purpose and fulfilment.
Conclusion
As we stand on the cusp of a new digital world, the strides of Figure AI serve as a beacon, illuminating the path towards machine and human symbiosis. The investment frenzy that has enveloped the company is a clarion call to all dreamers, pragmatists and innovators alike that the age of humanoid helpers is upon us, and the possibilities are as endless as our collective imagination.
Figure AI is forging a future where robots walk among us, not as novelties or overlords but as partners in forging a world where technology and humanity work together to unlock untold potential. The story of Figure AI is not just one of investment and innovation; it is a narrative of hope, a testament to the indomitable spirit of human ingenuity, and a preview of the wondrous epoch that lies just beyond the horizon.
References
- https://cybernews.com/tech/openai-bezos-nvidia-fund-robot-startup-figure-ai/
- https://www.thedailystar.net/business/news/bezos-nvidia-join-openai-funding-humanoid-robot-startup-3551476
- https://www.bloomberg.com/news/articles/2024-02-23/bezos-nvidia-join-openai-microsoft-in-funding-humanoid-robot-startup-figure-ai
- https://economictimes.indiatimes.com/tech/technology/bezos-nvidia-join-openai-in-funding-humanoid-robot-startup-report/articleshow/107967102.cms?from=mdr