Fact Check: Old image of Hindu priest with Donald Trump at the White House goes viral as recent
Executive Summary:
Our team recently came across a post on X (formerly Twitter) in which a photo was widely shared with misleading captions claiming that a Hindu priest had performed a Vedic prayer in Washington after the recent elections. Our investigation found that the photo actually shows a ritual performed by a Hindu priest at the White House in May 2020 to pray for an end to the COVID-19 pandemic. Always verify claims before sharing.

Claim:
An image circulating after Donald Trump’s win in the US election shows Pujari Harish Brahmbhatt at the White House recently.

Fact Check:
Our analysis found that the image comes from an old post uploaded in May 2020. A reverse image search traced it to the sacred Vedic Shanti Path, or peace prayer, recited by a Hindu priest in the Rose Garden of the White House on the occasion of the National Day of Prayer Service, alongside other religious leaders, to pray for the health, safety and well-being of everyone affected by the coronavirus pandemic during those difficult days, and for an end to COVID-19.

Conclusion:
The viral claim that the photo shows a recent Vedic prayer at the White House following Donald Trump's election win is misleading. The photo is actually from a National Day of Prayer event held at the White House in May 2020.
Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: Hindu priest held a Vedic prayer at the White House under Trump
- Claimed On: Instagram and X (formerly Twitter)
- Fact Check: False and Misleading

Introduction
Today, on the International Day of UN Peacekeepers, we honour the brave individuals who risk their lives to uphold peace in the world’s most fragile and conflict-ridden regions. These peacekeepers are symbols of hope, diplomacy, and resilience. But as the world changes, so do the arenas of conflict. In today’s interconnected age, peace and safety are no longer confined to physical spaces—they extend to the digital realm. As we commemorate their service, we must also reflect on the new frontlines of peacekeeping: the internet, where misinformation, cyberattacks, and digital hate threaten stability every day.
The Legacy of UN Peacekeepers
Since 1948, UN Peacekeepers have served in over 70 missions, protecting civilians, facilitating political processes, and rebuilding societies. From conflict zones in Africa to the Balkans, they’ve worked in the toughest terrains to keep the peace. Their role is built on neutrality, integrity, and international cooperation. But as hybrid warfare becomes more prominent and digital threats increasingly influence real-world violence, the peacekeeping mandate must evolve. Traditional missions are now accompanied by the need to understand and respond to digital disruptions that can escalate local tensions or undermine democratic institutions.
The Digital Battlefield
In recent years, we’ve seen how misinformation, deepfakes, online radicalisation, and coordinated cyberattacks can destabilise peace processes. Disinformation campaigns can polarise communities, hinder humanitarian efforts, and provoke violence. Peacekeepers now face the added challenge of navigating conflict zones where digital tools are weaponised. The line between physical and virtual conflict is blurring. Cybersecurity has gone beyond being just a technical issue and is now a peace and security issue as well. From securing communication systems to monitoring digital hate speech that could incite violence, peacekeeping must now include digital vigilance and strategic digital diplomacy.
Building a Culture of Peace Online
Safeguarding peace today also means protecting people from harm in the digital space. Governments, tech companies, civil society, and international organisations must come together to build digital resilience. This includes investing in digital literacy, combating online misinformation, and protecting human rights in cyberspace. Peacekeepers may not wear blue helmets online, but their spirit lives on in every effort to make the internet a safer, kinder, and more truthful place. The role of youth, educators, and responsible digital citizens has never been more crucial. A culture of peace must be cultivated both offline and online.
Conclusion: A Renewed Pledge for Peace
On this UN Peacekeepers’ Day, let us not only honour those who have served and sacrificed but also renew our commitment to peace in all its dimensions. The world’s conflicts are evolving, and so must our response. As we support peacekeepers on the ground, let’s also become peacebuilders in the digital world, amplifying truth, rejecting hate, and building safer, inclusive communities online. Peace today is not just about silencing guns but also silencing disinformation. The call for peace is louder than ever. Let’s answer it, both offline and online.

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question this raises is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is advancing rapidly: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These figures are often cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, with global influence on data privacy laws, and has started a domino effect for the creation of privacy regulations all over the world. This precedent underscores the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details how the AI Act's rules apply to providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing up of the code, which was chaired by independent experts and involved nearly 1,000 stakeholders, including EU member state representatives and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for general-purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation grounded in product safety, and it relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act places obligations on the developers, deployers and users of AI, with mandates for transparency, risk management and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular applications of GPAI include ChatGPT and other foundation models such as Microsoft's Copilot, Google's BERT and Meta AI's Llama, all of which are under constant development and upgrading. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical and governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions on risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burdens on MSMEs and startups could hinder innovation, and the voluntary nature of the code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation, they are about leading a global conversation.

Introduction
The Indian Computer Emergency Response Team (CERT-In) is the nodal government agency established and appointed as the national agency for cyber incidents and cyber security incidents under section 70B of the Information Technology (IT) Act, 2000. CERT-In has issued a cautionary note to users of Microsoft Edge, Adobe products and Google Chrome. The government's cybersecurity agency has alerted users to multiple vulnerabilities that hackers might exploit to obtain private data and run arbitrary code on targeted machines. CERT-In advises users to apply the security updates right away in order to guard against the problem.
Vulnerability note
Vulnerability notes CIVN-2023-0361, CIVN-2023-0362 and CIVN-2023-0364, covering Google Chrome for Desktop, Microsoft Edge and Adobe products respectively, include more information on the alert. CERT-In has categorised the problems as high-severity issues and recommends applying a security update immediately. According to the warning, there is a security risk if you use Google Chrome versions earlier than 120.0.6099.62 on Linux and Mac, or earlier than 120.0.6099.62/.63 on Windows. Similarly, the vulnerability may also affect users of Microsoft Edge browser versions earlier than 120.0.2210.61.
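As an illustrative aside (not part of the CERT-In note), the "earlier than" checks above amount to comparing dot-separated version numbers component by component. The sketch below shows that comparison in Python; the helper functions are hypothetical, while the threshold versions are the ones quoted in the advisory.

```python
# Minimal sketch: decide whether an installed browser version predates the
# first patched version cited in the CERT-In advisory. Versions are compared
# as tuples of integers, so "119.0.6045.199" < "120.0.6099.62" works correctly.

def parse_version(version: str) -> tuple:
    """Turn '120.0.6099.62' into (120, 0, 6099, 62) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, first_patched: str) -> bool:
    """True if the installed version is older than the first patched one."""
    return parse_version(installed) < parse_version(first_patched)

# Patched versions quoted in the advisory (assumed accurate to the note):
CHROME_PATCHED = "120.0.6099.62"   # Linux/Mac (Windows: 120.0.6099.62/.63)
EDGE_PATCHED = "120.0.2210.61"

print(is_vulnerable("119.0.6045.199", CHROME_PATCHED))  # True: needs update
print(is_vulnerable("120.0.2210.61", EDGE_PATCHED))     # False: patched
```

Note that naive string comparison would get this wrong ("9" > "120" lexicographically), which is why each component is converted to an integer first.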
Cause of the Problem
These vulnerabilities are caused by "Use after free in Media Stream, Side Panel Search, and Media Capture; Inappropriate implementation in Autofill and Web Browser UI," according to the explanation in the vulnerability note on the CERT-In website. The alert further warns that individuals who use the susceptible Microsoft Edge and Google Chrome browsers could be targeted by a remote attacker exploiting these vulnerabilities by sending a specially crafted request. Once these vulnerabilities are successfully exploited, hackers may gain higher privileges, obtain sensitive data, and run arbitrary code on the targeted system.
High-security issues: consequences
CERT-In has drawn attention to vulnerabilities in Google Chrome, Microsoft Edge and Adobe products that might have serious repercussions and put users and their systems at risk. Vulnerabilities in such widely used software present serious dangers that could result in data breaches, unauthorized code execution, privilege escalation, and remote attacks. If these vulnerabilities are exploited, private information may be exposed, money may be lost, and reputational harm may result.
Additionally, the confidentiality and integrity of sensitive information may be compromised. The danger also includes the potential to disrupt services, cause outages, reduce productivity, and raise the likelihood of phishing and social engineering attacks. The urgent need for security upgrades may also erode users' trust in the affected software, making them hesitant to use these platforms until assurances of thorough security procedures are provided.
Advisory
- Users should update their Google Chrome, Microsoft Edge, and Adobe software as soon as possible to protect themselves against the identified vulnerabilities. These updates are supplied by the respective software makers. Furthermore, exercise caution when browsing, and refrain from downloading files from unknown sites or clicking on dubious links.
- Use reliable ad-blockers and strong, regularly updated antivirus and anti-malware software. Maintain regular backups of critical data to reduce possible losses in the event of an attack, and keep up with cybersecurity best practices. Staying vigilant and proactive about current security measures can greatly lower the likelihood of falling victim to these vulnerabilities.