#FactCheck - False Claim of Italian PM Congratulating Hindus on Ram Temple; Video Actually Shows Birthday Thanks
Executive Summary:
False information is spreading across social media as users share a mistranslated video claiming that Italian Prime Minister Giorgia Meloni congratulated Indian Hindus on the inauguration of the Ram Temple in Ayodhya, Uttar Pradesh. Our CyberPeace Research Team's investigation reveals that these claims are baseless: in the video, Meloni is actually thanking those who wished her a happy birthday.
Claims:
An X (formerly Twitter) user shared a 13-second video in which Italian Prime Minister Giorgia Meloni speaks in Italian, claiming that she was congratulating India on the construction of the Ram Mandir. The caption reads,
“Italian PM Giorgia Meloni Message to Hindus for Ram Mandir #RamMandirPranPratishta. #Translation : Best wishes to the Hindus in India and around the world on the Pran Pratistha ceremony. By restoring your prestige after hundreds of years of struggle, you have set an example for the world. Lots of love.”

Fact Check:
The CyberPeace Research Team set out to translate the video. First, we extracted a transcript using an AI transcription tool and ran it through Google Translate; the result was entirely different from the claim.

The Translation reads, “Thank you all for the birthday wishes you sent me privately with posts on social media, a lot of encouragement which I will treasure, you are my strength, I love you.”
This confirms that the video was not a congratulatory message but a thank-you note to everyone who sent the Prime Minister birthday wishes.
We then ran a reverse image search on frames of the video and found the original on the Prime Minister's official X handle, uploaded on 15 January 2024 with the caption "Grazie. Siete la mia", which translates to "Thank you. You are my strength!"

Conclusion:
The 13-second video had wide reach on X, and as a result many users reshared it with similar captions; a misunderstanding that starts with one post spreads everywhere. The claim made in the post's caption is entirely misleading and has no connection with what Italian Prime Minister Giorgia Meloni actually says in the video. Hence, the post is fake and misleading.
- Claim: Italian Prime Minister Giorgia Meloni congratulated Hindus in the context of Ram Mandir
- Claimed on: X
- Fact Check: Fake

Introduction
The Indian Ministry of Information and Broadcasting has proposed new legislation. On 10 November 2023, a draft bill emerged, a parchment of governance seeking to sculpt the contours of the nation's broadcasting landscape. The Broadcasting Services (Regulation) Bill, 2023, is not merely a legislative document; it is a harbinger of change, a testament to the storm of technology and the diversification of media in the age of the internet.
The bill, slated to replace the Cable Television Networks (Regulation) Act of 1995, acknowledges the paradigm shifts that have occurred in the media ecosystem. The emergence of Internet Protocol Television (IPTV), over-the-top (OTT) platforms and other digital broadcasting services has rendered the previous legislation a relic, ill-suited to the dynamism of the current milieu. The draft bill, therefore, stands at the precipice of the future, inviting stakeholders and the vox populi to weigh in on its provisions, to shape the edifice of regulation that will govern the airwaves and the digital streams.
Examining Key Clauses of the Bill
Clause 1 (dd) - The Programme
In the intricate tapestry of the bill's clauses, certain threads stand out, demanding scrutiny and careful consideration. Clause 1(dd), for instance, grapples with the definition of 'Programme,' a term that, in its current breadth, could ensnare the vast expanse of audio, visual, and written content transmitted through broadcasting networks. The implications are profound: content disseminated via YouTube or any website could fall within the ambit of this regulation, a prospect that raises questions about the scope of governmental oversight in the digital realm.
Clause 2(v) - News and Current Affairs
Clause 2(v) delves into the murky waters of 'news and current affairs programmes', a definition that, as it stands, is a maelstrom of ambiguity. The phrases 'newly-received or noteworthy audio, visual or audio-visual programmes' and 'about recent events primarily of socio-political, economic or cultural nature' are a siren's call, luring the unwary into a vortex of subjective interpretation. The threat of potential abuse looms large, threatening the right to freedom of expression enshrined in Article 19 of the Indian Constitution. It is a clarion call for stakeholders to forge a definition that is objective and clear, in accordance with the Supreme Court's decision in Shreya Singhal v. Union of India, which upheld the sanctity of digital expression while advocating responsible content creation.
Clause 2(y) Over the Top Broadcasting Services
Clause 2(y) casts its gaze upon OTT broadcasting services, entities that operate in a realm distinct from traditional broadcasting. The one-to-many paradigm of broadcast media justifies a degree of governmental control, but OTT streaming is a more intimate affair, a one-on-one engagement with content on personal devices. The draft bill's attempt to umbrella OTT services under the broadcasting moniker is a conflation that could stifle the diversity and personalised nature of these platforms. It is a conundrum that other nations, such as Australia and Singapore, have approached with nuanced regulatory frameworks that recognise the unique characteristics of OTT services.
Clause 4(4) - Requirements for Broadcasters and Network Operators
The bill's journey through the labyrinth of regulation is fraught with other challenges. The definition of 'Person' in Clause 2(z), the registration exemptions in Clause 4(4), the prohibition on state governments and political parties from engaging in broadcasting in Clause 6, and the powers of inspection and seizure in Clauses 30(2) and 31, all present a complex puzzle. Each clause, each sub-section, is a cog in the machinery of governance that must be calibrated with precision to balance the imperatives of regulation with the freedoms of expression and innovation.
Clause 27 - Advisory Council
The Broadcast Advisory Council, envisioned in Clause 27, is yet another crucible where the principles of impartiality and independence must be tempered. The composition of this council, the public consultations that inform its establishment, and the alignment with constitutional principles are all vital to its legitimacy and efficacy.
A Way Forward
It is up to us, as citizens and participants in the democratic process, to engage with the bill's provisions as it makes its way through the halls of public discourse and legislative examination. We must employ the instruments of study and debate to guarantee that the final version of the Broadcasting Services (Regulation) Bill, 2023, is a symbol of advancement and a charter that upholds our most valued liberties while embracing the opportunities presented by the digital era.
The draft bill is more than just a document in this turbulent time of transition; it is a story of India's dreams, a testament to its dedication to democracy, and a roadmap for its digital future. Therefore, let us take this duty with the seriousness it merits, as the choices we make today will have a lasting impact on the history of our country and the media environment for future generations.
References
- https://scroll.in/article/1059881/why-indias-new-draft-broadcast-bill-has-raised-fears-of-censorship-and-press-suppression
- https://pib.gov.in/PressReleasePage.aspx?PRID=1976200
- https://www.hindustantimes.com/india-news/new-broadcast-bill-may-also-cover-those-who-put-up-news-content-online-101701023054502.html

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024 and entered into force 20 days later, on 1 August 2024, setting harmonized rules across all 27 EU Member States. This landmark legislation on Artificial Intelligence (AI), in development for two years, amends key regulations and directives to ensure a robust framework for AI technologies. It takes a phased approach to implementing the EU's AI rulebook: various deadlines apply between now and 2 August 2026, when enforcement of the majority of its provisions will commence. The law prohibits certain uses of AI tools that threaten citizens' rights, including biometric categorization, untargeted scraping of facial images, emotion-recognition systems in the workplace and schools, and social scoring systems. It also prohibits the use of predictive policing tools in some instances.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven percent of total worldwide annual turnover, whichever is higher.
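The penalty ceiling above is a simple "whichever is higher" rule, which can be expressed directly. A minimal sketch in Python (the function name and sample turnover figures are our own illustrations; the thresholds are the Act's stated maxima):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, worldwide_annual_turnover_eur * 7 / 100)

# A firm with EUR 100 m turnover hits the flat EUR 35 m ceiling,
# while 7% of a EUR 2 bn turnover (EUR 140 m) exceeds it.
print(max_fine_eur(100_000_000))    # → 35000000
print(max_fine_eur(2_000_000_000))  # → 140000000.0
```

Note that the percentage branch dominates only for firms with turnover above EUR 500 million, which is why the flat figure matters mostly for smaller providers.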
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications such as social scoring systems and manipulative AI, while the bulk of the regulation addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users know they are interacting with AI, as with chatbots and deepfakes.
- The AI Act allows the free use of minimal-risk AI. This covers the majority of AI applications currently on the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances.
- The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, as well as third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though fewer than providers (developers). This applies to users located in the EU and to third-country users where the AI system's output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models need only comply with copyright and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems subject to transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
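Since every milestone above is defined as an offset from the entry-into-force date, the schedule can be derived mechanically. A minimal sketch in Python (the milestone labels are paraphrased from the list above; `add_months` is a simple calendar shift for illustration, not a legal computation):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, keeping the day of month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Offsets (in months) from entry into force, per the Act's phased timeline.
milestones = [
    (6,  "Prohibitions on unacceptable-risk AI apply"),
    (9,  "Codes of Practice ready"),
    (12, "GPAI transparency rules apply"),
    (24, "The Act becomes generally applicable"),
    (36, "Article 6(1) high-risk obligations apply"),
]

for months, label in milestones:
    print(f"{add_months(ENTRY_INTO_FORCE, months)}  {label}")
```

Running this reproduces the February 2025 through August 2027 dates listed above.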
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The Act also has extraterritorial application: it can apply to companies not established in the EU if they make an AI system or GPAI model available on the EU market, and even where only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, categorizing potential risks into four tiers (unacceptable, high, limited, and low) with corresponding regulations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 percent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the Act addresses fundamental-rights impact assessments and data-protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Introduction
The geographical world has physical boundaries, but the digital one has a different architecture, and institutions are underprepared when it comes to addressing cybersecurity breaches. Cybercrime, which can lead to economic losses, privacy violations, national security threats, and psycho-social consequences, is forecast to increase continuously between 2024 and 2029, reaching an estimated cost of at least 6.4 trillion U.S. dollars (Statista). As cyber threats become persistent and ubiquitous, they are turning into a critical governance challenge, and lawmakers around the world need to collaborate on addressing this emerging issue.
Cybersecurity Governance and its Structural Elements
Cybersecurity governance refers to the strategies, policies, laws, and institutional frameworks that guide national and international preparedness for, and responses to, cyber threats against governments, private entities, and individuals. Effective cybersecurity governance ensures that digital risks are managed proactively while balancing security with fundamental rights like privacy and internet freedom. It includes, but is not limited to:
- Policies and Legal Frameworks: Laws that define the scope of cybercrime, cybersecurity responsibilities, and mechanisms for data protection, e.g., India's National Cybersecurity Policy (NCSP) of 2013, Information Technology Act, 2000, and Digital Personal Data Protection Act, 2023; the EU's Cybersecurity Act (2019), Cyber Resilience Act (2024), Cyber Solidarity Act (2025), and NIS2 Directive (2022); and South Africa's Cyber Crimes Act (2021).
- Regulatory Bodies: Government agencies such as data protection authorities, cybersecurity task forces, and other sector-specific bodies, e.g., India's Computer Emergency Response Team (CERT-In) and Indian Cyber Crime Coordination Centre (I4C), and the European Union Agency for Cybersecurity (ENISA), among others.
- Public-Private Knowledge Sharing: Combining the private sector's expertise with the government's resources plays a crucial role in improving enforcement and securing critical infrastructure. This model of collaboration is followed in the EU, Japan, Turkey, and the USA.
- Research and Development: Apart from the technical, the cyber domain also includes military, politics, economy, law, culture, society, and other elements. Robust, multi-sectoral research is necessary for formulating international and regional frameworks on cybersecurity.
Challenges to Cybersecurity Governance
Governments face several challenges in securing cyberspace and protecting critical assets and individuals, despite the growing focus on cybersecurity. So far, the focus has been on cybersecurity management, which, considering the scale of recent attacks, is not enough. Stakeholders must start deliberating on the governance of cyberspace while ensuring that the process is multi-consultative (Savaş & Karataş 2022). Prominent challenges that need to be addressed are:
- Dynamic Threat Landscape: The threat landscape in cyberspace is ever-evolving. Bad actors are constantly devising new ways to carry out attacks, using surprise, adaptability, and asymmetry, aided by AI and quantum computing. While cybersecurity measures help mitigate risks and minimize damage, they cannot always provide definitive solutions; for example, malware develops much faster than the legal norms, legislation, and security strategies meant to protect information technology (Efe and Bensghir 2019).
- Regulatory Fragmentation and Compliance Challenges: Different countries, industries, or jurisdictions may enforce varying or conflicting cybersecurity laws and standards, which are still evolving and require rapid upgrades. This makes it harder for businesses to comply with regulations, increases compliance costs, and jeopardizes the security posture of the organization.
- Trans-National Enforcement Challenges: Cybercriminals operate across jurisdictions, making threat intelligence collection, incident response, evidence-gathering, and prosecution difficult. Without cross-border agreements between law enforcement agencies and standardized compliance frameworks for organizations, bad actors have an advantage in getting away with attacks.
- Balancing Security with Digital Rights: Striking a balance between cybersecurity laws and privacy concerns (e.g., surveillance laws vs. data protection) remains a profound challenge, especially in areas of CSAM prevention and identifying terrorist activities. Without a system of checks and balances, it is difficult to prevent government overreach into domains like journalism, which are necessary for a healthy democracy, and Big Tech’s invasion of user privacy.
The Road Ahead: Strengthening Cybersecurity Governance
All domains of human life (economy, culture, politics, and society) now extend into digital and cyber environments. It follows naturally that governance in the physical world must translate into governance in cyberspace, underpinned by the principles of openness, transparency, participation, and accountability, while also protecting human rights. Cyberspace is stateless, and threats evolve rapidly with innovations in modern computing. Thus, cybersecurity governance requires a global, multi-sectoral approach grounded in the rules of international law to chart out problems and solutions and carry out detailed risk analyses (Savaş & Karataş 2022).
References
- https://www.statista.com/forecasts/1280009/cost-cybercrime-worldwide#statisticContainer
- https://link.springer.com/article/10.1365/s43439-021-00045-4#citeas
- https://digital-strategy.ec.europa.eu/en/policies/cybersecurity-policies#ecl-inpage-cybersecurity-strategy