#FactCheck - AI-Generated Photo Circulating Online Misleads About BARC Building Redesign
Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake. There is no official notice or confirmation from BARC on its website or social media handles, and AI content detection tools indicate that the image was generated by AI. In short, the viral picture does not depict any authentic architectural plan for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
To begin our investigation, we reviewed BARC's official website, checking its tender and NIT (Notice Inviting Tender) notifications for any mention of new construction or renovation.
We found no information corresponding to the claim.

We then checked BARC's official social media pages on Facebook, Instagram and X for any recent updates about a new building project. Again, there was no information about the supposed design. To assess whether the viral image could be AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool classified the image as AI-generated with a 100% likelihood score.

To be sure, we also used another AI-image detection tool, "isitai", which rated the image as 98.74% likely to be AI-generated.
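For readers who want to script this kind of check themselves, below is a minimal sketch of how an image could be submitted to a generic AI-content-detection HTTP API and how the returned score might be read. The endpoint URL, credential, field names and response keys are placeholders for illustration only; they are not the actual Hive or isitai APIs, whose real request formats may differ.

```python
# Minimal sketch: submit an image to a hypothetical AI-content-detection API
# and print the "AI-generated" likelihood it reports. The endpoint, header
# and response keys below are assumptions; consult the detection service's
# own documentation for the real request format.
import requests

API_URL = "https://api.example-detector.com/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def check_image(path: str) -> float:
    """Upload an image file and return the reported AI-generated probability (0-1)."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Assumed response shape: {"ai_generated_probability": 0.9874}
    return result["ai_generated_probability"]


if __name__ == "__main__":
    score = check_image("viral_image.jpg")
    print(f"Reported likelihood that the image is AI-generated: {score:.2%}")
```

A score close to 1.0, like the 100% and 98.74% results reported above, indicates the classifier is highly confident the image is synthetic, though such tools should be treated as one signal among several rather than definitive proof.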

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, reviewing BARC's official channels and using AI detection tools, showed that the picture is most likely an AI-generated image rather than an authentic architectural design. BARC has neither released any information nor announced any such plan, and with no credible source to support it, the claim is untrustworthy.
Claim: A viral image shared by social media users purports to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading
Related Blogs

Introduction:
Apple is known for its unique innovations and designs. With the introduction of the iPhone 15 series, Apple is switching to USB-C in compliance with European Union (EU) regulations, which set a common charging standard for all mobile devices. There is a small caveat, however: while you will be able to use any USB-C cable to charge or transfer data from your iPhone, not every cable will perform equally. The EU has approved new rules making it compulsory for tech companies to provide a universal charging port on electronic gadgets such as mobile phones, tablets, cameras, e-readers, earbuds and other devices by the end of 2024.
However, Apple being Apple, it will limit third-party USB-C cables: Apple's MFi-certified cables will offer optimised charging speeds and faster data transfer. MFi stands for 'Made for iPhone/iPad' and is a quality mark and testing programme from Apple for Lightning cables and other accessories. MFi-certified products ensure safety and improved performance.
European Union's regulations on common charging port:
The new iPhone will have a USB-C port. EU rules have made it mandatory for all phones and laptops to have a USB-C charging port, so the iPhone will switch from the Lightning port to USB-C. The European Union's mandate requires all mobile device makers to adopt this technology, and the EU has set a deadline for all new phones to use USB-C for wired charging by the end of 2024. The rules apply to all devices that offer wired charging, including tablets, digital cameras, headphones and handheld video game consoles. Phone manufacturers and the relevant industry must comply by the end of 2024. The rules are intended to save consumers money and cut waste: the EU states they will spare consumers unnecessary charger purchases and cut tonnes of waste per year. With these rules in place, customers will be able to use a single charger for their different devices, data transfer speeds in new iPhone models should improve, and the iPhone will be compatible with the same USB-C chargers used by non-Apple devices.
Indian Standards on USB-C Charging Ports
The Bureau of Indian Standards (BIS) has also issued standards for USB-C chargers. The standards aim to provide a common charger solution for different devices, so consumers will not need to purchase multiple chargers, ultimately reducing the number of chargers per consumer. This would contribute to the Government of India's goal of reducing e-waste and moving toward sustainable development.
Conclusion:
New EU rules require all mobile phone devices, including iPhones, to have a USB-C charging port; notably, the upcoming iPhone 15 features one. These rules will enable customers to use a single charger for their different Apple devices, such as iPads, Macs and iPhones. As for applicability, the EU common-charger rule will cover small and medium-sized portable electronics, including mobile phones, tablets, e-readers, mice and keyboards, digital cameras, handheld video game consoles and portable speakers. Such devices are required to have USB-C charging ports if they offer wired charging. Laptops are also covered, but manufacturers are given more time to adopt the changes and comply. Overall, this step will help reduce e-waste and move toward sustainable development.
References:
https://www.bbc.com/news/technology-66708571
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice and produce realistic images and videos of almost anyone doing pretty much anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). The technology relies on advanced machine-learning algorithms to create hyper-realistic videos that use a person's face, voice or likeness. The use and progression of deepfake technology hold vast potential, both benign and malicious.
One example is the NGO Malaria No More, which in 2019 used deepfake technology to sync David Beckham's lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but existing legal provisions offer some safeguards against the harm they can cause.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita, 2023; Section 356, which governs defamation, covers the consequences of such acts.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, aptly titled the ‘Digital India Act’, aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- There should be a process mandating disclosure for synthetic media, so that viewers are informed the content was created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially through awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
There has been growing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the malicious distribution of deepfake images in electoral contexts, and Virginia has a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring that doctored content be explicitly marked. The European Union has tightened its Code of Practice on Disinformation, requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures highlight a global recognition of the risks deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts have highlighted the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation during these trying times will be important to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation

Introduction
The G7 nations, a group of the world's most powerful economies, have recently turned their attention to the critical issues of cybercrime and Artificial Intelligence (AI). The G7 Summit has provided an essential platform for discussing the threats and crimes arising from AI and from gaps in cybersecurity. These nations have united to share their expertise, resources, diplomatic efforts and strategies in the fight against cybercrime. In this blog, we shall examine the recent developments and initiatives undertaken by the G7 nations, exploring their joint efforts to combat cybercrime and navigate the evolving landscape of artificial intelligence. We shall also explore new and emerging trends in cybersecurity, providing insights into the ongoing challenges and innovative approaches adopted by the G7 nations and the wider international community.
G7 Nations and AI
These nations have launched cooperative efforts and measures to combat cybercrime successfully. They intend to increase their collective capacity to detect, prevent, and respond to cyber assaults by exchanging intelligence, best practices, and experience. Through information-sharing platforms, collaborative training programmes, and joint exercises, the G7 nations are attempting to develop a strong cybersecurity architecture capable of countering increasingly complex cyber-attacks.
The G7 Summit provided an important forum for in-depth debate on the role of artificial intelligence (AI) in cybersecurity. Recognising AI’s transformational potential, the G7 nations have engaged in extensive discussions to investigate its advantages and address the related concerns, ensuring responsible research and use. The nations also recognise the ethical, legal, and security considerations of deploying AI in cybersecurity.
Worldwide Rise of Ransomware
High-profile ransomware attacks have drawn global attention, emphasising the need to combat this expanding threat. These attacks have harmed organisations of all sizes and industries, leading to data breaches, operational outages and, in some circumstances, the loss of sensitive information. The implications of such assaults go beyond financial loss, frequently resulting in reputational harm, legal penalties, and service delays that affect consumers, clients, and the public. Cybercriminals have adopted a multi-faceted approach to ransomware attacks, using spear-phishing, exploit kits, and supply chain hacks to obtain unauthorised access to networks and spread the ransomware. This degree of expertise and flexibility presents a substantial challenge to organisations attempting to protect against such attacks.

Focusing On AI and Upcoming Threats
During the G7 Summit, one of the key topics of discussion was the role of AI (Artificial Intelligence) in shaping the future. Leaders and policymakers discussed the benefits and dangers of AI adoption in cybersecurity. Recognising AI’s revolutionary capacity, they investigated its potential to improve defence capabilities, predict future threats, and secure vital infrastructure. Furthermore, the G7 countries emphasised the necessity of international collaboration in reaping the advantages of AI while reducing the hazards. They recognise that cyber dangers transcend national borders and must be combated together. Collaboration in areas such as exchanging threat intelligence, developing shared standards, and promoting best practices is emphasised to boost global cybersecurity defences. By emphasising the role of AI in cybersecurity, the G7 Summit hopes to set a global agenda that encourages responsible AI research and deployment. The summit’s sessions chart a path for maximising AI’s promise while tackling the problems and dangers connected with its implementation.
As the G7 countries traverse the complicated convergence of AI and cybersecurity, their emphasis on collaboration, responsible practices, and innovation lays the groundwork for international collaboration in confronting growing cyber threats. The G7 countries aspire to establish robust and secure digital environments that defend essential infrastructure, protect individuals’ privacy, and encourage trust in the digital sphere by collaboratively leveraging the potential of AI.
Promoting Responsible AI Development and Usage
The G7 conference focuses on developing frameworks that encourage ethical AI development. This includes fostering openness, accountability, and fairness in AI systems. The emphasis is on eliminating biases in data and algorithms and ensuring that AI technologies are inclusive and do not perpetuate or magnify existing societal imbalances.
Furthermore, the G7 nations recognise the necessity of privacy protection in the context of AI. Because AI systems frequently rely on massive volumes of personal data, summit speakers emphasise the importance of stringent data privacy legislation and protections. Discussions centre around finding the correct balance between using data for AI innovation, respecting individuals’ privacy rights, and protecting data security. In addition to responsible development, the G7 meeting emphasises the importance of responsible AI use. Leaders emphasise the importance of transparent and responsible AI governance frameworks, which may include regulatory measures and standards to ensure AI technology’s ethical and legal application. The goal is to defend individuals’ rights, limit the potential exploitation of AI, and retain public trust in AI-driven solutions.
The G7 nations support collaboration among governments, businesses, academia, and civil society to foster responsible AI development and use. They stress the significance of sharing best practices, exchanging information, and developing international standards to promote ethical AI principles and responsible practices across borders. By fostering responsible AI development and usage, the G7 nations hope to shape the global AI environment in a way that prioritises human values, protects individual rights, and builds trust in AI technology. They work together to guarantee that AI is a force for good while reducing risks and resolving the social issues related to its implementation.
Challenges on the Way
While the G7 countries are committed to combating cybercrime and fostering responsible AI development, they confront several hurdles in their efforts. Some of them are:
A Rapidly Changing Cyber Threat Environment: Cybercriminals’ strategies and methods are always developing, as is the nature of cyber threats. The G7 countries must keep up with new threats and ensure their cybersecurity safeguards remain effective and adaptable.
Cross-Border Coordination: Cybercrime knows no borders, and successful cybersecurity necessitates international collaboration. On the other hand, coordinating activities among nations with various legal structures, regulatory environments, and agendas can be difficult. Harmonising rules, exchanging information, and developing confidence across states are crucial for effective collaboration.
Talent Shortage and Skills Gap: Cybersecurity and AI work requires highly qualified personnel; however, skilled professionals in these fields are in short supply. The G7 nations must attract and nurture talent, provide training programmes, and support research and innovation to narrow the skills gap.
Keeping Up with Technological Advancements: Technology changes at a rapid rate, and cyber-attacks become more complex. The G7 nations must ensure that their laws, legislation, and cybersecurity plans stay relevant and adaptive to keep up with future technologies such as AI, quantum computing, and IoT, which may both empower and challenge cybersecurity efforts.
Conclusion
To combat cyber threats effectively, support responsible AI development, and establish a robust cybersecurity ecosystem, the G7 nations must constantly analyse and adjust their strategy. By aggressively tackling these concerns, the G7 nations can improve their collective cybersecurity capabilities and defend their citizens’ and global stakeholders’ digital infrastructure and interests.