#FactCheck: Old clip of Greenland tsunami falsely shared as tsunami in Japan
Executive Summary:
A viral video depicting a powerful tsunami wave destroying coastal infrastructure is being falsely associated with the recent tsunami warning in Japan following an earthquake in Russia. Fact-checking through reverse image search reveals that the footage is from a 2017 tsunami in Greenland, triggered by a massive landslide in the Karrat Fjord.

Claim:
A viral video circulating on social media shows a massive tsunami wave crashing into the coastline, destroying boats and surrounding infrastructure. The footage is being falsely linked to the recent tsunami warning issued in Japan following an earthquake in Russia. However, initial verification suggests that the video is unrelated to the current event and may be from a previous incident.

Fact Check:
The video, which shows water forcefully inundating a coastal area, is neither recent nor related to the current tsunami event in Japan. A reverse image search conducted using keyframes extracted from the viral footage confirms that it is being misrepresented. The video actually originates from a tsunami that struck Greenland in 2017. The original footage is available on YouTube and has no connection to the recent earthquake-induced tsunami warning in Japan.
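For readers curious about the mechanics of this kind of verification, the keyframe-selection step can be sketched as follows. This is an illustrative outline only: real fact-checking workflows decode an actual video file (for example with OpenCV or ffmpeg) before searching the stills, whereas here tiny synthetic grayscale "frames" stand in so the selection logic itself stays clear.

```python
# Illustrative sketch of selecting keyframes from a video before running a
# reverse image search. Frames are lists of pixel intensities (synthetic
# stand-ins for decoded video frames).

def frame_difference(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Keep the first frame, then every frame that differs sharply from the
    last kept one -- these are the stills worth feeding to a search engine."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes

# A toy "video": three near-identical dark frames, then a bright scene change.
video = [[10, 12, 11, 10], [11, 12, 11, 10], [10, 13, 11, 10],
         [200, 210, 205, 198]]
print(select_keyframes(video))  # indices of the frames worth searching
```

Each selected still would then be uploaded to a reverse image search service, which is how the Greenland origin of the viral clip was traced.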

The American Geophysical Union (AGU) confirmed in a blog post on June 19, 2017, that the deadly Greenland tsunami of June 17, 2017, was caused by a massive landslide. The landslide dumped millions of cubic meters of rock into the Karrat Fjord, generating a wave more than 90 meters high that devastated the village of Nuugaatsiaq. The Guardian also reported on the incident at the time.

Conclusion:
Videos purporting to depict the effects of a recent tsunami in Japan are deceptive and repurposed from unrelated incidents. Users of social media are urged to confirm the legitimacy of such content before sharing it, particularly during natural disasters when false information can exacerbate public anxiety and confusion.
- Claim: A viral video shows a tsunami striking Japan after the recent earthquake in Russia
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

The World Economic Forum's Global Risks Report identified AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news at a rate that far outstrips fact-checking capacity, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT (versions 3.5 and 4.0), and Microsoft's Copilot were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
False or misleading information created, amplified, or spread using artificial intelligence technologies is AI-driven misinformation. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Some examples are deep fakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation: it is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has been rapid, it has yet to achieve true acceptance or fulfill its positive potential because of widespread cynicism about it. Public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed rules.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK Financial Conduct Authority's sandbox. These models encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs; for regulators, sandbox testing informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
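As a concrete illustration of what a sandbox trial might actually measure, a candidate misinformation detector can be run against a labelled test set and scored on precision and recall before any wide-scale deployment. The sketch below is hypothetical: the "detector" is a stand-in keyword rule, not a real production model, and the posts and labels are invented for illustration.

```python
# Hypothetical sandbox-style evaluation of a misinformation detector:
# score the candidate tool on labelled data before deploying it widely.

SUSPECT_TERMS = {"miracle cure", "secret footage", "banned video"}

def flag(post):
    """Toy detector: flag a post if it contains any suspect phrase."""
    text = post.lower()
    return any(term in text for term in SUSPECT_TERMS)

def evaluate(posts, labels):
    """Return (precision, recall) of the detector on labelled posts."""
    predictions = [flag(p) for p in posts]
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

posts = [
    "Secret footage the news won't show you",     # misinformation
    "Local council approves new flood defences",  # legitimate
    "This miracle cure ends all illness",         # misinformation
    "Old clip reshared as if it were from Japan", # misinformation, missed
]
labels = [True, False, True, True]
precision, recall = evaluate(posts, labels)
# precision 1.0 (no false alarms), recall 2/3 (one miss)
```

Numbers like these, gathered under regulator supervision, are what would inform the decision to graduate a tool out of the sandbox or send it back for refinement.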
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to trial solutions for regulating the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, while clarity on the role of regulatory sandboxes can help manage public expectations.
- Sandbox frameworks should be reviewed and updated periodically to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

Introduction
In the age of digital advancement, as technology continually grows, so do the methods of crime. The rise of cybercrime poses serious threats to individuals, organizations, businesses, and government agencies. To combat such crimes, law enforcement agencies are seeking innovative solutions to these challenges. One such solution comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
Surat, India, has launched an AI-based WhatsApp chatbot called "Surat Police Cyber Mitra Chatbot" to tackle growing cybercrime. The chatbot provides quick assistance to individuals dealing with various cyber issues, ranging from reporting cyber crimes to receiving safety tips. The initiative is the first of its kind in the country, showcasing Surat Police's dedication to using advanced technology for public safety. Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvements in various areas, including technological advancements, data acquisition related to cybercrime, and training for police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has become more complex due to the interconnectivity of digital devices and the internet. Criminals exploit vulnerabilities in software, networks, and human behavior to perpetrate a wide range of malicious activities for illicit gain. Individuals and organizations face cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. This innovative approach allows citizens to raise any issue or query from their doorstep and receive immediate, accurate responses. The chatbot is accessible 24 hours a day, seven days a week, and serves as a reliable resource for legal information related to cyber fraud.
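The kind of instant, query-routed replies described above can be sketched with a simple keyword responder. This is an assumption about the general pattern such assistance chatbots follow, not the Surat Police system's actual implementation, which has not been published; the canned answers below reference India's real 1930 cybercrime helpline and the cybercrime.gov.in portal.

```python
# Illustrative keyword-routed FAQ responder of the kind a cybercrime-
# assistance chatbot might use for common queries (not the actual
# Surat Police Cyber Mitra implementation).

RESPONSES = {
    "report": "To report cyber fraud, call the national helpline 1930 or "
              "visit cybercrime.gov.in.",
    "otp": "Never share OTPs. Banks and the police will never ask for them.",
    "phishing": "Do not click unknown links; verify the sender's address first.",
}

DEFAULT = "Please describe your issue (e.g. 'report fraud', 'OTP scam')."

def reply(message):
    """Match the first known keyword in the user's message, else fall back."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return DEFAULT

print(reply("How do I report an online fraud?"))
```

A production system would layer language detection, intent classification, and escalation to a human officer on top of this basic routing, but the round-the-clock, instant-answer behaviour the article describes rests on exactly this sort of automated lookup.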
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police has taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust towards law enforcement and improve the legal system by addressing citizen issues within seconds, ranging from financial disputes to cyber fraud incidents.
This accessibility extends to inquiries such as how to report financial crimes or cyber-fraud incidents and how to understand legal procedures. Ready access to accurate information will not only enhance citizens' trust in the police and the efficiency of law enforcement operations, but also lead to more informed interactions between citizens and the police, fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/

Introduction
Monopolies in any sector can significantly affect economic efficiency and, by extension, the market and the larger economy. Data monopolies hurt both small startups and large, established companies, and it is typically the biggest corporate players who hold the biggest data advantage. Google recently lost a major antitrust case filed by the U.S. Department of Justice, which focused on the company's search-engine dominance and its expensive partnerships to promote its products. The lawsuit accused Google of using its dominant position in the search-engine market to maintain a monopoly. The case has had a significant impact on consumers and the tech industry as a whole: this dominance allowed Google to raise prices on advertisers without consequence and to delay innovations and privacy features that consumers want when they search online.
Antitrust Allegations Against Google in the US and EU
In the case filed by the US Department of Justice, US District Judge Amit Mehta ruled that Google had acted monopolistically. Over the 10-week trial, it was established that the tech giant held a monopoly in the web search and advertising sectors. The lawsuit accused Google of using its dominant position in the search-engine market to elbow out rivals and maintain a monopoly, and the company's exclusive deals with handset makers were presented to the court as evidence. Separately, the European Commission fined Google €1.49 billion in 2019 for breaching EU antitrust rules.
The Impact of Big Tech Monopolies on the Digital Ecosystem and Beyond
- Big-tech companies collect vast amounts of personal data, raising concerns about how this data is used and protected. The concentration of data in the hands of a few companies can lead to privacy breaches and misuse of personal information.
- The dominance of a few tech giants in digital advertising markets can stifle competition, leading to higher prices for advertisers and fewer choices for consumers. This concentration also allows these companies to exert major control over what ads are shown and to whom.
- Big-tech platforms have substantial power over the dissemination of information. Their algorithms and content-moderation policies can influence public discourse and may spread misinformation, and the lack of competition leaves users seeking different moderation policies with few alternatives. In 2021, Google paid $26.3 billion to ensure its search engine remained the default on smartphones and browsers and to keep control of its dominant market share.
Regulatory Mechanisms in the Indian Context
In India, antitrust issues are governed by the Competition Act of 2002 and the Competition Commission of India (CCI) checks monopolistic practices. In 2022, the CCI imposed a penalty of Rs 1,337.76 crore on Google for abusing its dominant position in multiple markets for 'anti-competitive practices' in the Android mobile device ecosystem. The Draft Digital Competition Bill, 2024, has been proposed as a legislative reform to regulate a wide range of digital services, including online search engines, social networking platforms, video-sharing sites, interpersonal communication services, operating systems, web browsers, cloud services, advertising services, and online intermediation services. The bill aims to promote competition and fairness in the digital market by addressing anti-competitive practices and dominant position abuses in the digital business space.
Conclusion
Big-tech companies are increasingly under scrutiny from regulators due to concerns over their monopolistic practices, data privacy issues, and the immense influence on markets and public discourse. The U.S. Department of Justice's victory against Google and the European Commission's hefty fines are indicators of a global paradigm shift towards more aggressive regulation to foster competition and protect consumer interests. The combined efforts of regulators across different jurisdictions underscore the recognition that monopolistic practices by such big tech giants can stifle innovation, harm consumers’ interests, and create barriers for new entrants, thus necessitating strong legal frameworks to ensure fair and contestable markets. Overall, the increasing regulatory pressure signifies a pivotal moment for big-tech companies, as they face the challenge of adapting to a more tightly controlled environment where their market dominance and business practices are under intense examination.
References
- https://www.livemint.com/technology/tech-news/googles-future-siege-u-s-court-explores-breaking-up-company-after-landmark-ruling-11723648047735.html
- https://www.thehindu.com/sci-tech/technology/what-is-the-google-monopoly-antitrust-case-and-how-does-it-affect-consumers/article68495551.ece
- https://indianexpress.com/article/business/google-has-an-illegal-monopoly-on-search-us-judge-finds-9497318/