#FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army"
Executive Summary:
A video has circulated widely on social media with the claim that it shows lawbreakers surrendering to the Indian Army. Verification shows that the video actually depicts a group surrendering to the Bangladesh Army and has no connection to India. The claim linking it to the Indian Army is false and misleading.

Claims:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed the keyframes of the video through a Google Lens search. The search directed us to credible news sources in Bangladesh, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.
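The keyframe step mentioned above can be reproduced with standard tooling. The sketch below is illustrative only: the file name and the one-frame-every-two-seconds sampling interval are assumptions, not the exact workflow used in this fact check.

```python
# Minimal sketch: sample frames from a clip so they can be run through a
# reverse-image search such as Google Lens. File name and interval are
# illustrative assumptions.
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical input file
INTERVAL_SECONDS = 2            # sample one frame every two seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
step = int(fps * INTERVAL_SECONDS)

frame_index, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} keyframes for reverse-image search")
```

Each saved keyframe can then be uploaded to a reverse-image search tool to trace earlier appearances of the footage.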

We further verified the video by cross-referencing it with official military and news reports from India. None of these sources supported the claim that the video involved the Indian Army. Instead, the footage matched coverage of the same surrender event by Bangladeshi media outlets.

No credible Indian news outlet was found to have covered the video as an event in India. The viral clip was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is actually footage from Bangladesh. The CyberPeace Research Team confirms that the video has been falsely attributed to India, making the claim misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading

The World Economic Forum reported that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking. It is spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
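To make the detection challenge concrete, one widely used (and imperfect) heuristic scores a passage by its perplexity under a reference language model: machine-generated prose often scores unusually low. The sketch below assumes GPT-2 as that reference model and is illustrative only, not a production detector; any decision threshold would be an assumption.

```python
# Illustrative sketch: perplexity under a reference language model as one
# rough signal for flagging machine-generated text. Model choice (GPT-2) and
# any threshold are assumptions; real detectors combine many signals and
# still misclassify.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the language-model perplexity of a passage."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Very low perplexity can hint at templated or machine-generated prose,
# but it is only a signal, never proof.
print(perplexity("The election results were announced yesterday evening."))
```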
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has been rapid, it has yet to achieve true acceptance or fulfil its potential in a positive manner, because there is widespread, and understandable, cynicism about it. General public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK Financial Conduct Authority's sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
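As an illustration of what sandbox testing could look like in practice, the sketch below compares a candidate model's flags against human fact-checker labels and reports basic error rates before any wide-scale rollout. The labels, flags, and the notion of a single "candidate model" are hypothetical and are not drawn from any real sandbox programme.

```python
# Hypothetical sketch of a sandbox-style evaluation: compare model flags
# against human fact-checker labels and report error rates. All data here is
# made up for illustration.
from sklearn.metrics import precision_score, recall_score, confusion_matrix

# 1 = misinformation, 0 = legitimate content (human fact-checker ground truth)
human_labels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
# Flags produced by the candidate model under test in the sandbox
model_flags  = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]

precision = precision_score(human_labels, model_flags)  # how many flags were correct
recall = recall_score(human_labels, model_flags)        # how much misinformation was caught
tn, fp, fn, tp = confusion_matrix(human_labels, model_flags).ravel()

print(f"precision={precision:.2f} recall={recall:.2f} false positives={fp}")
```

Metrics like false-positive counts matter here because over-flagging legitimate speech is itself a harm regulators would weigh before approving deployment.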
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot solutions for regulating the misinformation that AI technologies create. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, while clear communication about the role of regulatory sandboxes can help manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advances in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

Introduction
Twitter Inc.'s appeal against blocking orders for specific accounts issued by the Ministry of Electronics and Information Technology was dismissed by a single judge of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media corporation had approached the court while defying government directives.
As a foreign corporation, Twitter's locus standi had been called into question by the government, which argued that it was ineligible to invoke Articles 19 and 21. Additionally, the government claimed that because Twitter was designed only to serve as an intermediary, there was no "jural relationship" between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Nevertheless, Twitter had argued in its appeal that the orders "fall foul of Section 69A both substantially and procedurally." Twitter contended that, under Section 69A, account holders were to be notified before their tweets and accounts were blocked, but the Ministry had failed to provide these account holders with any notice.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter's compliance officer asking them to appear and explain why the blocking orders had not been followed and why no action should be taken against the company.
Twitter replied on June 9 that the content against which it had not followed the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the directions on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but issued an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders "under protest," Twitter approached the High Court with a petition challenging the orders.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that it was asserting these rights on behalf of its users. It also maintained that, even as a foreign company, it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Twitter further argued that the reasons for blocking each account had not been stated, and that Section 69A's provision for blocking should apply only to the offending URL rather than the entire account: blocking an entire account prevents the creation of future information, whereas blocking an offending tweet applies only to information that has already been created.
Conclusion
The evolution of cyberspace has been shaped by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are strengthening Indian cyberspace. Non-compliance will be taken very seriously, as provided for under MeitY's Intermediary Guidelines of 2021 and 2022. Referring to Section 79 of the Information Technology Act, which provides an exemption from liability for intermediaries in certain instances, it was said, "Intermediary is bound to obey the orders which the designate authority/agency which the government fixes from time to time."

Cyber attacks in India, besides becoming more common, are also getting deadlier. Each strike drives home the fact that no one is safe.
Hacker 'John Wick' hasn't spared India's PM or Paytm. Cyber intelligence firm Cyble, which dredges the dark web, has red-flagged hacking episodes at Truecaller, Dunzo, Unacademy, Naukri.com, Bharat Earth Movers Limited (BEML), LimeRoad and IndiaBulls.

Picture this: Mumbai-based cybersecurity firm Sequretek says that in Covid-hit 2020, India saw a 4,000% spike in phishing emails and a more than 400% rise in the number of policy violations, as per the latest statistics.

Besides the threat to crucial data, the cost suffered by companies is phenomenal. According to IBM's 'Cost of a Data Breach Report 2020', Indian companies witnessed an average total cost of data breach of $2 Mn in 2020, an increase of 9.4% from 2019.
Another survey by Barracuda Networks revealed that 66% of Indian organisations have had at least one data breach or cybersecurity incident since shifting to a remote working model during the pandemic.
Indian Startups At Mercy Of Cyber Attacks
More recently, personal data of 2.8 lakh WhiteHat Jr students and teachers was exposed, with crucial details of minors made available on the dark web. Another major breach, which took place this week and was exclusively reported by Inc42, saw the data of 1.4 Mn job seekers leaked when the jobs portal IIMjobs was hacked.
Vineet Kumar, the founder of Cyber Peace Foundation (CPF), a think tank of cybersecurity and policy experts, said that with the increased digitisation of companies and their processes, data has become the new oil.
“You get good money when you sell users data on the dark web. Hackers discovering vulnerabilities and using SQL injections to pull entire databases remains a common practice for hacking,” Kumar told Inc42.
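To illustrate the SQL-injection technique Kumar refers to, and the standard defence against it, here is a minimal sketch. The table, column names and attacker input are hypothetical, chosen only to show why string-built queries leak whole datasets while parameterised queries do not.

```python
# Illustrative sketch of the SQL-injection pattern mentioned above and its
# standard defence. All names and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 'x'), ('b@example.com', 'y')")

user_input = "' OR '1'='1"  # attacker-controlled value

# VULNERABLE: concatenating input into the query lets the attacker rewrite it,
# which is how entire databases end up dumped and sold.
query = f"SELECT * FROM users WHERE email = '{user_input}'"
print("unsafe query returns", len(conn.execute(query).fetchall()), "rows")

# SAFE: a parameterised query keeps the input as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE email = ?", (user_input,)).fetchall()
print("parameterised query returns", len(rows), "rows")
```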
The CyberPeace Foundation says that from mid-April to the end of June it recorded 89,87,841 attacks, while July and August saw 64,52,898 attacks. September and October saw 1,37,37,516 and 1,81,49,233 attacks respectively.
Speaking to Inc42, Pankit Desai, cofounder and CEO, Sequretek, says, "Originally only a limited set of systems were being exposed, now with WFH all systems have to be exposed to the internet as all your processes are enabled remotely. WFH also creates an additional challenge where 'personal assets are being used for professional purposes' and 'professional assets are being used for personal purposes.'"
Malware like SpyMax and Blackwater is being used, in combination with phishing mails and poorly secured home computers, to harvest credentials. These credentials are then used for carrying out attacks. The number of attacks using harvested credentials is already up 30%, the company revealed.
Government data shows that in 2019 alone, India witnessed 3.94 lakh instances of cybersecurity breaches. In terms of hacking of state and central government websites, Indian Computer Emergency Response Team (CERT-In) data shows that a total of 336 websites belonging to central ministries, departments, and state governments were hacked between 2017 and 2019.
According to Nasscom's Data Security Council of India (DSCI) report 2019, India witnessed the second-highest number of cyber attacks in the world between 2016 and 2018. This comes at a time when digitisation of the Indian economy is predicted to result in a $435 Bn opportunity by 2025. On September 22, the Ministry of Electronics and Information Technology (MeitY) told the Parliament that Indian citizens, commercial and legal entities faced almost 7 lakh cyberattacks till August this year.
The Indian Computer Emergency Response Team (CERT-In) has "reported 49,455, 50,362, 53,117, 208,456, 394,499 and 696,938 cybersecurity incidents during the year 2015, 2016, 2017, 2018, 2019 and 2020 (till August) respectively," MeitY said while responding to an unstarred question in the Lok Sabha regarding cyberattacks on Indian citizens and India-based commercial and legal entities.
"India also lacks a cohesive nation-wide cyber strategy, policies, and procedures. Regulations around data privacy, protection, and penalty should be enacted and enforced as these measures will help businesses evaluate their cybersecurity posture and seek ways to improve. Currently, incident reporting is not mandatory. By making it compulsory, there will be a body of research data that can provide insights on threats to India and inform the government on strategies it can undertake to strengthen the nation's cyber posture," said Kumar Ritesh, founder and CEO, Cyfirma. The Internet Crime Report for 2019, released by the Internet Crime Complaint Centre of the USA's Federal Bureau of Investigation (FBI), revealed that India stands third in the world among the top 20 countries that are victims of internet crimes.
Kumar attributes these numbers to Indians' lack of basic cyber awareness. An equally important factor, however, is the lack of a robust cybersecurity policy in India. Though the issue was touched upon by Prime Minister Narendra Modi during his Independence Day speech on August 15, 2020, not much movement has happened on that front.
"Cybersecurity is a very important aspect, which cannot be ignored. The government is alert on this and is working on a new, robust policy," Modi said. The PM's announcement was made in the backdrop of the government's initiative to connect 1.5 lakh gram panchayats through an optical fiber network, thereby increasing the country's internet connectivity.
With India poised to take on the world with its IT prowess and increasing digital integration, the need for a robust policy is greater than ever.
Source: https://inc42.com/buzz/3-94-lakhs-and-counting-how-cyberattacks-are-a-worry-for-digital-india/