#FactCheck - Deepfake Video Falsely Claims to Show a Massive Rally Held in Manipur
Executive Summary:
A video circulating online claims to show a massive rally organised in Manipur to stop the ongoing violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to fabricate a crowd that never existed. No original footage of any such protest could be found. The claim promoting it is therefore false and misleading.
Claims:
A viral post falsely claims to show a massive rally held in Manipur.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. We could not locate any authentic source mentioning such an event being held recently or previously. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
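For readers who want to replicate this first step, here is a minimal Python sketch, assuming OpenCV is installed and using `rally_video.mp4` as a hypothetical stand-in filename, that samples keyframes from a video clip so they can be run through a reverse image search such as Google Lens:

```python
import cv2  # OpenCV: pip install opencv-python

VIDEO_PATH = "rally_video.mp4"   # hypothetical filename for the viral clip
FRAME_INTERVAL_SEC = 2           # grab one frame every 2 seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30      # fall back to 30 fps if metadata is missing
step = max(1, int(fps * FRAME_INTERVAL_SEC))

frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        # Save the keyframe as an image that can be uploaded to a reverse image search
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} keyframes for reverse image search")
```

Each saved frame can then be uploaded to Google Lens or a similar service to check whether the same scene has appeared anywhere before.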
We used AI detection tools, such as TrueMedia and the Hive AI Detection tool, to analyze the video. The analysis confirmed with 99.7% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the crowd and colour gradients, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports were found linking to such protests, further confirming the video’s inauthenticity.
Conclusion:
The viral video claims to show a massive rally held in Manipur. Research using tools such as truemedia.org and other AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such rally. Thus, the CyberPeace Research Team confirms that the video was manipulated using AI technology, making the claim false and misleading.
- Claim: A massive rally held in Manipur against the ongoing violence, viral on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

Introduction
All citizens are using tech to their advantage, and we therefore see a lot of upskilling among the population, leading to innovation in India. As we go deeper into cyberspace, we must maintain our cyber security efficiently and effectively. When bad actors use technology to their advantage, the victim often suffers data loss or financial loss. In this blog, we will shine a light on two new forms of cyber attack causing havoc among innocent users: the “Daam” malware and a new malicious app.
Daam Botnet
Since 2021, the DAAM Android botnet has been used to gain unauthorised access to targeted devices, and cybercriminals use it to carry out a range of destructive actions. Using the DAAM Android botnet’s APK binding service, threat actors can combine malicious code with a legitimate application. The botnet offers functions such as keylogging, ransomware, recording of VoIP and incoming calls, runtime code execution, browser history collection, theft of PII and clipboard data, opening of phishing URLs, photo capture, and switching of WiFi and mobile data status. The DAAM botnet tracks user activity using the Accessibility Service and stores recorded keystrokes, together with the name of the application package, in a database. It also contains a ransomware module that encrypts and decrypts data on the infected device using the AES algorithm.
Additionally, the botnet uses the Accessibility Service to monitor the VoIP call features of social media apps like WhatsApp, Skype, and Telegram; when a user engages with these features, the malware begins recording audio.
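To illustrate the AES primitive the reports refer to (this is a generic sketch, not the botnet’s actual code, which is not public), here is a minimal Python example using the `cryptography` package showing how AES-GCM renders data unreadable without the key, and why victims cannot recover files once a ransomware module has encrypted them:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Generate a random 256-bit AES key; in a real attack only the operator holds this key.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"example file contents"
nonce = os.urandom(12)                         # AES-GCM requires a unique 12-byte nonce
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
print("Encrypted bytes are unreadable:", ciphertext[:16].hex(), "...")

# Decryption only succeeds with the same key and nonce.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```

Because the key never leaves the attacker’s control, the only reliable defences are preventive measures like those CERT-In lists later in this post, along with regular offline backups.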
The Malware
CERT-In, the national nodal agency that responds to computer security incidents, reports that Daam attaches itself to various Android APK files to gain access to a phone. It is distributed through third-party websites, and the files on the infected phone are encrypted using the AES encryption algorithm.
The malware is reported to be capable of stealing call recordings and contacts, gaining access to the camera, changing passwords, taking screenshots, stealing SMS messages, downloading and uploading files, and performing a variety of other actions.

Safeguards and Guidelines by CERT-In
CERT-In has released guidelines for combating this malware, issued in the public interest. The recommendations are as follows:
- Only download apps from official app stores to limit the risk of potentially harmful apps.
- Before downloading an app, always read its details and user reviews, and only grant permissions that are related to the app’s purpose.
- Install Android updates solely from Android device vendors as they become available.
- Avoid visiting untrustworthy websites or clicking on untrustworthy links.
- Install anti-virus and anti-spyware software and keep it up to date.
- Be cautious of mobile numbers that do not appear to be genuine/regular mobile numbers.
- Conduct sufficient investigation before clicking on a link supplied in a communication.
- Only click on URLs that clearly display the website domain; avoid shortened URLs, particularly those employing bit.ly and tinyurl (see the sketch after this list).
- Use secure browsing technologies and the filtering tools in antivirus, firewall, and filtering services.
- Before providing sensitive information, look for authentic encryption certificates by checking for the green lock icon in your browser’s URL bar.
- Any ‘strange’ activity in a user’s bank account must be reported immediately to the appropriate bank.
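As a practical aid for the shortened-URL recommendation above, here is a minimal Python sketch, assuming the `requests` package and a purely illustrative bit.ly link, that expands a shortened link to reveal the real destination domain before you decide to click; note that `requests` also verifies the site’s TLS certificate by default:

```python
from urllib.parse import urlparse

import requests  # pip install requests

def expand_short_url(short_url: str) -> str:
    """Follow redirects (without downloading the page body) and return the final URL."""
    # Some shorteners block HEAD requests; a GET with stream=True is a possible fallback.
    resp = requests.head(short_url, allow_redirects=True, timeout=10)
    return resp.url

if __name__ == "__main__":
    # Hypothetical shortened link used purely for illustration
    final_url = expand_short_url("https://bit.ly/example-short-link")
    print("Final destination:", final_url)
    print("Domain you would actually visit:", urlparse(final_url).netloc)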
New Malicious App
From the remote parts of Jharkhand, a new form of malicious application has been circulated among people on the pretext of a bank account closure. Bad actors have long used messaging platforms like WhatsApp and Telegram to circulate malicious links among unaware and uneducated people to dupe them of their hard-earned money.
They send an ordinary-looking message on WhatsApp or Telegram claiming that the user has a bank account at ICICI Bank and that, due to an irregularity with the credentials, the account is being deactivated. They then ask the user to update their PAN card to reactivate the account by uploading it to an application. This app is, in fact, malicious: it collects the user’s personal credentials and shares them with the bad actors via text message, allowing them to bypass banks’ two-factor authentication and drain the money from victims’ accounts. The Jharkhand Police Cyber Cells have registered numerous FIRs pertaining to this type of cybercrime and are conducting full-scale investigations to apprehend the criminals.
Conclusion
Malware and phishing attacks have gained momentum in recent years and have become a major contributor to the tally of cybercrimes in the country. The Daam malware is one example brought to light by timely action from CERT-In, but many such malware strains are still deployed by bad actors, and as netizens we need to follow best practices to keep such criminals at bay. Phishing crimes are often carried out by exploiting vulnerabilities and social engineering. Raising awareness is therefore the need of the hour to safeguard the population at large.

Introduction
The 2025 Delhi Legislative Assembly election is just around the corner, scheduled for February 5, 2025, with all 70 constituencies heading to the polls. The eagerly awaited results will be announced on February 8, bringing excitement as the people of Delhi prepare to see their chosen leader take the helm as Chief Minister. As the election season unfolds, social media becomes a buzzing hub of activity, with information spreading rapidly across platforms. However, this period also sees a surge in online mis/disinformation, making elections a hotspot for misleading content. It is crucial for citizens to exercise caution and remain vigilant against false or deceptive online posts, videos, or content. Empowering voters to distinguish facts from fiction and recognize the warning signs of misinformation is essential to ensure informed decision-making. By staying alert and well-informed, we can collectively safeguard the integrity of the democratic process.
Risks of Mis/Disinformation
According to the 2024 survey report titled ‘Truth Be Told’ by ‘The 23 Watts’, 90% of Delhi’s youth (Gen Z) report witnessing a spike in fake news during elections, and 91% believe it influences voting patterns. Furthermore, the research highlights that 14% of Delhi’s youth tend to share sensational news without fact-checking, relying solely on conjecture.
Recent Measures by ECI
Recently, the Election Commission of India (ECI) issued a fresh advisory to political parties to ensure the responsible use of AI-generated content in their campaigns. The ECI has issued guidelines to curb the potential use of "deepfakes" and AI-generated distorted content by political parties and their representatives to disturb the level playing field. It has mandated the labelling of all AI-generated content used in election campaigns to enhance transparency, combat misinformation, and ensure a fair electoral process in the face of rapidly advancing AI technologies.
Best Practices to Avoid Electoral Mis/Disinformation
- Seek Information from Official Sources: Voters should rely on authenticated sources for information. These include reading official manifestos, following verified advisory notifications from the Election Commission, and avoiding unverified claims or rumours.
- Consume News Responsibly: Voters must familiarize themselves with dependable news channels and make use of reputable fact-checking organizations that uphold the integrity of news content. It is crucial to refrain from randomly sharing or forwarding any news post, video, or message without verifying its authenticity. Consume responsibly, fact-check thoroughly, and share cautiously.
- Role of Fact-Checking: Cross-checking and verifying information from credible sources are indispensable practices. Reliable and trustworthy fact-checking tools are vital for assessing the authenticity of information in the digital space. Voters are encouraged to use these tools to validate information from authenticated sources and adopt a habit of verification on their own. This approach fosters a culture of critical thinking, empowering citizens to counter deceptive deepfakes and malicious misinformation effectively. It also helps create a more informed and resilient electorate.
- Be Aware of Electoral Deepfakes: In the era of artificial intelligence, synthetic media presents significant challenges. Just as videos can be manipulated, voices can also be cloned. It is essential to remain vigilant against the misuse of deepfake audio and video content by malicious actors. Recognize the warning signs, such as inconsistencies or unnatural details, and stay alert to misleading multimedia content. Proactively question and verify such material to avoid falling prey to deception.
References
- https://www.financialexpress.com/business/brandwagon-90-ofnbsp-delhi-youth-witness-spike-in-fake-news-during-elections-91-believe-it-influences-voting-patterns-revealed-the-23-watts-report-3483166/
- https://timesofindia.indiatimes.com/india/election-commission-urges-parties-to-disclose-ai-generated-campaign-content-in-interest-of-transparency/articleshow/117306865.cms
- https://www.thehindu.com/news/national/election-commission-issues-advisory-on-use-of-ai-in-poll-campaigning/article69103888.ece
- https://indiaai.gov.in/article/election-commission-of-india-embraces-ai-ethics-in-campaigning-advisory-on-labelling-ai-generated-content

AI systems have grown in both popularity and the complexity of the tasks they perform. They are enhancing accessibility for all, including people with disabilities, by revolutionising sectors such as healthcare, education, and public services. AI-powered solutions are now being created that can help people with mental, physical, visual, or hearing impairments perform both everyday and complex tasks.
Generative AI is now being used to amplify human capability. The development of tools for speech-to-text and image recognition is facilitating communication and interaction for visually or hearing-impaired individuals, and smart prosthetics are providing tailored support. Unfortunately, even with these developments, persons with disabilities (PWDs) continue to face challenges. It is therefore important to balance innovation with ethical considerations and ensure that these technologies are designed with privacy, equity, and inclusivity in mind.
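As a concrete illustration of the speech-to-text tools mentioned above, the following is a minimal Python sketch, assuming the open-source `SpeechRecognition` package and a hypothetical `meeting_audio.wav` file, that transcribes an audio clip so a hearing-impaired user can read it:

```python
import speech_recognition as sr  # pip install SpeechRecognition

AUDIO_PATH = "meeting_audio.wav"  # hypothetical audio file to transcribe

recognizer = sr.Recognizer()
with sr.AudioFile(AUDIO_PATH) as source:
    audio = recognizer.record(source)  # read the entire audio file

try:
    # Uses Google's free web speech API; other engines (including offline models) can be swapped in
    transcript = recognizer.recognize_google(audio)
    print("Transcript:", transcript)
except sr.UnknownValueError:
    print("Speech could not be understood")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```

This is only a sketch of the general approach; production assistive tools add speaker separation, punctuation, and real-time streaming on top of the basic transcription step shown here.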
Access to Tech: the Barriers Faced by PWDs
PWDs face several barriers while accessing technology, and identifying these challenges is important because access to computers, hardware, and software has become a norm of daily life. Website functions that only work when users click with a mouse, self-service kiosks without accessibility features, touch screens without screen-reader software or tactile keyboards, and out-of-order equipment such as lifts, captioning mirrors, and description headsets are just some of the difficulties they face in their day-to-day lives.
While they are helpful, much of the current technology doesn’t fully address all disabilities. For example, many assistive devices focus on visual or mobility impairments, but they fall short of addressing cognitive or sensory conditions. In addition to this, these solutions often lack personalisation, making them less effective for individuals with diverse needs. AI has significant potential to bridge this gap. With adaptive systems like voice assistants, real-time translation, and personalised features, AI can create more inclusive solutions, improving access to both digital and physical spaces for everyone.
The Importance of Inclusive AI Design
Creating an inclusive AI design is important. It ensures that PWDs are not excluded from technological advancements because of their impairments. The concept of an ‘inclusive’ or ‘universal’ design promotes creating products and services that are usable by the widest possible range of people. Tech developers have an ethical responsibility to create advancements in AI that serve everyone. Accessibility features should be built into the core design and treated as standard practice rather than an afterthought. However, bias in AI development, which often stems from non-representative data or flawed assumptions, can lead to systems that overlook or poorly serve PWDs. If AI algorithms are trained on limited or biased data, they risk excluding marginalised groups, making ethical, inclusive design a necessity for equity and accessibility.
Regulatory Efforts to Ensure Accessible AI
In India, the Rights of Persons with Disabilities Act, 2016 emphasises the need to provide PWDs with equal access to technology. Subsequently, the Digital Personal Data Protection (DPDP) Act, 2023 addresses data privacy concerns for persons with disabilities under Section 9, which governs the processing of their personal data.
At the international level, the EU’s recently adopted AI Act mandates measures for transparent, safe, and fair access to AI systems, including provisions related to accessibility.
In the US, the Americans with Disabilities Act of 1990 and Section 508 of the Rehabilitation Act of 1973 (added through the 1998 amendment) are the primary legislations promoting digital accessibility in public services.
Challenges in Implementing Regulations for AI Accessibility for PWDs
Defining the term ‘inclusive AI’ is itself a challenge: if the core concept is left undefined, creating tools and compliance mechanisms to enforce accessibility becomes difficult. The rapid pace of technology and AI development has often outpaced the legal frameworks meant to govern it, creating enforcement gaps. Countries like Canada and tech industry giants like Microsoft and Google are leading forces behind accessible AI innovations; their frameworks focus on AI ethics, inclusivity, and collaboration with disability rights groups.
India’s efforts in creating inclusive AI include the redesign of the Sugamya Bharat app. The app was created to assist PWDs and the elderly, and it will now incorporate AI features specifically designed to assist its intended users.
Though AI development has opportunities for inclusivity, unregulated development can be risky. Regulation plays a critical role in ensuring that AI-driven solutions prioritise inclusivity, fairness, and accessibility, harnessing AI’s potential to empower PWDs and contribute to a more inclusive society.
Conclusion
AI development can offer PWDs unprecedented independence and accessibility in leading their lives. Developing AI with inclusivity and fairness in mind needs to be prioritised. AI that is free from bias, combined with robust regulatory frameworks, is essential to ensuring that AI serves everyone equitably. Collaboration between tech developers, policymakers, and disability advocates must be supported and promoted to build AI systems that bridge accessibility gaps for PWDs. As AI continues to evolve, maintaining a steadfast commitment to inclusivity will be crucial in preventing marginalisation and advancing true technological progress for all.
References
- https://www.business-standard.com/india-news/over-1-4k-accessibility-related-complaints-filed-on-govt-app-75-solved-124090800118_1.html
- https://www.forbes.com/councils/forbesbusinesscouncil/2023/06/16/empowering-individuals-with-disabilities-through-ai-technology/
- https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities
- https://blogs.microsoft.com/on-the-issues/2018/05/07/using-ai-to-empower-people-with-disabilities/