#FactCheck: Viral video of Unrest in Kenya is being falsely linked with J&K
Executive Summary:
A video of people throwing rocks at vehicles is being shared widely on social media with the claim that it shows an incident of unrest in Jammu and Kashmir, India. However, our research revealed that the video is not from India but from a protest in Kenya on 25 June 2025. The video is therefore misattributed and is being shared out of context to spread false information.

Claim:
The viral video shows people hurling stones at army or police vehicles and is claimed to be from Jammu and Kashmir, implying ongoing unrest and anti-government sentiment in the region.

Fact Check:
To verify the validity of the viral statement, we did a reverse image search by taking key frames from the video. The results clearly demonstrated that the video was not sourced from Jammu and Kashmir as claimed, but rather it was consistent with footage from Nairobi, Kenya, where a significant protest took place on 25 June 2025. Protesters in Kenya had congregated to express their outrage against police brutality and government action, which ultimately led to violent clashes with police.
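For readers who want to replicate this kind of check, the snippet below is a minimal sketch of how key frames can be pulled from a clip so they can be run through a reverse image search engine by hand. The file names and sampling interval are illustrative only.

```python
# Minimal sketch: save one frame every few seconds so the images can be
# uploaded to a reverse image search engine (Google Lens, Yandex, TinEye).
import cv2

def extract_keyframes(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save a frame every `every_n_seconds` seconds and return the file names."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % step == 0:
            name = f"keyframe_{frame_index}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        frame_index += 1
    capture.release()
    return saved
```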


We also came across a YouTube news video containing similar coverage and matching frames. The protests were part of a broader anti-government movement and marked one year since it began.

To support this context, we ran keyword searches for any mob violence or recent unrest in J&K on reputable Indian news sources, but the searches did not turn up any mention of protests or similar events in J&K around the relevant time. Based on this evidence, it is clear that the video has been intentionally misrepresented and is being circulated with false context to mislead viewers.

Conclusion:
The assertion that the viral video shows a protest in Jammu and Kashmir is incorrect. The video appears to have been taken at a protest in Nairobi, Kenya, in June 2025. Mislabelling the video only serves to spread misinformation and stir up unwarranted political sentiment. Always verify where content comes from before believing or sharing it.
- Claim: Army faces heavy resistance from Kashmiri youth — the valley is in chaos.
 - Claimed On: Social Media
 - Fact Check: False and Misleading
 

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to remarkable advances in artificial intelligence, the internet as we know it is rapidly turning into a treasure trove of hyper-optimised material over which vast bot armies battle to the death. All of that advancement, however, has a price, and too often it is paid in human lives. Releasing highly personalised chatbots on a populace already struggling with economic stagnation, chronic loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is the reality for the roughly 75% of children and teenagers who have had conversations with chatbot-generated fictional characters. AI chatbots are becoming ever more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation into the internal policies of a major international tech firm last year exposed alarming gaps: its AI chatbots were permitted to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical standards without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety and the overall rise in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a child-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children's personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (the entities that determine the purposes and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child's welfare, such as behavioural monitoring and advertising targeted at children. According to judicial interpretations, a child's well-being encompasses not just physical health and care but also moral, ethical, and emotional development.
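To make the Section 9 gate concrete, here is a minimal, purely illustrative sketch of how a platform might enforce it before any processing step. The consent lookup, data classes, and purpose labels are hypothetical placeholders, not an API defined by the Act or any vendor.

```python
# Illustrative Section 9-style gate: block processing of a child's data unless
# verifiable parental consent is on record. All names here are hypothetical.
from dataclasses import dataclass

CHILD_AGE_THRESHOLD = 18  # the DPDP Act, 2023 defines a child as a person under 18

@dataclass
class DataPrincipal:
    user_id: str
    age: int

def has_verified_guardian_consent(user_id: str) -> bool:
    """Placeholder: query the platform's own consent records (implementation-specific)."""
    raise NotImplementedError

def may_process(principal: DataPrincipal, purpose: str) -> bool:
    # Targeted advertising and behavioural monitoring of children are barred outright.
    if principal.age < CHILD_AGE_THRESHOLD and purpose in {"targeted_ads", "behavioural_tracking"}:
        return False
    # Any other processing of a child's data requires verifiable parental consent.
    if principal.age < CHILD_AGE_THRESHOLD:
        return has_verified_guardian_consent(principal.user_id)
    return True
```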
While the DPDP Act is a significant step in the right direction, important lacunae remain in how it addresses AI and child safety. The Act concentrates largely on consent and harm prevention in data protection; it lacks age-gating requirements, thorough risk classification, and restrictions specific to AI-driven platforms. It also overlooks threats to children's emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023; they include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with children is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological harm, and desensitisation to inappropriate behaviour. Child protection experts hold that illicit or sexual conversations with children in cyberspace are unacceptable, and that permitting even "flirtatious" exchanges could normalise unsafe boundaries.
 - International Standards and Best Practices - The concept of "safety by design" is central to child online safety guidance worldwide, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. These frameworks require platforms and developers to proactively remove risks rather than respond to harms after the fact; any AI guidelines that leave loopholes for child-directed roleplay fail to meet this bare-minimum standard.
 
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create fictional narratives carrying disclaimers. For example, chatbots could write articles promoting false health claims or smears against public officials, as long as the content was labelled as "untrue." While disclaimers might give thin legal cover, the content still adds to the proliferation of misleading information; indeed, misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
 
 - Ethical Lines and Discriminatory Content - Allowing AI systems to generate racist arguments, even on request, is ethically questionable. Scholarly research into prejudice and bias may occasionally require such examples, but unregulated generation risks normalising damaging stereotypes. Researchers warn that the practice moves platforms from being passive hosts of offensive speech to active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
 
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training choices, policy decisions, and system engineering. This fact requires a greater level of accountability. Companies can update guidelines following public criticism, but the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
 - Regulatory Gaps - Regulatory regimes for AI are currently fragmented. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability, yet few specify clear rules for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
 
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit flirtatious or romantic interactions with children (see the sketch after this list).
 - Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
 - Bias Reduction: Developers need to proactively train systems not to generate discriminatory narratives, rather than merely labelling them as optional outputs.
 - Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
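As a deliberately simplified illustration of the first requirement above, the sketch below shows a hard pre-send guardrail. The intent labels, flag, and function names are hypothetical; a production system would combine age assurance, trained classifiers, and human review rather than a single check.

```python
# Illustrative pre-send guardrail (not any vendor's actual implementation):
# refuse romantic or flirtatious roleplay whenever the account is flagged as a minor.
ROMANTIC_INTENTS = {"romantic_roleplay", "flirting", "dating_persona"}

def guardrail(user_is_minor: bool, detected_intent: str, draft_reply: str) -> str:
    """Runs after intent classification and before the model's reply is sent."""
    if user_is_minor and detected_intent in ROMANTIC_INTENTS:
        # Hard refusal: no rewording or "softened" romantic content for minors.
        return "I can't take part in romantic or flirtatious roleplay."
    return draft_reply
```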
 
Conclusion
The guidelines at issue are more than the internal folly of a single firm; they point to a deeper systemic problem in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require prompt resolution. Corporate self-regulation is only one part of the answer; stronger global frameworks, multi-stakeholder participation, and ethical standards are also needed. In the end, trust in artificial intelligence will rest not on corporate interests but on its ability to preserve the truth, protect the vulnerable, and reflect universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
 - https://www.lakshmisri.com/insights/articles/ai-for-children/#
 - https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
 - https://www.unicef.org/documents/guidelines-industry-online-child-protection
 - https://www.oecd.org/en/topics/sub-issues/ai-principles.html
 - https://artificialintelligenceact.eu/
 

Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. As per our research, the footage is AI-generated, created using tools such as Google's Veo, and is not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. This incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts and highlights the importance of media literacy and fact-checking.
Claim:
An X-verified user with the handle "Iran, stop the war, we are sorry" posted a video featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, VEO 3. The video exhibits several noticeable inconsistencies such as robotic, unnatural speech, a lack of human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame, an Israeli flag suddenly appears in their hand, indicating possible AI-generated glitches.
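Such frame-to-frame glitches can sometimes be surfaced automatically. The sketch below is an illustrative heuristic only, not the tool used in this fact check: it flags frames where the pixel difference from the previous frame jumps sharply, which is where sudden appearances like the flag are worth inspecting by hand.

```python
# Illustrative heuristic: flag frames whose mean pixel difference from the
# previous frame spikes, a cue for sudden object appearances or scene jumps.
import cv2
import numpy as np

def flag_sudden_changes(video_path: str, threshold: float = 25.0) -> list[int]:
    capture = cv2.VideoCapture(video_path)
    previous = None
    flagged, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is not None:
            mean_diff = float(np.mean(cv2.absdiff(gray, previous)))
            if mean_diff > threshold:  # large jump: worth a manual look
                flagged.append(index)
        previous = gray
        index += 1
    capture.release()
    return flagged
```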

We further analyzed the video using the AI detection tool HIVE Moderation, which indicated a 99% probability that the video was generated using artificial intelligence. To validate this finding, we examined a keyframe from the video separately, which likewise returned a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: AI generated video of Israelis saying "Stop the War, Iran We are Sorry". 
 - Claimed On: Social Media
 - Fact Check: AI-Generated and Misleading
 

Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates, who argue the legislation turns businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The impetus for the bill was the death of a Kansas teenager who unknowingly took a fentanyl-laced pill he had purchased on Snapchat. The bill requires social media companies and other web communication providers to give the DEA users' names and other information when the companies have "actual knowledge" that illicit drugs are being distributed on their platforms.
There is an urgent need to look into this matter, as platforms like Snapchat and Instagram are among the applications netizens use constantly. If such apps facilitate the sale of drugs, they risk becoming major vehicles for drug distribution.
Threat to end-to-end encryption
End-to-end encryption has long been criticised by law enforcement for creating a "lawless space" that criminals, terrorists, and other bad actors can exploit for their illicit purposes. End-to-end encryption is important for privacy, but critics note that it is also abused by criminals, enabling cyber fraud and other cybercrimes.
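To see why this bill worries encrypted-messaging providers, consider a simplified sketch of what end-to-end encryption means in practice: the platform only ever handles ciphertext, so it has no straightforward way of gaining "actual knowledge" of what users are saying. The example below uses a shared symmetric key purely for illustration; real E2EE systems derive keys on the users' devices through a key-exchange protocol such as Signal's.

```python
# Simplified illustration of end-to-end encryption: the platform relays only
# ciphertext and cannot read message content. A shared symmetric key stands in
# for a real device-to-device key exchange (e.g. the Signal protocol).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # in real E2EE this never reaches the server
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

ciphertext = sender.encrypt(b"meet at 5")   # all the platform's servers ever see
plaintext = recipient.decrypt(ciphertext)   # only the recipient's device recovers this

print(ciphertext)   # opaque bytes: the provider has no "actual knowledge" of content
print(plaintext)    # b'meet at 5'
```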
Cases of drug peddling on social media platforms
Getting drugs on social media can be as easy as calling an Uber. A survey found that access to illegal drugs on social media applications is "staggering" and has contributed to rising deaths among young people from fentanyl overdoses, alongside suicide, gun violence, and accidents.
According to another survey, drug dealers use slang, emoji, QR codes, and disappearing messages to reach customers while avoiding content monitoring measures on social networking platforms. Dealers are frequently active on numerous platforms at once, advertising their products on Instagram while providing their WhatsApp or Snapchat handles for queries, making it difficult for law enforcement officials to crack down on the transactions.
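The evasion problem is easy to see with a toy example. The block list, messages, and slang below are purely illustrative; they simply show why a literal keyword filter, of the kind platforms often start with, misses posts that rely on slang, emoji, or deliberate misspellings.

```python
# Toy example of why simple keyword filters are easy to evade: dealers using
# slang, emoji, or misspellings slip past a literal block list.
BLOCKED_TERMS = {"fentanyl", "oxycodone", "pills for sale"}

def is_flagged(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_flagged("pills for sale, DM me"))         # True: literal match
print(is_flagged("got blues 💊 hmu on snap"))       # False: slang and emoji evade the list
print(is_flagged("f3nt available, QR in bio"))      # False: misspelling evades the list
```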
This is why social media platforms need to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug cases occurring on their services, such as the Snapchat case mentioned above. In many other cases, dealers sell drugs through Instagram, Snapchat, and similar apps; when one account is blocked, they simply create another. Blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
It is important that the sale of drugs on social media platforms is reported as a crime. Under the bill, companies would detect only activity related to drugs being sold through their platforms, that is, the specific conduct of bad actors and cybercriminals, rather than monitoring users in general; detection would focus on the particular activities on the applications where they occur. At present, social media platforms lack regulations to govern them, and their convenience has made them a major vehicle for drug sales.
Conclusion
Social media companies should be required to report such activity on their platforms to the Drug Enforcement Administration immediately, so that the DEA can take the necessary steps rather than the platform simply blocking the account; blocking alone does not stop these drug markets from operating online. Proper reporting mechanisms are needed, along with broader social media regulation, given the enormous influence these platforms have on people.