#FactCheck - Viral Video Showing Man Frying Bhature on His Stomach Is AI-Generated
A video circulating on social media shows a man allegedly rolling out bhature on his stomach and then frying them in a pan. The clip is being shared with a communal narrative, with users making derogatory remarks while falsely linking the act to a particular community.
CyberPeace Foundation’s research found the viral claim to be false. Our probe confirms that the video is not real but has been created using artificial intelligence (AI) tools and is being shared online with a misleading and communal angle.
Claim
On January 5, 2025, several users shared the viral video on the social media platform X (formerly Twitter). One such post carried a communal caption suggesting that the person shown in the video does not belong to a particular community, along with offensive remarks about hygiene and food practices.
- Post Link: https://x.com/RightsForMuslim/status/2008035811804291381
- Archive Link: https://archive.ph/lKnX5

Fact Check:
Upon closely examining the viral video, several visual inconsistencies and unnatural movements were observed, raising suspicion about its authenticity. These anomalies are commonly associated with AI-generated or digitally manipulated content.
To verify this, the video was analysed using the AI detection tool HIVE Moderation. According to the tool’s results, the video was found to be 97 percent AI-generated, strongly indicating that it was not recorded in real life but synthetically created.
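Detection tools such as HIVE Moderation typically return a probability score rather than a simple yes/no verdict, and fact-checkers apply a threshold to decide when a clip is "likely AI-generated." As an illustration only (the function name and thresholds below are hypothetical and are not Hive's actual API), such triage logic might look like:

```python
def classify_ai_likelihood(score: float, threshold: float = 0.9) -> str:
    """Map a detector's AI-likelihood score (0.0-1.0) to a verdict label.

    Hypothetical triage helper: the labels and the 0.9 / 0.5 cut-offs are
    illustrative assumptions, not Hive Moderation's actual output format.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= threshold:
        return "likely AI-generated"
    if score >= 0.5:
        return "inconclusive"
    return "likely authentic"

# The viral video scored 0.97 (97 percent) on the detector.
print(classify_ai_likelihood(0.97))  # likely AI-generated
```

In practice, a high score is corroborating evidence alongside the visual inconsistencies noted above, not a standalone proof, since no detector is perfectly accurate.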

Conclusion
CyberPeace Foundation’s research clearly establishes that the viral video is AI-generated and does not depict a real incident. The clip is being deliberately shared with a false and communal narrative to mislead users and spread misinformation on social media. Users are advised to exercise caution and verify content before sharing such sensational and divisive material online.
Related Blogs
Introduction
The rise of unreliable social media newsgroups has significantly altered how people consume and interact with news, contributing to the spread of unverified and misleading content. Unlike traditional news outlets that adhere to journalistic standards, these newsgroups often lack fact-checking and editorial oversight, enabling the rapid dissemination of false or distorted information. Social media has also transformed individuals into active content creators. Social media newsgroups (SMNs) are social media platforms used as sources of news and information. According to a survey by the Pew Research Center (July-August 2024), 54% of U.S. adults now rely on social media for news. The rise of SMNs has raised concerns over the integrity of online news and undermined trust in legitimate news sources. Users are advised to consume news from authentic sources or channels available on social media platforms.
The Growing Issue of Misinformation in Social Media Newsgroups
Social media newsgroups have become both a source of vital information and a conduit for misinformation. While these platforms allow rapid news sharing and facilitate political and social campaigns, they also pose significant risks of unverified information. Misleading information, often driven by algorithms designed to maximise user engagement, proliferates in these spaces. This has led to increasing challenges, as SMNs cater to diverse communities with varying political affiliations, gender demographics, and interests. The result is often the creation of echo chambers where information is not critically assessed, amplifying confirmation bias and enabling the unchecked spread of misinformation. A prominent example is the false narratives surrounding COVID-19 vaccines that spread across SMNs, contributing to widespread vaccine hesitancy and public health risks.
Understanding the Susceptibility of Online Newsgroups to Misinformation
Several factors make social media newsgroups particularly susceptible to misinformation. Some of the factors are listed below:
- The lack of robust fact-checking mechanisms in social media newsgroups allows false narratives to spread easily.
- The lack of expertise from admins of online newsgroups, who are often regular users without journalism knowledge, can result in the spreading of inaccurate information. Their primary goal of increasing engagement may overshadow concerns about accuracy and credibility.
- The anonymity of users exacerbates the problem of misinformation. It allows users to share unverified or misleading content without accountability.
- The viral nature of social media also leads to the vast spread of misinformation to audiences instantly, often outpacing efforts to correct it.
- The sheer volume of posts and user engagement makes it difficult to moderate content effectively, posing significant challenges for platforms.
- Platform algorithms designed to maximise user engagement inadvertently amplify sensational or emotionally charged content, which is more likely to be false.
Consequences of Misinformation in Newsgroups
The societal impacts of misinformation in SMNs are profound. Political polarisation fuels one-sided views and creates deep divides in democratic societies. Health risks emerge when false information spreads about critical issues, as with anti-vaccine movements or misinformation during public health crises. In the long term, misinformation can destabilise governments, erode trust in both traditional and social media, and undermine democracy. If unaddressed, these consequences will continue to ripple through society, perpetuating false narratives that shape public opinion.
Steps to Mitigate Misinformation in Social Media Newsgroups
- Educating users in social media literacy can empower them to critically assess the information they encounter, reducing the spread of false narratives.
- Introducing stricter platform policies, including penalties for deliberately sharing misinformation, may act as a deterrent against sharing unverified information.
- Collaborative fact-checking initiatives with involvement from social media platforms, independent journalists, and expert organisations can provide a unified front against the spread of false information.
- From a policy perspective, a holistic approach that combines platform responsibility with user education and governmental and industry oversight is essential to curbing the spread of misinformation in social media newsgroups.
Conclusion
The emergence of social media newsgroups has revolutionised the dissemination of information, but the rapid spread of misinformation through them poses a significant challenge to the integrity of news in the digital age. The problem is amplified by algorithmic echo chambers and unchecked user engagement, with profound societal implications. A multi-faceted approach is required to tackle these issues, combining stringent platform policies, AI-driven moderation, and collaborative fact-checking initiatives. Empowering users through media literacy is equally important, promoting critical thinking and building cognitive defences. By adopting these measures, we can better navigate the complexities of consuming news from social media newsgroups and preserve the reliability of online information. Above all, users should consume news from authoritative sources available on social media platforms.

Introduction
Fundamentally, artificial intelligence (AI) is the greatest extension of human intelligence. It is the culmination of centuries of logic, reasoning, mathematics, and creativity: machines trained to reflect cognition. Yet such intelligence no longer resembles intelligence at all when it is placed in the hands of the irresponsible, the malicious, or the perverse, and unleashed into the wild with minimal safeguards. It becomes a tool of debasement rather than enlightenment.
Recent incidents involving sexually explicit photographs created by AI on social media sites reveal an extremely unsettling reality. When intelligence is detached from accountability, morality, and governance, it corrodes society rather than elevates it. We are seeing a failure of stewardship rather than just a failure of technology.
The Cost of Unchecked Intelligence
The AI chatbot Grok, which operates under Elon Musk’s X (formerly Twitter), is at the centre of a debate that goes beyond a single platform or product. The romanticisation of “unfiltered” knowledge and the perilous notion that innovation should come before accountability are signs of a larger lapse in the digital ecosystem. In the name of freedom, we have allowed mechanisms that can be weaponised against human dignity, especially the dignity of women and children.
When a machine can digitally undress women, morph photographs, or produce sexualised portrayals of children with a few keystrokes, we are no longer discussing artistic expression or experimental AI. We are confronting algorithmic violence. Even though physical touch is absent, the harm it causes is genuine, long-lasting, and deeply personal.
The Regulatory Red Line
A major inflexion point was reached when the Indian government responded by ordering a thorough technical, procedural, and governance-level audit. The order acknowledges that AI systems are not isolated entities, and that the platforms deploying them are not neutral pipes but intermediaries with responsibilities. The Bharatiya Nyaya Sanhita, the IT Act, the IT Rules 2021, and the possible removal of Section 79 safe-harbour protections all make it quite evident that innovation does not confer automatic immunity.
However, the fundamental dilemma cannot be resolved by legislation alone. AI is hailed as a force multiplier for innovation, productivity, and progress, but when incentives are skewed towards engagement, virality, and shock value, its misuse shows how easily intelligence can turn into ugliness. The more provocative the output, the more attention it receives; the more attention, the greater the profit. In such an ecosystem, restraint becomes a business disadvantage.
The Aftermath
Grok’s own acknowledgement that “safeguard lapses” enabled the creation of images showing children in skimpy attire underscores a troubling reality: safety was not absent due to impossibility, but due to insufficiency. Sophisticated filtering, more robust monitoring, and stricter oversight were always possible; they were simply not prioritised. When a system asserts that “no system is 100% foolproof,” it must also acknowledge that there is no acceptable margin of error when it comes to child protection.
Most troubling is the casual normalisation of such lapses. Characterising these instances as “isolated cases” risks trivialising what are in fact systemic design decisions. AI systems trained on enormous amounts of human data inherit not only intelligence but also bias, misogyny, and power imbalances.
Conclusion
What is required today is recalibration. Platforms need to shift from reactive compliance to proactive accountability. Safeguards must be incorporated at the architectural level; they cannot be cosmetic or post-facto. Governance must encompass enforced ethical boundaries in addition to terms of service. The idea that “edgy” AI is a sign of advancement must also be rejected by society.
Artificial intelligence never promised vulgarity in the guise of freedom; it promised improvement, support, and augmentation. The fundamental core of intelligence is lost when it is used as a tool for degradation. What is left is a choice between principled innovation and unbridled novelty, between responsibility and spectacle, between intelligence as purpose and intellect as power.
References
https://www.rediff.com/news/report/govt-orders-x-review-of-grok-over-explicit-content/20260103.htm

Executive Summary:
A video has gone viral purportedly showing mass cheating during the UPSC Civil Services Exam conducted in Uttar Pradesh, with students filmed copying answers. However, thorough research revealed that the incident took place during an LLB exam, not the UPSC Civil Services Exam. The clip is an example of misleading content being shared to spread misinformation.

Claim:
Mass cheating took place during the UPSC Civil Services Exam in Uttar Pradesh, as shown in a viral video.

Fact Check:
Upon careful verification, it has been established that the viral video being circulated does not depict the UPSC Civil Services Examination, but rather an incident of mass cheating during an LLB examination. Reputable media outlets, including Zee News and India Today, have confirmed that the footage is from a law exam and is unrelated to the UPSC.
The video in question was reportedly live-streamed by one of the LLB students during the examination, which was held in February 2024 at City Law College in Lakshbar Bajha, in the Safdarganj area of Barabanki, Uttar Pradesh.
The misleading attempt to associate this footage with the highly esteemed Civil Services Examination is not only factually incorrect but also unfairly casts doubt on a process that is known for its rigorous supervision and strict security protocols. It is crucial to verify the authenticity and context of such content before disseminating it, in order to uphold the integrity of our institutions and prevent unnecessary public concern.

Conclusion:
The viral video purportedly showing mass cheating during the UPSC Civil Services Examination in Uttar Pradesh is misleading and not genuine. Upon verification, the footage has been found to be from an LLB examination, not related to the UPSC in any manner. Spreading such misinformation not only undermines the credibility of a trusted examination system but also creates unwarranted panic among aspirants and the public. It is imperative to verify the authenticity of such claims before sharing them on social media platforms. Responsible dissemination of information is crucial to maintaining trust and integrity in public institutions.
- Claim: A viral video shows UPSC candidates copying answers.
- Claimed On: Social Media
- Fact Check: False and Misleading