#FactCheck-Fake Video of Mass Cheating at UPSC Exam Circulates Online
Executive Summary:
A video purportedly showing mass cheating during the UPSC Civil Services Exam in Uttar Pradesh has gone viral. The video claims to show students being filmed copying answers. However, thorough research established that the incident occurred during an LLB exam, not the UPSC Civil Services Exam. This is an example of misleading content being shared to spread misinformation.

Claim:
Mass cheating took place during the UPSC Civil Services Exam in Uttar Pradesh, as shown in a viral video.

Fact Check:
Upon careful verification, it has been established that the viral video being circulated does not depict the UPSC Civil Services Examination, but rather an incident of mass cheating during an LLB examination. Reputable media outlets, including Zee News and India Today, have confirmed that the footage is from a law exam and is unrelated to the UPSC.
The video in question was reportedly live-streamed by one of the LLB students during an examination held in February 2024 at City Law College in Lakshbar Bajha, in the Safdarganj area of Barabanki, Uttar Pradesh.
The misleading attempt to associate this footage with the highly esteemed Civil Services Examination is not only factually incorrect but also unfairly casts doubt on a process that is known for its rigorous supervision and strict security protocols. It is crucial to verify the authenticity and context of such content before disseminating it, in order to uphold the integrity of our institutions and prevent unnecessary public concern.

Conclusion:
The viral video purportedly showing mass cheating during the UPSC Civil Services Examination in Uttar Pradesh is misleading and not genuine. Upon verification, the footage has been found to be from an LLB examination, not related to the UPSC in any manner. Spreading such misinformation not only undermines the credibility of a trusted examination system but also creates unwarranted panic among aspirants and the public. It is imperative to verify the authenticity of such claims before sharing them on social media platforms. Responsible dissemination of information is crucial to maintaining trust and integrity in public institutions.
- Claim: A viral video shows UPSC candidates copying answers.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This action puts a big responsibility on large technology firms to put identifiers in media produced by generative models. These identifiers should help fight misinformation and address digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking for all deep-synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These markers can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that remain intact through standard image manipulation such as resizing or compression. Likewise, C2PA metadata standards enable users to track the source and provenance of an item of content.
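To make the idea of imperceptible pixel-level markers concrete, here is a minimal least-significant-bit (LSB) sketch in Python. This is a classic textbook technique, not SynthID's actual (proprietary) algorithm: it hides a message by overwriting only the lowest bit of each pixel value, changing each value by at most 1.

```python
def embed_watermark(pixels, message):
    """Embed each bit of `message` into the least significant bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back `length` bytes from the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode()

pixels = [128, 64, 200, 33] * 16   # toy 64-value "image"
marked = embed_watermark(pixels, "AI")
print(extract_watermark(marked, 2))  # prints "AI"
```

Note the trade-off this sketch illustrates: an LSB watermark is invisible but fragile, since compression or resizing destroys the lowest bits. Production systems like SynthID are specifically engineered to survive such transformations, which is why they are far harder to build.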
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: adversaries with expertise can use cropping, editing, or AI tools to remove visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and verifying watermarks across billions of items of online content requires enormous computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which verify authenticity without sacrificing privacy. These new techniques hold promise for overcoming technical deficiencies and making watermarking more secure.
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or comprehend the label. This mirrors the broader misinformation problem: it is not sufficient simply to mark fake content; users must also be taught how to think critically about the information they consume. Public education campaigns, digital media literacy and embedding watermarking labels within user-friendly UI elements are necessary to make this technology genuinely effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might necessitate the embedding of metadata that will disclose the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists utilising AI tools for creative or informative reasons. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to achieve a balance by employing privacy-protection watermarking strategies that verify the origin of the content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, misinformation spread by AI on vaccines, treatments and public health interventions caused widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish between authentic sources and manipulated media and protected public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework from Adobe, Microsoft and others embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed with its Imagen text-to-image model, invisibly watermarks AI-generated images in a way designed to survive common edits. The Partnership on AI (PAI) is also taking a leadership role by building ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media content are an essential step toward creating a safer and more credible digital world. As artificial media becomes increasingly indistinguishable from authentic content, the demand for transparency, origin, and responsibility increases. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, compliant and privacy-friendly. India is especially at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. Empowering themselves with global lessons, applying best-in-class watermarking platforms and promoting public awareness can enable the nation to acquire a level of resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency

Executive Summary:
A photo circulating on social media claims to show Rowan Atkinson, the actor famous for playing Mr. Bean, lying sick in bed. However, this claim is false. The image is a digitally altered picture of Mr. Barry Balderstone from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. Reverse image searches and media news reports confirm that the original photo is of Barry, not Rowan Atkinson. Furthermore, there are no reports of Atkinson being ill; he was recently seen attending the 2024 British Grand Prix. Thus, the viral claim is baseless and misleading.

Claims:
A viral photo of Rowan Atkinson aka Mr. Bean, lying on a bed in sick condition.



Fact Check:
When we received the posts, we first ran a keyword search based on the claim, but found no posts supporting it. We did, however, find an interview video showing Rowan Atkinson attending an F1 race on July 7, 2024.

We then reverse-searched the viral image and found a news report featuring a photo similar to the viral one; the T-shirt appears identical in both images.

The man in the photo is Barry Balderstone, a civil engineer from Bollington, England, who died in October 2019 of advanced Parkinson’s disease. According to the news report, Barry suffered from multiple illnesses, and his application for extensive healthcare reimbursement was rejected by the East Cheshire Clinical Commissioning Group.
Taking a cue from this, we then analyzed the image with an AI image detection tool named TrueMedia. The tool found the image to be AI-manipulated: the original photo was altered by replacing the face with that of Rowan Atkinson.



Hence, it is clear that the viral image claiming to show Rowan Atkinson bedridden is fake and misleading. Netizens should verify content before sharing it on the internet.
Conclusion:
Therefore, it can be summarized that the photo claiming Rowan Atkinson in a sick state is fake and has been manipulated with another man’s image. The original photo features Barry Balderstone, the man who was diagnosed with stage 4 Parkinson’s disease and subsequently died in 2019. In fact, Rowan Atkinson seemed perfectly healthy recently at the 2024 British Grand Prix. It is important for people to check on the authenticity before sharing so as to avoid the spreading of misinformation.
- Claim: A Viral photo of Rowan Atkinson aka Mr. Bean, lying on a bed in a sick condition.
- Claimed on: X, Facebook
- Fact Check: Fake & Misleading
Executive Summary:
Cyber incidents evolve with time and are designed to lure people through social networking sites and messaging services. Recently, a spate of messages has been circulating alleging that TRAI is offering ‘3 months free recharge with free voice calls and 4G/5G internet with 200 GB free data’. These messages display the TRAI logo alongside attractive offers to trick users into revealing their personal details. This blog discusses how this fake free-recharge scheme works, its methods, and guidelines on how to avoid such schemes. It emphasizes the importance of vigilance and verification when receiving links, the need to report suspicious activity, and educating others to prevent identity theft and protect personal information.
Claim:
The circulated message makes an enticing offer: free mobile recharge for 3 months, with unlimited free voice calls and 200 GB of 4G/5G data, bearing the TRAI logo. The key characteristics of the false claim are:
- Official Branding: The TRAI logo serves as a deceptive facade of credibility.
- Unrealistic Offers: The message promises a free recharge for an extended period, classic fraudsters’ bait.
- Urgency and Exclusivity: The offer is valid for a limited time only, creating urgency that pushes the recipient to accept without verification.
The Deceptive Scheme:
Organized systematically, the fraudulent campaign usually proceeds in several steps, all of which aim at extracting the victim’s personal data. Here’s a breakdown of the scheme:
1. Initial Contact: The messages or calls reach users’ inboxes or phone numbers through social media applications such as WhatsApp or through text messages. They claim that the user has been chosen for a special offer from TRAI, which piques the user’s interest.
2. Information Request: To claim the purported offer, users are directed to a website or asked to reply with personal details, including:
- Phone number
- State of residence
- SIM provider details
This information is valuable to the scammers, who harvest it to commit identity theft or sell it on the shady part of the internet known as the ‘Dark Web’.
3. Fake Confirmation: After the user provides the information, a congratulatory message appears claiming their phone number is eligible for the offer. The user is then pushed to forward the message to many contacts on WhatsApp to ‘unlock’ the offer.
4. Pressure Tactics: The message often creates a sense of time constraint or fear, psychologically pressuring the user into handing over information. For example, users are told that if they do not ‘act now’, they will lose their mobile service.
Analyzing the Fraudulent Campaign
The TRAI fraudulent recharge scheme case depicts that social engineering is used in cyber crimes. Here are some key aspects that characterize this campaign:
- Sophisticated Social Engineering
Scammers exploit people’s trust in official bodies such as TRAI. By using official TRAI logos and official-sounding language, they can deceive even cautious people.
- Viral Spread
The user is compelled to share the message with friends and groups, an effective strategy for spreading the scam. It not only propagates the fraudulent message but also harvests the details of additional victims.
- Technical Analysis

- Domain Name: SGOFF[.]CYOU
- Registry Domain ID: D472308342-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-07-24T18:50:48.0Z
- Creation Date: 2024-07-19T18:48:44.0Z
- Registry Expiry Date: 2025-07-19T23:59:59.0Z
- Registrar: West263 International Limited
- Registrar IANA ID: 1915
- Registrant State/Province: Anhui
- Registrant Country: CN
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
- DNSSEC: unsigned
Cloudflare Inc. is used to mask the scam’s infrastructure. TRAI’s genuine website uses a long-established domain, whereas this URL was registered only days before the messages began circulating, a strong indicator that the link is a scam.
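The domain-age heuristic described above can be automated. As a minimal illustration, the Python sketch below parses the Creation Date field from a WHOIS record (field names follow the record shown; a real check would query a WHOIS server live) and reports how old the domain is, since very young domains are a common scam signal.

```python
from datetime import datetime, timezone

def domain_age_days(whois_text, now=None):
    """Parse the Creation Date field of a WHOIS record and return the domain age in days."""
    for line in whois_text.splitlines():
        if line.strip().lower().startswith("creation date:"):
            value = line.split(":", 1)[1].strip()
            # WHOIS timestamps like 2024-07-19T18:48:44.0Z
            created = datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
                tzinfo=timezone.utc)
            now = now or datetime.now(timezone.utc)
            return (now - created).days
    raise ValueError("no Creation Date field found")

record = """Domain Name: SGOFF[.]CYOU
Creation Date: 2024-07-19T18:48:44.0Z
Registry Expiry Date: 2025-07-19T23:59:59.0Z"""

age = domain_age_days(record, now=datetime(2024, 8, 1, tzinfo=timezone.utc))
print(age)  # prints 12: registered less than two weeks before the scam circulated
```

A simple rule such as "treat domains younger than 90 days with suspicion" will not catch every scam, but it would have flagged this one immediately.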

The graph indicates that some of the communicated files and websites are malicious.
CyberPeace Advisory and Best Practice:
In light of the growing threat posed by such scams, the Research Wing of CyberPeace recommends the following best practices to help users protect themselves:
1. Verify Communications: Always visit the organization’s official website or call its official customer-care numbers to confirm any offer before acting on it.
2. Do Not Share Personal Information: No genuine organization will ask people for personal details over unsolicited calls or messages. Tread carefully and do not provide information that could lead to identity theft when dealing with such offers.
3. Report Fraudulent Activity: If you receive suspicious calls or messages, report them to the National Cyber Crime Reporting Portal at www.cybercrime.gov.in or call 1930. Reporting such scams helps the authorities track and fight them.
4. Educate Others: Raise awareness among friends and family by sharing information about such scams. Education helps prevent people from falling prey to fraudulent schemes.
5. Use Reliable Resources: Always refer to official sources or websites for any offers or promotions.
Conclusion:
The ‘3 months free recharge’ scheme bearing the TRAI logo is a fraudulent scam. There is no announcement from TRAI, on its official website or elsewhere, about any such scheme. Though the offer looks attractive, it is deceptive: the scammers’ goal is to collect individuals’ personal details. Before clicking any link, verify the authenticity of the information, and report such incidents to spread awareness. Always stay safe and vigilant.