#FactCheck - Fake Video of Mass Cheating at UPSC Exam Circulates Online
Executive Summary:
A video circulating widely online purportedly shows mass cheating during the UPSC Civil Services Exam in Uttar Pradesh, with students filmed copying answers. However, thorough research established that the incident took place during an LLB exam, not the UPSC Civil Services Exam. This is an example of misleading content being shared to spread misinformation.

Claim:
Mass cheating took place during the UPSC Civil Services Exam in Uttar Pradesh, as shown in a viral video.

Fact Check:
Upon careful verification, it has been established that the viral video being circulated does not depict the UPSC Civil Services Examination, but rather an incident of mass cheating during an LLB examination. Reputable media outlets, including Zee News and India Today, have confirmed that the footage is from a law exam and is unrelated to the UPSC.
The video in question was reportedly live-streamed by one of the students during an LLB examination held in February 2024 at City Law College in Lakshbar Bajha, in the Safdarganj area of Barabanki, Uttar Pradesh.
The misleading attempt to associate this footage with the highly esteemed Civil Services Examination is not only factually incorrect but also unfairly casts doubt on a process that is known for its rigorous supervision and strict security protocols. It is crucial to verify the authenticity and context of such content before disseminating it, in order to uphold the integrity of our institutions and prevent unnecessary public concern.

Conclusion:
The viral video purportedly showing mass cheating during the UPSC Civil Services Examination in Uttar Pradesh is misleading and not genuine. Upon verification, the footage has been found to be from an LLB examination, not related to the UPSC in any manner. Spreading such misinformation not only undermines the credibility of a trusted examination system but also creates unwarranted panic among aspirants and the public. It is imperative to verify the authenticity of such claims before sharing them on social media platforms. Responsible dissemination of information is crucial to maintaining trust and integrity in public institutions.
- Claim: A viral video shows UPSC candidates copying answers.
- Claimed On: Social Media
- Fact Check: False and Misleading

In today's digital era, a nation's strength is no longer measured only by the missiles and aircraft in its inventory; it also depends on how well the nation defends its digital borders. In the global security environment in which modern militaries operate, major infrastructure such as power grids and dams is increasingly targeted by cyberattacks. When communication channels are vulnerable to information breaches, cybersecurity becomes a crucial component of national defence.
Why is cybersecurity a crucial national security concern in the modern era?
Cybersecurity refers to the technologies and procedures that shield digital devices, networks, and systems from unauthorised access or attack. In the context of national security, cyberattacks are silent, in contrast to conventional warfare: they are swift and capable of causing massive disruption without a single case of physical infiltration. A cybersecurity breach in a military network may allow hostile states, terrorist organisations, or criminal networks to steal classified information or disrupt military infrastructure.
To fully comprehend the significance of cybersecurity, let's examine several key areas:
- Protecting critical infrastructure- Nations today rely heavily on digital networks to run vital services such as banking, transportation, electricity, water supply, and healthcare. A cyberattack on these systems could cause disruption across the country and interfere with daily life, which is why a nation's military forces work closely with other government agencies and private organisations to create a strong security ecosystem in this sector.
- Safeguarding military operations- The armed forces rely heavily on digital tools for communication, mission planning, surveillance, and coordination. If cyber intruders gain access to these systems, major operational hurdles can follow: breached mission details, disrupted channels, and compromised confidentiality of military operations. These risks make cybersecurity as important as protecting physical bases and security architecture.
- Preventing cyber warfare- As the geopolitical landscape evolves, state and non-state actors are resorting to cyberattacks to gather intelligence, disrupt security networks, and influence political outcomes. Strong cybersecurity helps nations deter, detect, defend against, and respond to such threats effectively.
- Securing government databases- Government databases store sensitive information about citizens, military assets, diplomatic affairs, and major national infrastructure. If compromised, they can weaken a nation's strategic position and put national security at grave risk, which makes protecting government data a priority.
How can countries improve their cybersecurity defences?
Countries all over the world are developing their cyber capabilities and using a variety of tactics to protect against the growing number of cyber threats. A few of these are:
- Creating cyber defence units- Most contemporary armed forces have created specialised cyber units devoted to threat identification. Their responsibilities centre on monitoring dangers, stopping intrusions, and reacting quickly to cyberattacks.
- Public-private partnerships- Governments collaborate with private businesses and technology suppliers to safeguard vital industries such as energy grids, financial networks, and communication systems. These collaborations also foster innovation that improves the overall defence against cyberattacks.
- Establishing international collaborations- Cyber threats do not respect borders. As a result, countries are increasingly sharing intelligence, best practices, and defensive strategies with their allies. Groups like NATO conduct joint cyber defence exercises to prepare for digital conflict.
Together, these collaborations help build a united front against cybercrime.
Core Pillars of Modern Military Cyber Defence
Modern defence strategies are built on several key pillars designed to prevent, detect, and respond to cyber threats:
- Cyberspace as an operational domain- Militaries now treat cyberspace, like land, air, sea, and space, as a domain where wars can be fought. They are developing dedicated cyber units to conduct digital operations, defend networks, and engage in counter-cyber activities when required.
- Active and proactive defence- Instead of passively waiting for attacks to happen, militaries use real-time monitoring tools to block threats as they arise. Proactive defence goes a step further by hunting for potential threats before they reach the networks (a minimal sketch of this detect-and-respond loop follows this list).
- Protection of vital infrastructure- The armed forces collaborate closely with civilian organisations and agencies to secure infrastructure critical to the country. It is protected from cyberattacks through layered defence, which includes encryption, stringent access control, and ongoing monitoring.
- Strengthening alliances- Countries can develop a strong, well-coordinated defence system by exchanging intelligence and carrying out cooperative cyber operations.
- Fostering innovation and workforce development- Cyber threats evolve rapidly, which calls for militaries to invest in advanced technologies such as AI-driven systems and secure cloud platforms, alongside continuous cybersecurity training.
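To make the active-defence pillar concrete, here is a minimal, illustrative Python sketch of the detect-and-respond loop mentioned above. It is a toy example under simple assumptions (a standard Linux SSH auth log at /var/log/auth.log, a fixed failure threshold), not any military system's actual tooling: it tails the log and flags source IPs that accumulate repeated failed logins, the kind of signal a real-time monitoring tool would escalate or block.

```python
import re
import time
from collections import defaultdict

LOG_PATH = "/var/log/auth.log"  # assumed location; varies by distribution
THRESHOLD = 5                   # failed attempts before an IP is flagged
FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def follow(path):
    """Yield lines appended to a log file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end so we only see new events
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def monitor():
    failures = defaultdict(int)
    for line in follow(LOG_PATH):
        match = FAILED_RE.search(line)
        if not match:
            continue
        ip = match.group(1)
        failures[ip] += 1
        if failures[ip] >= THRESHOLD:
            # A real deployment would push a firewall rule or alert a
            # security operations centre; here we only report the source.
            print(f"ALERT: {ip} exceeded {THRESHOLD} failed logins")
            failures[ip] = 0

if __name__ == "__main__":
    monitor()
```

In practice such alerts would feed a SIEM and the blocking step would be automated; the sketch only shows the monitoring loop that the pillar describes.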
Conclusion
Modern militaries now defend their digital networks with the same seriousness as their land and seas. Cybersecurity has become the new line of defence, protecting government data and vital defence infrastructure from serious and often unseen threats. With solid alliances, cutting-edge technologies, a skilled workforce, and a proactive defence strategy, countries are building a secure, robust, and resilient digital future.
References
- https://www.ssh.com/academy/cyber-defense-strategy-dod-perspective#:~:text=Defence%20organizations%20are%20prime%20targets,SSH%20Key%20Management%20and%20Compliance
- https://www.fortinet.com/resources/cyberglossary/cyber-warfare#:~:text=Advanced%20endpoint%20security%20adds%20proactive,information%20by%20halting%20unauthorized%20transfers
- https://medium.com/@lynnfdsouza/the-impact-of-cyber-warfare-on-modern-military-strategies-c77cf6d1a788
- https://ccoe.dsci.in/blog/why-cybersecurity-is-critical-for-national-defense-protecting-countries-in-the-digital-age

Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 31% believe it will be more difficult to determine whether messages from merchants or delivery services are genuine, and 57% believe phishing emails and texts will be more credible. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, yielded 7,100 responses. Worries about AI may lead some people to cut back on their online shopping; 19% of those surveyed stated they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
The Rise of AI Scams on Social Media
Cybercriminals are expected to exploit powerful artificial intelligence capabilities on social media in 2024. Because these tools make it possible to create realistic images, videos, and audio, social platforms become goldmines for scammers. Anticipate the exploitation of influencer and celebrity identities by cybercriminals.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the dark turn cyberbullying could take in 2024 through deepfake technology. This cutting-edge capability is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and well-being.
In addition to spreading false information, cyberbullies can alter public photographs and re-share manipulated versions, exacerbating the harm done to children and their families. The report warns that as these deceptive images and words grow more convincing, they can cause serious, long-lasting damage to victims' identity, privacy, and overall happiness.
Evolution of GenAI Fraud in 2023
Persistent frauds and fake emails are nothing new, and people in general have become rather adept at recognising the ones in wide circulation. But as scams become more precise, for example using AI-generated audio to mimic a loved one's distress call or drawing on highly personal information, users need to be far more cautious. The rise in popularity of generative AI brings a new wrinkle, as attackers can use these systems to refine their attacks:
- Writing messages more skillfully in order to deceive consumers into sending sensitive information, clicking on a link, or downloading a file.
- Recreating emails and business websites as realistically as possible so as not to arouse suspicion in the minds of the victims.
- Cloning people's faces and voices to create audio or visual deepfakes that the target audience cannot detect, a capability that could greatly amplify schemes like CEO fraud.
- Holding conversations and responding to victims convincingly, now that generative AIs can sustain a dialogue.
- Conducting psychological manipulation campaigns more quickly, at lower cost, and with greater sophistication, making them harder to detect. Generative AI already on the market can write text, clone voices, generate images, and program websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is also making cybercriminals increasingly dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the global buzz surrounding the 2024 Olympic Games will make it an ideal time for scams. Con artists will take advantage of customers' excitement by targeting fans eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this prominent event, vigilance is essential to protect one's personal records and financial data.
McAfee's Own Bot to Help Users Screen and Authenticate the Messages They Receive
McAfee is developing precisely this kind of technology. It is critical to emphasise that solving the issue is a continuous process: bad actors manipulate AI too, and one trick con artists can pull off is to use the ruses consumers fall for as training data for more advanced algorithms. Scammers can thus deploy these tools, test them on big user bases, and improve them over time.
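As a rough illustration of what message screening might involve (a hypothetical sketch, not McAfee's actual implementation), a simple rule-based scorer can flag common scam signals before any heavier model is consulted; every keyword and threshold below is an assumption made for the example:

```python
import re

# Hypothetical signals; a production system would rely on trained models
# and URL reputation services rather than a fixed keyword list.
SUSPICIOUS_PHRASES = [
    "urgent", "verify your account", "act now",
    "gift card", "limited time", "you have won",
]
URL_RE = re.compile(r"https?://([^/\s]+)")
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}

def scam_score(message: str) -> float:
    """Return a score in [0, 1]; higher means more scam-like."""
    text = message.lower()
    score = 0.15 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    for domain in URL_RE.findall(text):
        if domain in SHORTENERS:
            score += 0.3  # shortened links hide the real destination
    if re.search(r"[$€₹]\d+", text):
        score += 0.1      # unsolicited money amounts are a weak signal
    return min(score, 1.0)

if __name__ == "__main__":
    msg = "URGENT: verify your account at https://bit.ly/x to claim $500"
    print(f"scam score: {scam_score(msg):.2f}")  # several signals fire
```

Real screening tools combine many such weak signals with machine-learned classifiers; the value of even a crude score is that it can triage which messages deserve a closer look.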
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven internet fraud targeting them around the holidays. Social networking poses a growing threat to users' privacy, and in 2024 attackers hope to exploit AI capabilities and use deepfake technology to exacerbate harassment. By mimicking voices and faces for intricate schemes, generative AI enables ever more complex fraud. Charity fraud is expected to surge, affecting both social and financial life, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee's screening bot highlights the ongoing struggle against evolving AI threats, and the need for continuous adaptation and greater user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/

Introduction
"In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend."
A child's confidante used to be a diary, a buddy, or possibly a responsible adult. These days, that confidante is a chatbot: invisible, industrious, and constantly online. ChatGPT and similar tools were developed to answer queries, draft emails, and simplify life. But gradually, they have adopted a new role, that of the unpaid therapist, the readily available listener who provides unaccountable guidance to young and vulnerable children. This function is frighteningly evident in the events unfolding in Mathew Raine & Maria Raine v. OPEN AI, INC. & ors., filed in the Superior Court of the State of California. The lawsuit, obtained by the BBC, charges OpenAI with wrongful death and negligence. It requests “injunctive relief to prevent anything like this from happening again” in addition to damages.
This is a heartbreaking tale of a boy, not yet seventeen, who made a genuine attempt to befriend an algorithm rather than family and friends, and found it affirming his hopelessness instead of directing him towards professional help. OpenAI's legal future may well be decided in a San Francisco courtroom, but the ethical issues this case raises already outweigh any verdict.
When Machines Mistake Empathy for Encouragement
The lawsuit claims that Adam began using ChatGPT for academic purposes but, over time, cast it in the role of a friend. Towards the end of 2024, he disclosed his worries about mental illness and suicidal thoughts. In an effort to “empathise”, the chatbot told him that many people find “solace” in imagining an escape hatch, thereby normalising suicidal thoughts rather than guiding him towards assistance. ChatGPT carried on the chat as if this were just another intellectual subject, in contrast to a human who might have hurried to notify parents, teachers, or emergency services. The lawsuit walks through conversations in which the teenager uploaded photographs of himself showing signs of self-harm, and notes that the programme “recognised a medical emergency but continued to engage anyway”.
This is not an isolated case. A report from March 2023 narrates how a Belgian man allegedly died by suicide after speaking with an AI chatbot. The Belgian newspaper La Libre reported that the man, Pierre, spent six weeks discussing climate change with the AI bot ELIZA; after the discussion became “increasingly confusing and harmful,” he took his own life. According to a guest essay published in The New York Times, a Common Sense Media survey released last month found that 72% of American youth reported using AI chatbots as friends. Almost one in eight had turned to them for “emotional or mental health support,” which translates to 5.2 million teenagers in the US. And nearly 25% of students who used Replika, an AI chatbot created for friendship, said they used it for mental health care, according to a recent study by Stanford researchers.
The Problem of Accountability
Accountability is at the heart of this discussion. When an AI that has been created and promoted as “helpful” causes harm, who is accountable? OpenAI admits that its technologies occasionally “do not behave as intended.” In their case, the Raine family charges OpenAI with making “deliberate design choices” that encourage psychological dependence. If proven, this will be not only a landmark in AI litigation but a turning point in how society defines negligence in the digital age. Young people remain most at risk, because they trust the chatbot as a personal confidante and are unaware that it cannot distinguish between seriousness and triviality, or between empathy and enablement.
A Prophecy: The De-Influencing of Young Minds
The prophecy of our time is stark: if kids aren't taught to view AI as a tool rather than a friend, we run the risk of producing a generation too readily influenced by unaccountable advice. We must now teach young people to resist over-reliance on algorithms for concerns of the heart and mind, just as society once taught them to question commercials, to spot propaganda, and to avoid peer pressure.
Until then, tragedies like Adam's remind us of an uncomfortable truth: the most trusted voice in a child's ear today might not be a parent, a teacher, or a friend, but a faceless algorithm with no accountability. And that is a world we must urgently learn to change.
CyberPeace has been at the forefront of advocating the ethical and responsible use of such AI tools. The solution lies in harmonising regulation, technological development, and user awareness and responsibility.
If you or anyone you know faces mental health concerns, anxiety, or similar issues, seek professional help and actively suggest it to others. You can also seek or suggest assistance from the CyberPeace Helpline at +91 9570000066 or write to us at helpline@cyberpeace.net
References
- https://www.bbc.com/news/articles/cgerwp7rdlvo
- https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html
- https://www.nytimes.com/2025/08/25/opinion/teen-mental-health-chatbots.html