#FactCheck: False Claim About Indian Flag Hoisted in Balochistan Amid the Success of Operation Sindoor
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical scenario in South Asia. Our research reveals that the video is misrepresented and actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. The search led us to an earlier social media post that clearly shows the event taking place in Surat, Gujarat, not Balochistan.

In the original clip, a music band performs in the middle of a crowd while people hold Indian flags and enjoy the event. The surroundings, the language on signboards, and the festive atmosphere all confirm that this is a patriotic celebration in India. A second photo, taken from a different angle, further corroborates this.

However, some individuals intent on spreading false information shared this video out of context, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. By pairing a local celebration with a fabricated narrative, they turned it into a political stunt. This is a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from a Tiranga Yatra held in Surat, Gujarat, India. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
In an era where organisations are increasingly interdependent through global supply chains, outsourcing and digital ecosystems, third-party risk has become one of the most vital aspects of enterprise risk management. The SolarWinds hack, the MOVEit vulnerabilities and recent software vendor attacks all serve as a reminder of the necessity to enhance Third-Party Risk Management (TPRM). As cyber risks evolve and become more sophisticated and as regulatory oversight sharpens globally, 2025 is a transformative year for the development of TPRM practices. This blog explores the top trends redefining TPRM in 2025, encompassing real-time risk scoring, AI-driven due diligence, harmonisation of regulations, integration of ESG, and a shift towards continuous monitoring. All of these trends signal a larger movement towards resilience, openness and anticipatory defence in an increasingly dependent world.
Real-Time and Continuous Monitoring becomes the Norm
Traditional TPRM relied on point-in-time assessments, typically performed at onboarding or on an annual cycle. By 2025, organisations are shifting towards continuous, real-time monitoring of their third-party ecosystems. Advanced tools now make it possible for companies to take a real-time pulse of their vendors' security by monitoring threat indicators, patching practices and changes in digital footprint. This change has been further spurred by the growth in cyber supply chain attacks, where attackers target vendors to gain access to larger organisations. Real-time monitoring enables timely detection of malicious activity, equipping organisations with a faster defence response. It also supports dynamic risk rating instead of reliance on outdated questionnaire-based scoring.
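The contrast with static, questionnaire-based scoring can be sketched in a few lines of code. Everything below (the signal names, the weights, and the vendor) is a hypothetical illustration of event-driven score updates, not the API of any real monitoring product:

```python
from dataclasses import dataclass, field

# Hypothetical threat signals and weights; a real tool would ingest
# telemetry such as CVE feeds and external attack-surface scans.
SIGNAL_WEIGHTS = {
    "unpatched_cve": 15,       # critical patch outstanding
    "leaked_credentials": 25,  # vendor credentials seen in a dump
    "expired_tls_cert": 5,     # hygiene indicator on public footprint
    "signal_resolved": -10,    # vendor remediated a prior finding
}

@dataclass
class VendorRiskProfile:
    name: str
    score: int = 0             # 0 = no observed risk; higher = riskier
    events: list = field(default_factory=list)

    def observe(self, signal: str) -> int:
        """Update the score as soon as a threat indicator is observed."""
        self.score = max(0, self.score + SIGNAL_WEIGHTS[signal])
        self.events.append(signal)
        return self.score

vendor = VendorRiskProfile("acme-hosting")
vendor.observe("unpatched_cve")       # score becomes 15
vendor.observe("leaked_credentials")  # score becomes 40
vendor.observe("signal_resolved")     # remediation lowers it to 30
```

The point of the sketch is that the score moves the moment a signal arrives, rather than waiting for the next annual questionnaire.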
AI and Automation in Risk Assessment and Due Diligence
Manual TPRM processes are no longer sustainable. In 2025, AI and machine learning are reshaping the TPRM lifecycle, from onboarding and risk classification to contract review and incident handling. AI systems can now analyse massive amounts of vendor documentation and automatically raise red flags on potential issues. Natural language processing (NLP) is becoming more common for automated contract intelligence, assisting in the detection of risky clauses, liability gaps, and data protection obligations. In addition, automation is increasing scalability for large organisations with hundreds or thousands of third-party relationships, reducing human error and compliance fatigue. However, all of this must be implemented with a strong focus on security, transparency, and ethical AI use to ensure that sensitive vendor and organisational data remains protected throughout the process.
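As a deliberately simplified illustration of what automated contract review does, the sketch below flags clauses matching risky patterns using plain regular expressions. Production systems rely on trained NLP models; the patterns and clause text here are invented for the example:

```python
import re

# Illustrative risk patterns only; real contract-intelligence tools use
# trained models rather than hand-written regular expressions.
RISK_PATTERNS = {
    "unlimited_liability": re.compile(r"\bunlimited liability\b", re.I),
    "no_breach_notice": re.compile(r"\bno obligation to notify\b", re.I),
    "offshore_data": re.compile(r"\bdata may be (stored|processed) outside\b", re.I),
}

def flag_clauses(clauses):
    """Return (clause_index, risk_label) pairs for clauses matching a pattern."""
    findings = []
    for i, clause in enumerate(clauses):
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(clause):
                findings.append((i, label))
    return findings

contract = [
    "The supplier accepts unlimited liability for service failures.",
    "Customer data may be stored outside the contracting jurisdiction.",
    "Invoices are payable within 30 days.",
]
print(flag_clauses(contract))
# → [(0, 'unlimited_liability'), (1, 'offshore_data')]
```

The same shape of output (clause location plus a risk label) is what lets a reviewer jump straight to the handful of clauses that need human judgment.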
Risk Quantification and Business Impact Mapping
Risk scoring in isolation is no longer adequate. One of the major trends for 2025 is the merging of third-party risk with business impact analysis (BIA). Organisations are using tools that map vendors to particular business processes and assets, giving a clearer picture of how a vendor compromise would affect operations, customer information or financial position. This movement has driven increased use of risk quantification models such as FAIR (Factor Analysis of Information Risk), which puts monetary values on vendor-related risks. By speaking the language of business value, CISOs and risk officers can prioritise risks and allocate resources more effectively.
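The core FAIR idea, expressing vendor risk as expected annual loss, can be shown with back-of-the-envelope arithmetic. The vendors, event frequencies, and loss magnitudes below are illustrative assumptions, not outputs of a full FAIR analysis (which models each factor as a distribution rather than a point estimate):

```python
# Simplest FAIR-style estimate:
#   annualised loss exposure ≈ loss event frequency × loss magnitude
def annualised_loss_exposure(loss_event_frequency, loss_magnitude):
    """Expected yearly loss in currency units for one vendor relationship."""
    return loss_event_frequency * loss_magnitude

vendors = {
    # vendor: (estimated breach events per year, expected loss per event in $)
    "payroll-processor": (0.25, 400_000),  # infrequent but very costly
    "marketing-saas":    (0.50, 60_000),   # more likely, far cheaper
}

# Rank vendors by expected annual loss, highest exposure first.
ranked = sorted(
    ((name, annualised_loss_exposure(f, m)) for name, (f, m) in vendors.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)
# → [('payroll-processor', 100000.0), ('marketing-saas', 30000.0)]
```

Even this crude version makes the prioritisation argument concrete: the rarely-breached payroll processor still carries more than three times the expected annual loss of the noisier but cheaper marketing tool.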
Environmental, Social, and Governance (ESG) enters into TPRM
As ESG keeps climbing the corporate agenda, organisations are taking TPRM a step beyond cybersecurity and legal risks and expanding it to incorporate ESG-related factors. In 2025, organisations evaluate whether their suppliers have ethical labour practices, sustainable supply chains, DEI (Diversity, Equity, Inclusion) metrics and climate impact disclosures. This expansion is not only a reputational concern: a third party's non-compliance with ESG requirements can now invite regulatory or shareholder action. ESG risk scoring software and vendor ESG audits are becoming standard components of onboarding and performance evaluations.
Shared Assessments and Third-Party Exchanges
To cut the duplication of effort created when multiple clients send vendors the same security questionnaires, the trend is moving toward shared assessments. Systems such as the SIG (Standardised Information Gathering) Questionnaire and the Global Vendor Exchange allow vendors to upload responses once and share them with many clients. This change not only simplifies the due diligence process but also enhances data accuracy, standardisation and vendor experience. In 2025, organisations are relying more and more on industry-wide vendor assurance platforms to minimise duplication, decrease costs and build trust.
Incident Response and Resilience Partnerships
Another rising trend is bringing vendors into incident response planning. In 2025, proactive organisations treat major vendors not merely as suppliers but as resilience partners. This encompasses shared tabletop exercises, communication procedures and breach notification SLAs. With ransomware attacks and cloud reliance both increasing, organisations now call for vendor-side recovery plans and RTO/RPO (recovery time and recovery point objective) metrics. TPRM is transforming into a comprehensive resilience management function where readiness, not mere compliance, takes centre stage.
Conclusion
Third-Party Risk Management in 2025 is no longer about checklists and compliance audits; it's a dynamic, intelligence-driven and continuous process. With regulatory alignment, AI automation, real-time monitoring, ESG integration and resilience partnerships leading the way, organisations are transforming their TPRM programs to address contemporary threat landscapes. As digital ecosystems grow increasingly complex and interdependent, managing third-party risk is now essential. Early adopters who invest in tools, talent and governance will be more likely to create secure and resilient businesses for the AI era.
References
- https://finance.ec.europa.eu/publications/digital-operational-resilience-act-dora_en
- https://digital-strategy.ec.europa.eu/en/policies/nis2-directive
- https://www.meity.gov.in/data-protection-framework
- https://securityscorecard.com
- https://sharedassessments.org/sig/
- https://www.fairinstitute.org/fair-model

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals rapidly amplifies the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to factors like large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentional propaganda to financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread as the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary drivers of widespread misinformation on a range of topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected 10 months of data, totalling 2,397,388 tweets flagged as containing low-credibility information, posted on Twitter (now X) by 448,103 users, and reviewed who was sending them. They found that approximately a third of the low-credibility tweets had been posted by just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] In other words, it does not take many influencers to sway the beliefs and opinions of large numbers of people; this is the impact of what the researchers describe as superspreaders.
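The kind of concentration analysis behind these findings is easy to reproduce in miniature. The per-account counts below are synthetic, not the study's data; the point is only to show the computation of "what share of flagged posts came from the top-k accounts":

```python
def top_k_share(counts, k):
    """Fraction of all flagged posts attributable to the k most active accounts."""
    ranked = sorted(counts, reverse=True)  # most active accounts first
    return sum(ranked[:k]) / sum(counts)

# Synthetic example: flagged-post counts for ten accounts.
flagged_posts_per_account = [500, 300, 200, 50, 20, 10, 10, 5, 3, 2]
print(round(top_k_share(flagged_posts_per_account, 3), 2))
# → 0.91
```

In this toy dataset, just 3 of the 10 accounts produce about 91% of the flagged posts, the same heavy-tailed pattern the Indiana University study reports at far larger scale.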
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of election-related misinformation, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation by experts surveyed for the World Economic Forum's 2024 Global Risks Report. In today's digital age, misinformation, deepfakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation about the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "Disinformation Dozen," a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that does not come from the original source but is propagated by amplifiers using other sources, websites, or YouTube videos. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not create or deliberately popularise the misinformation, but their broad reach exposes far more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles offering unsanctioned investment advice and directing followers to particular channels, leading investors into risky financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to lend credibility to their schemes and shape people's financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can exploit current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation within restricted online groups. There are also reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it might attempt to provoke strong reactions or to mould public opinion. Netizens must question the credibility of information, verify its sources, and develop the cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community-guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and working on detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, report suspect content, and implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
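One minimal version of the repetitive-posting detection recommended above is a flagged-post ratio per account: flag an account when a large share of its recent posts has been independently marked low-credibility. The thresholds and account histories below are assumptions for the sketch, not any platform's actual policy:

```python
def flag_repeat_offenders(posts_by_account, min_posts=5, max_flagged_ratio=0.5):
    """Return accounts whose share of flagged posts exceeds the threshold."""
    flagged_accounts = []
    for account, posts in posts_by_account.items():
        if len(posts) < min_posts:
            continue  # too little history to judge fairly
        ratio = sum(1 for p in posts if p["flagged"]) / len(posts)
        if ratio > max_flagged_ratio:
            flagged_accounts.append(account)
    return flagged_accounts

# Synthetic posting histories: each post carries a fact-check verdict.
history = {
    "newsbot99": [{"flagged": True}] * 8 + [{"flagged": False}] * 2,  # 80% flagged
    "dailyuser": [{"flagged": False}] * 9 + [{"flagged": True}],      # 10% flagged
}
print(flag_repeat_offenders(history))
# → ['newsbot99']
```

A real pipeline would add review queues and appeals rather than blocking on the ratio alone; the minimum-history guard is there so that occasional sharers are not punished for a single mistake.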
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html

India’s online gaming industry has grown at lightning speed, drawing millions of users across age groups. From casual games and e-sports to fantasy leagues and online poker, digital entertainment has become both a social and economic phenomenon. But with this growth came rising concerns of addiction, financial loss, misleading ads, and even criminal misuse of gaming platforms for illegal betting. To address these concerns, the Government of India introduced the Promotion and Regulation of Online Gaming Act and draft Rules in October 2025. While the Act represents a crucial step toward accountability and user protection, it also raises difficult questions about freedom, innovation, and investor confidence.
The Current Legal Framework
The 2025 Act, along with corresponding changes in the Information Technology and GST laws, aims to create a safer and more transparent gaming environment.
1. Ban on real-money games:
Any online game where money is involved, whether through entry fees, bets, or prizes, is now banned, regardless of whether it is based on skill or chance. As a result, previously permitted formats such as fantasy sports, rummy, and poker, once defended as “games of skill”, now fall within the category of banned activities.
2. Promotion of e-sports and social gaming:
Not all gaming is banned. Casual games, e-sports, and social games that don’t involve money are fully allowed. The government is encouraging these as part of India’s growing digital economy.
3. Advertising and financial restrictions: Banks, payment gateways, and advertisers cannot facilitate or promote real-money games. Any platform offering deposits or prize pools can be blocked.
4. Central regulatory authority: The law establishes a national body to classify games, monitor compliance, and address complaints. It has the power to order the blocking of violative content and websites.
Why Regulation Was Needed
The push for regulation came after a surge in online betting scams, debt-related suicides, and disputes about whether certain apps were skill-based or chance-based. State governments had taken conflicting positions, some banning, others licensing such games. Meanwhile, offshore gaming apps operated freely in India’s grey market.
The 2025 Act thus attempts to impose uniformity, protect minors, and bring moral and fiscal discipline to a rapidly expanding digital frontier. Its underlying philosophy resembles that of the Digital Personal Data Protection Act, encouraging responsible use of technology rather than an unregulated free-for-all.
Key Challenges and Gaps
(a) Clarity of Definitions
The Act bans all real-money games, ignoring the difference between skill-based and chance-based games. This could invite legal challenges under Article 19(1)(g), which protects the right to carry on a business. Games like rummy or fantasy cricket, which involve real skill, arguably should not be banned outright.
(b) Weak Consumer and Child Protection
Although age verification and KYC are mandated, compliance at the user-end remains uncertain. India needs a Responsible Gaming Code covering:
- Spending limits and cooling-off periods;
- Self-exclusion options;
- Transparent disclosure of odds; and
- Algorithmic fairness audits.
These measures can help mitigate addiction and prevent exploitation of minors.
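Two of the controls above, spending limits and cooling-off periods, reduce to a simple eligibility check before each play session. The cap, the cooling-off window, and the player data below are illustrative assumptions, not figures prescribed by the Act or the draft Rules:

```python
from datetime import datetime, timedelta

# Hypothetical responsible-gaming parameters for the sketch.
MONTHLY_SPEND_CAP = 10_000          # currency units per calendar month
COOLING_OFF = timedelta(hours=24)   # enforced break once the cap is hit

def can_play(spent_this_month, cap_hit_at, now):
    """Allow play only under the cap and outside any active cooling-off window."""
    if cap_hit_at is not None and now - cap_hit_at < COOLING_OFF:
        return False  # still inside the mandatory break
    return spent_this_month < MONTHLY_SPEND_CAP

now = datetime(2025, 11, 1, 12, 0)
print(can_play(9_500, None, now))                       # under the cap → True
print(can_play(10_000, now - timedelta(hours=2), now))  # cap hit 2h ago → False
```

Self-exclusion fits the same pattern (a longer, player-initiated window), which is why these controls are cheap for platforms to implement once the spend ledger exists.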
(c) Federal Conflicts
“Betting and gambling” fall within the State List under India’s Constitution, yet the 2025 Act seeks national uniformity. States like Tamil Nadu and Karnataka already have independent bans. Without harmonisation, legal disputes between state and central authorities could multiply. A cooperative federal framework allowing states to adopt central norms voluntarily could offer flexibility without fragmentation.
(d) Regulatory Transparency
The gaming regulator has a lot of power, like deciding which games are allowed and blocking websites. But it’s not clear who chooses its members or how people can challenge its decisions. Including court oversight, public input, and regular reporting would make the regulator fairer and more reliable.
What’s Next for India’s Online Gaming
India’s online gaming scene is at a turning point. Banning all money-based games might reduce risks, but it also slows innovation and limits opportunities. A better approach could be to license skill-based or low-risk games with proper KYC and audits, set up a Responsible Gaming Charter with input from government, industry, and civil society, and create rules for offshore platforms targeting Indian players. Player data should be protected under the Digital Personal Data Protection Act, 2023, and the law should be reviewed every few years to keep up with new tech like the metaverse, NFTs, and AI-powered games.
Conclusion
CyberPeace provided its detailed feedback to MeitY on 30th October 2025 and hopes the finalised rules are released soon with an acknowledgment of the challenges discussed. The Promotion and Regulation of Online Gaming Act, 2025, marks an important turning point: it is India's first serious attempt to bring order to a chaotic digital arena. The goal is to keep players safe, stop crime, and hold platforms accountable. But the tricky part is moving beyond blanket bans: we need rules that let new ideas grow, respect people's rights, and still protect players. With a few smart changes and fair enforcement, India could have a gaming industry that is safe, responsible, and ready to compete globally.
References
- https://ssrana.in/articles/indias-online-gaming-bill-2025-regulation-prohibition-and-the-future-of-digital-play/
- https://www.google.com/amp/s/m.economictimes.com/news/economy/policy/new-online-gaming-law-takes-effect-money-games-banned-from-today/amp_articleshow/124255401.cms
- https://www.google.com/amp/s/timesofindia.indiatimes.com/technology/tech-news/government-proposes-to-make-violation-of-online-money-game-rules-non-bailable-draft-rules-ban-/amp_articleshow/124277740.cms
- https://www.egf.org.in/
- https://www.pib.gov.in/PressNoteDetails.aspx?NoteId=155075&ModuleId=3