#FactCheck: False Claim About Indian Flag Hoisted in Balochistan amid the success of Operation Sindoor
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical situation in South Asia. Our research reveals that the video is misrepresented: it actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. This search led us to an earlier social media post that clearly shows the event taking place in Surat, Gujarat, not Balochistan.
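As background on how this kind of reverse-image tracing works at a technical level: image-matching tools commonly rely on perceptual hashing, where visually similar frames map to similar bit patterns, so a recompressed or re-uploaded copy of the same footage can still be matched to its source. The sketch below is a minimal, self-contained illustration of an "average hash" in pure Python; the 8x8 frames and all pixel values are invented for demonstration and are not taken from the actual video.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale values.

    Each bit is 1 if the pixel is brighter than the grid's mean, else 0.
    Near-identical frames produce near-identical hashes, so a small Hamming
    distance between two hashes suggests the images share a source.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two 64-bit hashes."""
    return bin(h1 ^ h2).count("1")

# Two hypothetical 8x8 grayscale frames: the second is the first with mild
# noise added, as you might get from a recompressed copy of the same clip.
frame_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
frame_b = [[min(255, v + 3) for v in row] for row in frame_a]

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # a small distance -> likely the same underlying image
```

Production reverse-image search engines use far more robust techniques (feature descriptors, learned embeddings), but the core idea of comparing compact fingerprints rather than raw pixels is the same.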

In the original clip, a music band is performing in the middle of a crowd, with people holding Indian flags and enjoying the event. The environment, the language on signboards, and the festive atmosphere all confirm that this is an Indian Independence Day celebration. A second photo we found, taken from a different angle, further supports this conclusion.

However, some individuals intent on spreading false information shared this video out of context with a fabricated narrative, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan, turning a local celebration into a political stunt. This is a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from Surat, Gujarat, India, during “The Tiranga Yatra”. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
Twitter is a popular social media platform with millions of users all around the world. Twitter’s blue tick system, which verifies the identity of high-profile accounts, has come under intense scrutiny in recent years. The platform has faced backlash from users and brands who have accused it of bias, inaccuracy, and inconsistency in its verification process. This blog post will explore the questions raised about the verification process and its impact on users and big brands.
What is Twitter’s Blue Tick System?
The blue tick system was introduced in 2009 to help users identify the authenticity of well-known public figures: politicians, celebrities, sportspeople, and big brands. Under this system, Twitter verifies the identity of high-profile accounts and displays a blue badge next to their usernames.
According to one survey, there are roughly 294,000 verified Twitter accounts carrying the blue tick badge. Many of these users have also paid for the subscription service, at nearly $7.99 a month. Subscribers who paid that amount and still lost their blue badge could reasonably feel cheated.
The Controversy
Despite its initial aim, the blue tick system has received much criticism from consumers and brands. Twitter’s irregular and non-transparent verification procedure has sparked accusations of prejudice and inaccuracy. Many Twitter users have complained that the network’s verification process is random and favours accounts with huge followings or celebrity status. In contrast, others have criticised the platform for certifying accounts that promote harmful or controversial content.
Furthermore, the verification mechanism has generated user confusion, as many do not understand the significance of the blue tick badge. Some users have concluded that the blue tick symbol represents a Twitter endorsement or that the account is trustworthy. This confusion has resulted in users following and engaging with verified accounts that promote misleading or inaccurate information, undermining the platform’s credibility.
How did the Blue Tick Row start in India?
The row began on 21 May 2021, when the government asked Twitter to remove the blue badge from the profiles of several high-profile Indian politicians, including Indian National Congress Vice-President Mr Rahul Gandhi.
The blue badge gives users an authenticated identity. Many celebrities, including Amitabh Bachchan, popularly known as Big B, Vir Das, Prakash Raj, Virat Kohli, and Rohit Sharma, have lost their blue ticks despite being verified handles.
What is Twitter’s policy on the blue tick?
According to Twitter’s policy, blue verification badges may be removed from accounts if the account holder violates the company’s verification policy or terms of service. In such circumstances, Twitter typically notifies the account holder of the removal of the verification badge and the reason for the removal. In the instance of the “Twitter blue badge row” in India, however, it appears that Twitter did not notify the affected politicians or their representatives before revoking their verification badges. Twitter’s lack of communication has exacerbated the controversy around the episode, with some critics accusing the company of acting arbitrarily and not following due process.
Is there a solution?
The “Twitter blue badge row” has no simple answer since it involves a complex convergence of concerns about free expression, social media policies, and government laws. However, here are some alternatives:
- Establish clear guidelines: Twitter should develop and constantly implement clear guidelines and policies for the verification process. All users, including politicians and government officials, would benefit from greater transparency and clarity.
- Increase transparency: Twitter’s decision-making process for deleting or restoring verification badges should be more open. This could include providing explicit reasons for badge removal, notifying impacted users promptly, and offering an appeals mechanism for those who believe their credentials were removed unfairly.
- Engage in constructive dialogue: Twitter should engage in constructive dialogue with government authorities and other stakeholders to address concerns about the platform’s content moderation procedures. This could contribute to a more collaborative approach to managing online content, leading to more effective and accepted policies.
- Follow local rules and regulations: Twitter should collaborate with the Indian government to ensure it conforms to local laws and regulations while maintaining freedom of expression. This could involve adopting more precise standards for handling requests for material removal or other actions from governments and other organisations.
Conclusion
To sum up, the “Twitter blue tick row” in India has highlighted the complex challenges that social media platforms face daily in handling the conflicting interests of free expression, government rules, and their own content moderation procedures. While Twitter’s decision to withdraw the blue verification badges of several prominent Indian politicians drew anger from the government and some members of the public, it also raised questions about the transparency and uniformity of Twitter’s verification procedure. To deal with this issue, Twitter must establish clear verification procedures and norms, promote transparency in its decision-making process, participate in constructive communication with stakeholders, and adhere to local laws and regulations. Furthermore, the Indian government should collaborate with social media platforms to create more effective and acceptable laws that balance the necessity for free expression against the protection of citizens’ rights. The “Twitter blue tick row” is just one example of the complex challenges that social media platforms face in managing online content, and it emphasises the need for greater collaboration among platforms, governments, and civil society organisations to develop effective solutions that protect both free expression and citizens’ rights.

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts have resurfaced, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough action against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
Monetisation Concerns
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company

Introduction
The 2023-24 annual report of the Union Home Ministry states that WhatsApp is among the primary platforms being targeted for cyber fraud in India, followed by Telegram and Instagram. Cybercriminals have been conducting frauds like lending and investment scams, digital arrests, romance scams, job scams, online phishing etc., through these platforms, creating trauma for victims and overburdening law enforcement, which is not always the best equipped to recover their money. WhatsApp’s scale, end-to-end encryption, and ease of mass messaging make it both a powerful medium of communication and a vulnerable target for bad actors. It has over 500 million users in India, which makes it a primary subject for scammers running illegal lending apps, phishing schemes, and identity fraud.
Action Taken by WhatsApp
In response to this worrying trend, and in keeping with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [updated as of 6.4.2023], WhatsApp has been banning millions of Indian accounts through automated tools, AI-based detection systems, and behaviour analysis that can detect suspicious activity and misuse. In July 2021, it banned over 2 million accounts. By February 2025, this number had shot up to over 9.7 million, with 1.4 million accounts removed proactively, that is, before any user reported them. This may mean that the number of attacks has increased, that WhatsApp’s detection systems have improved, or both; what it surely signals is the acknowledgement of a deeper, systemic challenge to India’s digital ecosystem and the growing scale and sophistication of cyber fraud, especially on encrypted platforms.
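WhatsApp's actual detection systems are proprietary, so the specifics are not public. Purely as an illustration of what behaviour-based flagging of the kind described above generally looks like, the sketch below scores accounts on simple rate and reporting signals. Every threshold, field name, and account in it is an invented assumption for demonstration, not WhatsApp's real logic.

```python
from collections import namedtuple

# Hypothetical per-account behaviour signals (all fields are assumptions).
Account = namedtuple(
    "Account",
    ["account_id", "messages_per_hour", "unique_recipients",
     "account_age_days", "reports"],
)

def suspicion_score(acct):
    """Toy behaviour-analysis score: higher means more bot-like.

    The thresholds are invented for illustration only.
    """
    score = 0
    if acct.messages_per_hour > 100:   # mass messaging
        score += 2
    if acct.unique_recipients > 50:    # broadcast-style spraying
        score += 2
    if acct.account_age_days < 7:      # freshly created throwaway account
        score += 1
    score += min(acct.reports, 3)      # user reports, capped at 3
    return score

def flag_accounts(accounts, threshold=4):
    """Return IDs of accounts whose score meets the flagging threshold."""
    return [a.account_id for a in accounts if suspicion_score(a) >= threshold]

accounts = [
    Account("user_a", 5, 8, 400, 0),    # normal long-standing account
    Account("scam_b", 300, 900, 2, 6),  # fresh account blasting messages
]
print(flag_accounts(accounts))  # ['scam_b']
```

Note that all of these signals are metadata about sending behaviour, not message contents, which is how such systems can operate without breaking end-to-end encryption.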
CyberPeace Insights
- Under Rule 4(1)(d) of the IT Rules, 2021, significant social media intermediaries (SSMIs) are required to implement automated tools to detect harmful content, but enforcement has been uneven. WhatsApp’s action demonstrates, through the scale and transparency of its enforcement, what effective compliance with proactive moderation can look like.
- Platforms must treat fraud not just as a content violation but as a systemic abuse of the platform’s infrastructure.
- India is not alone in facing this challenge. The EU’s Digital Services Act (DSA), for instance, mandates large platforms to conduct regular risk assessments, maintain algorithmic transparency, and allow independent audits of their safety mechanisms. These steps go beyond just removing bad content by addressing the design of the platform itself. India can draw from this by codifying a baseline standard for fraud detection, requiring platforms to publish detailed transparency reports, and clarifying the legal expectations around proactive monitoring. Importantly, regulators must ensure this is done without compromising encryption or user privacy.
- WhatsApp’s efforts are part of a broader, emerging ecosystem of threat detection. The Indian Cyber Crime Coordination Centre (I4C) is now sharing threat intelligence with platforms like Google and Meta to help take down scam domains, malicious apps, and sponsored Facebook ads promoting illegal digital lending. This model of public-private intelligence collaboration should be institutionalized and scaled across sectors.
Conclusion: Turning Enforcement into Policy
WhatsApp’s mass account ban is not just about enforcement but an example of how platforms must evolve. As India becomes increasingly digital, it needs a forward-looking policy framework that supports proactive monitoring, ethical AI use, cross-platform coordination, and user safety. The digital safety of users in India and those around the world must be built into the architecture of the internet.
References
- https://scontent.xx.fbcdn.net/v/t39.8562-6/486805827_1197340372070566_282096906288453586_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=BRGwyxF87MgQ7kNvwHyyW8u&_nc_oc=AdnNG2wXIN5F-Pefw_FTt2T4K6POllUyKpO7nxwzCWxNgQEkVLllHmh81AHT2742dH8&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=iaQzNQ8nBZzxuIS4rXLOkQ&oh=00_AfEnbac47YDXvymJ5vTVB-gXteibjpbTjY5uhP_sMN9ouw&oe=67F95BF0
- https://scontent.xx.fbcdn.net/v/t39.8562-6/217535270_342765227288666_5007519467044742276_n.pdf?_nc_cat=110&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=aj6og9xy5WQQ7kNvwG9Vzkd&_nc_oc=AdnDtVbrQuo4lm3isKg5O4cw5PHkp1MoMGATVpuAdOUUz-xyJQgWztGV1PBovGACQ9c&_nc_zt=14&_nc_ht=scontent.xx&_nc_gid=gabMfhEICh_gJFiN7vwzcA&oh=00_AfE7lXd9JJlEZCpD4pxW4OOc03BYcp1e3KqHKN9-kaPGMQ&oe=67FD6FD3
- https://www.hindustantimes.com/india-news/whatsapp-is-most-used-platform-for-cyber-crimes-home-ministry-report-101735719475701.html
- https://www.indiatoday.in/technology/news/story/whatsapp-bans-over-97-lakhs-indian-accounts-to-protect-users-from-scam-2702781-2025-04-02