#FactCheck - Viral image claiming to show injury marks on MP Kangana Ranaut after an alleged slap is fake & misleading
Executive Summary:
A viral image on social media depicts alleged injuries on the face of MP (Member of Parliament, Lok Sabha) Kangana Ranaut, who was claimed to have been slapped by a CISF officer at Chandigarh airport. A reverse image search traced the photo back to 2006: it was part of an anti-mosquito commercial and does not feature MP Kangana Ranaut. These findings contradict the claim that the photo is evidence of injuries resulting from the incident involving the MP. It is always important to verify the authenticity of visual content before sharing it, to prevent misinformation.

Claims:
Images circulating on social media platforms claim that the injuries on MP Kangana Ranaut’s face were caused by an assault by a female CISF officer at Chandigarh airport. The claim suggests that the photos are evidence of the physical altercation and the resulting injuries suffered by the MP.



Fact Check:
When we received the posts, we reverse-searched the image and found another photo closely resembling the viral one. The earring visible in both images allowed us to match the viral image against the new one.

The reverse image search revealed that the photo was originally uploaded in 2006 and is unrelated to MP Kangana Ranaut: it depicts a model in an advertisement for an anti-mosquito spray campaign. A comparison of the earrings in the two photos confirms this.

Hence, we can confirm that the viral image of an injury mark attributed to MP Kangana Ranaut is fake and misleading: it was cropped from the original photo to misrepresent the context.
Conclusion:
The viral photos on social media claimed to show injuries on MP Kangana Ranaut’s face after she was allegedly assaulted by a CISF officer at Chandigarh airport. Detailed analysis established that the pictures have no connection with Ranaut; the original is a 2006 anti-mosquito spray advertisement. The allegation that these images show Ranaut’s injuries is therefore fake and misleading.
- Claim: Photos circulating on social media claim to show injuries on MP Kangana Ranaut's face following an assault by a female CISF officer at Chandigarh airport.
- Claimed on: X (formerly known as Twitter), Threads, Facebook
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Bumble’s launch of its ‘Opening Move’ feature has sparked a new conversation about safety and privacy in digital dating and has drawn mixed reactions from users. It was launched against the backdrop of women stating that Bumble’s ‘message first’ policy was proving tedious. Addressing this large-scale feedback, Bumble introduced ‘Opening Move’, whereby users can craft, or select from, pre-set questions that potential matches may answer to open the conversation. These questions serve as a segue into meaningful, insightful dialogue from the get-go and sidestep the traditional effort of starting engaging chats between matched users. The feature is optional: users may enable it, and it does not prevent a user from exercising the autonomy previously in place.
Innovative Approach to Conversation Starters
Many users consider the feature innovative: it not only acts as a catalyst for fluid conversation but also cultivates insightful dialogue, fostering meaningful interactions free of the constraints of superficial small talk. The ‘Opening Move’ feature may also align with scientific research indicating that individuals form their initial attractions within roughly 3 seconds of interaction, supporting an individual’s decision-making within that attraction time frame.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a unique solution to the localisation challenges faced by apps; the option of writing a personalised ‘Opening Move’ allows prompts that are culturally relevant and appropriate to a specific area. Moreover, Bumble may enhance user experience on the platform through data analysis. Responses to an ‘Opening Move’ may provide valuable insights into user preferences and patterns, such as which pre-set prompts garner more responses than others, and how often a user-written ‘Opening Move’ succeeds in obtaining a response compared with Bumble’s pre-set prompts. A quick glance at Bumble’s privacy policy[1] shows that chats between users are not shared with third parties, further safeguarding personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of such review and removal has not been specified, which may raise challenges depending on whether the reviewer is a human or an algorithm.
However, some users perceive the feature as counterproductive to the company’s principle of ‘women make the first move’. While Bumble markets the feature as a neutral ground for matched users based on the exercise of choice, users see it as a step back into the heteronormative gender expectations that most dating apps conform to, putting the onus of the ‘first move’ on men. Many male users have complained that the feature pushes men to opt out of the dating app and that they would likely refrain from interacting with profiles that have enabled ‘Opening Move’, since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol ‘too much effort’, the pre-set questions of the ‘Opening Move’ feature may actively invite users to categorise potential matches according to arbitrary questions that undermine the real-life experiences, perspectives and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a notorious user sets a question that indirectly gleans personal or sensitive, identifiable information. The individual responding may be bullied or be subjected to hateful slurs when they respond to such carefully crafted conversation prompts.
Safety and Privacy Concerns
As a corollary, the appearance of choice may translate into more challenges for women on the platform. The feature may spark an increase in the number of unsolicited, undesirable messages and images from a potential match. The most vulnerable groups at present remain individuals who identify as female and other sexual minorities.[4] There appears to be no mechanism in place to proactively monitor the content of responses; the platform relies instead on user reporting. This approach may prove impractical given the potential volume of objectionable messages, necessitating a more efficient solution. It should be noted that even when a user reports, the current redressal systems of online platforms remain lax, largely inadequate and ineffective in addressing user concerns or grievances. This lack of proactiveness is violative of the right to redressal provided under the Digital Personal Data Protection Act, 2023. The feature may in fact take away the user autonomy Bumble originally aimed to grant, since individuals who identify as introverted, shy, soft-spoken or non-assertive may refrain from reporting harassing messages altogether, potentially due to discomfort or reluctance to engage in confrontation. Resultantly, a sharp uptick is anticipated in cases of cyberbullying, harassment and hate speech (especially vulgar communications) towards both the user and the potential match.
From an Indian legal perspective, dating apps must adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6] and the Digital Personal Data Protection Act, 2023, which regulate a person’s digital privacy and set standards for the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to mandating intimation of the user’s digital rights. The lack of automated checks, as mentioned above, is likely to make Bumble non-compliant with these guidelines.
The optional nature of ‘Opening Move’ grants users some autonomy, but technical updates could further improve the feature. Technologies like AI are effective aids in behavioural and predictive analysis. An upgraded matching algorithm could analyse the number of un-matches a profile receives, identifying and flagging profiles with multiple lapsed matches. A filter option in the application interface to screen out flagged profiles would let users navigate their matches cautiously. Another way to weed out notorious profiles is a peer-review system in which a user flags a profile through a single check-box stating whether the profile is likely to bully or harass, with no option for free-text comments. This would ensure that a binary, precise response is recorded and any coloured remarks are avoided. [7]
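The flagging ideas above can be sketched in code. This is a minimal illustration under stated assumptions, not Bumble’s implementation: the profile IDs, event format, thresholds and function names are all hypothetical, chosen for the example.

```python
from collections import Counter

# Hypothetical threshold: how many un-matches before a profile is flagged.
UNMATCH_THRESHOLD = 5

def flag_profiles(unmatch_events, threshold=UNMATCH_THRESHOLD):
    """unmatch_events: iterable of profile IDs, one entry per un-match
    that profile received. Returns the set of profiles to flag."""
    counts = Counter(unmatch_events)
    return {profile for profile, n in counts.items() if n >= threshold}

def peer_flagged(binary_reports, min_reports=3):
    """binary_reports: list of True/False 'likely to bully/harass' votes
    from the single check-box; no free-text comments are collected, so
    only the binary signal is aggregated."""
    return sum(binary_reports) >= min_reports

events = ["u1", "u2", "u1", "u1", "u1", "u1", "u3"]
print(flag_profiles(events))                     # {'u1'}
print(peer_flagged([True, True, True, False]))   # True
```

Keeping the peer-review signal binary, as the text proposes, makes aggregation trivial and avoids storing subjective free-text remarks about other users.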
Governance and Monitoring Mechanisms
From a governance point of view, a monitoring mechanism for how questions are crafted is critical. Systems should detect specific words, sentences and framings so as to disallow questions contrary to the national legal framework. An onscreen notification with instructions on generally acceptable conversational conduct, reminding users to maintain cyber hygiene while conversing, is also proposed as a mandated requirement for platforms. The notice may also include guidelines on what information is safe to share, in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to ‘Opening Moves’, bringing the feature into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
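A simple version of the prompt-screening step described above might look like the following sketch. The denylist terms, normalisation and function name are illustrative assumptions; a production system would need far more sophisticated detection than literal word matching.

```python
import re

# Hypothetical denylist of terms that could glean sensitive,
# identifiable information through a crafted 'Opening Move' prompt.
DENYLIST = {"address", "salary", "password", "aadhaar"}

def screen_prompt(prompt):
    """Return (allowed, matched_terms) for a candidate prompt.
    Lowercases the prompt and checks each word against the denylist."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    hits = words & DENYLIST
    return (not hits, hits)

ok, hits = screen_prompt("What's your home address and salary?")
print(ok, sorted(hits))   # rejected, with the matched terms listed
```

In practice such a filter would sit alongside, not replace, human review and user reporting, since keyword lists are easy to evade.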
Conclusion
Bumble's 'Opening Move' feature marks a statement step by the company to address user concerns about initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it also raises concerns about both ethics and user safety. The feature can potentially enhance user experience, but its success largely depends on Bumble's ability to navigate the complex issues associated with it. A more robust monitoring mechanism that uses newer technology is critical to address user concerns and ensure compliance with national data privacy laws.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). ‘Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics) Rules 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, a fundamental challenge brought about by this increased reliance on data is how to strike a balance between privacy protection and innovation and utility.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. Governments are urged by this initiative to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from going beyond moral and security bounds, particularly in areas like political manipulation, mass surveillance, cyberattacks, and dangers to democratic institutions.
One way to address the threat to privacy is pseudonymization, which replaces personal identifiers with artificial ones while keeping the data valuable for research and innovation. Pseudonymization thus directly advances the AI Red Lines initiative's mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe, with the objective of identifying red lines in the use of AI tools. What unites these risks is the absence of global safeguards. Some of these red lines can be understood as follows:
- Cybersecurity breaches in the form of exposure of financial and personal data due to AI-driven hacking and surveillance.
- Occurrence of privacy invasions due to endless tracking.
- Generative AI can create realistic fake content, undermining trust in public discourse and spreading misinformation.
- Algorithmic amplification of polarising content can threaten civic stability, leading to democratic disruption.
Legal Frameworks and Regulatory Landscape
The regulation of artificial intelligence remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, while a US-China agreement ensures that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI use, though a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) defines pseudonymisation in Article 4(5): a process in which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection Act (DPDP) 2023 does not explicitly define pseudonymisation, but its broad definition of “personal data” can cover potentially reversible identifiers, and under Section 8(4) of the Act companies must adopt appropriate technical and organisational measures. International instruments such as the OECD AI Principles and the Council of Europe’s Convention 108+ emphasise accountability, transparency and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
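The Article 4(5) process can be illustrated with a short sketch: identifiers are replaced with keyed pseudonyms, and the secret key, the “additional information”, is held separately from the dataset. The field names, key handling and token length here are assumptions made for illustration only, not a compliance-grade implementation.

```python
import hashlib
import hmac

# The 'additional information' of GDPR Art. 4(5): a secret key that must
# be stored securely and separately from the pseudonymised dataset.
SECRET_KEY = b"keep-me-in-a-separate-key-store"

def pseudonymise(identifier, key=SECRET_KEY):
    """Deterministic keyed pseudonym: the same input always yields the
    same token, so records for one person still link across datasets,
    but re-identification requires the separately held key."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Asha Rao", "diagnosis": "hypertension"}
safe_record = {"patient_id": pseudonymise(record["name"]),
               "diagnosis": record["diagnosis"]}
print(safe_record)  # identity masked, the analytical field retained
```

The determinism is what preserves data utility: analysts can still count, join and follow a pseudonymised patient over time, which full anonymisation would destroy.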
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is a practical solution that offers measurable benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers such as names or IDs, reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains patterns essential for analysis and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
These benefits translate into competitive advantages: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
The central dilemma is one of balance. On one side lies the necessity of data utility: companies, researchers and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise, enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions of safety, fairness and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet alongside any global treaty, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR, increasingly relevant under India’s DPDP Act, and balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and meeting broader ethical responsibilities in the digital age. As the future of AI is debated, the guiding principles need to be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we take a significant step towards a sustainable, ethical and innovation-driven digital ecosystem.

Introduction
In the digital entertainment world, OTT platforms have become highly popular and have attracted larger audiences. They offer a wide variety of entertaining content. However, there are certain concerns about depicting illicit or objectionable content on such platforms. The Indian Ministry of Information and Broadcasting (I&B) has been working on tackling issues like the availability of obscene content on online streaming platforms and other platforms. I&B Ministry has taken important steps to prevent the spread of such illicit or objectionable content.
The I&B Ministry has acted against obscene and vulgar content on OTT platforms: a total of 18 OTT platforms and several associated websites, apps and social media handles have been blocked nationwide. The government had been in consistent talks with these platforms and issued several advisories, which were not adhered to. The decision was made after consultation with other ministries, domain experts and industry bodies. The allegedly obscene content was found to depict nudity, sexual intercourse and inappropriate sexual acts in societal contexts. The government maintains that platforms are responsible for ensuring content is not presented in a vulgar fashion; creativity does not extend to promoting or propagating vulgar and sexual content.
Key Highlights of I&B Ministry Action against Obscene Content
On 14th March 2024, the Indian Ministry of Information and Broadcasting (I&B) announced the blocking of 18 OTT platforms, 19 websites, 10 apps and 57 social media handles for displaying obscene and vulgar content. Union Minister for Information & Broadcasting Shri Anurag Singh Thakur announced the removal of the 18 OTT platforms that published such content, underscoring the responsibility of platforms to prevent its spread. The decision was made under the Information Technology Act, 2000, in consultation with other Indian ministries and domain experts in media, entertainment, women's rights and child rights.
List of Blocked OTT Platforms
OTT platforms that have been blocked are Dreams Films, Voovi, Yessma, Uncut Adda, Tri Flicks, X Prime, Neon X VIP, Besharams, Hunters, Rabbit, Xtramood, Nuefliks, MoodX, Mojflix, Hot Shots VIP, Fugi, Chikooflix, Prime Play.
It was highlighted that these OTT platforms, despite not being widely popular, have a significant viewership. One app has over 1 crore downloads, while two others have more than 50 lakh downloads on Google Play Store. These platforms also market their content through social media, with a combined followership of over 32 lakh.
Nature of content
The ministry reported that a significant portion of the content on these platforms was obscene, vulgar and demeaning, depicting nudity and sexual acts in inappropriate contexts such as teacher-student relationships and incestuous family relationships. The content included sexual innuendos and prolonged pornographic scenes without thematic or societal relevance. It was found to be prima facie in violation of Sections 67 and 67A of the Information Technology Act, 2000, Section 292 of the Indian Penal Code and Section 4 of the Indecent Representation of Women (Prohibition) Act, 1986.
Way Forward
The press release by the ministry stated that “The Government of India remains committed to fostering the growth and development of the OTT industry. Several measures have been undertaken in this regard, including the introduction of the Inaugural OTT Award for Web Series at the 54th International Film Festival of India, collaboration with OTT platforms in the media and entertainment sector, and the establishment of a light touch regulatory framework with an emphasis on self-regulation under the IT Rules, 2021.”
This shows that the Indian government is dedicated to promoting the growth of the OTT industry but within certain checks or oversight mechanisms to prevent illicit or objectionable content on such platforms.
OTT Content and Regulatory Checks
Online content streamed on OTT platforms lacks the regulatory checks applied to films, which are reviewed and certified by a government-appointed board. The government has instructed streaming services to independently review content for obscenity and violence before making it available online. Criticism has repeatedly been raised about illicit or violative content depicted in some OTT shows, highlighting the issue of checks and balances. The government has urged self-regulation on platforms, but repeated instances of illicit content raise societal concerns. The Ministry of I&B is keen to promote ethical and moral standards for content hosted on OTT platforms.
Conclusion
The Ministry of I&B has announced the blocking of 18 OTT platforms engaged in depicting illicit content, showing its commitment to promoting ethical online content. While legislative measures are required to prevent the spread of such illicit or violative content, joint efforts by the government, industry players and civil society are critical to ensuring a secure and responsible digital environment for all users.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2014477
- https://www.thehindu.com/news/national/centre-bans-ott-platforms-websites-and-apps-over-obscene-and-vulgar-content/article67949819.ece
- https://economictimes.indiatimes.com/news/india/ib-ministry-blocks-18-ott-platforms-for-vulgar-content/articleshow/108485880.cms?from=mdr
- https://indianexpress.com/article/entertainment/information-and-broadcasting-ministry-blocks-18-ott-platforms-for-obscene-and-vulgar-content-9213749/
- https://www.storyboard18.com/ott-news/mib-blocks-18-ott-platforms-for-showing-obscene-and-vulgar-content-26400.htm