#FactCheck - False Claim of Hindu Sadhvi Marrying Muslim Man Debunked
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap appears in several other photos on Acharya's Instagram account, where the MLA expressed gratitude for the support he received from the Muslim community. Thus, the image claiming to show a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.

Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.


Fact Check:
Upon receiving the posts, we reverse-searched the image to find any credible sources. We found a photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.

That photo was digitally altered and reposted on social media to mislead viewers. We also found several other photos featuring the man in the skullcap on the same accounts.

We also checked the viral image for AI fabrication. The “content@scale” AI image detection tool rated the image as 95% AI-manipulated.

For further validation, we ran the image through a second detection tool, the “isitai” image detector, which rated it as 38.50% AI-generated content. Taken together with the reverse-search findings, these results indicate that the image is manipulated and does not support the claim made. Hence, the viral image is fake and misleading.
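Beyond reverse search and AI detectors, near-duplicate comparison with a perceptual hash is a common way to spot whether a circulating image diverges from a known original. The sketch below implements a minimal average hash in pure Python on toy pixel grids; it is an illustrative assumption-laden example (real fact-checking pipelines would load actual images with a library such as Pillow and use larger hashes):

```python
def average_hash(pixels):
    """Toy average hash: bit is 1 where a pixel is at or above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count of differing bits; 0 means near-identical, small values suggest local edits."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 grayscale "images": the second has one manipulated region.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
edited = [row[:] for row in original]
edited[0][0] = 255  # simulate a localised manipulation

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # a small, nonzero distance hints at a localised edit
```

A distance of zero means the images are effectively the same; a small nonzero distance, as here, is consistent with a localised alteration of an otherwise identical photo.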

Conclusion:
The absence of any credible source and the detection of AI manipulation show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Deepfake technology, a portmanteau of "deep learning" and "fake," uses advanced artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the spread of disinformation. Cybercriminals leverage AI tools such as deepfakes and voice clones for malicious activities and various cyber frauds, and new cyber threats have emerged from this misuse.
India Among the Top Destinations for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known UK-headquartered digital identity verification company, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India's fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most impacted by the use of deepfake technology. The report finds that deepfake technology features in a significant share of cybercrimes and that this trend is expected to continue in the coming year. This highlights the need for greater cybersecurity awareness and safeguards as identity fraud becomes an increasing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and have woven themselves into the fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By mimicking and learning from gestures, speech patterns, and facial expressions, these algorithms can extract valuable information from the data. By using sophisticated approaches, generative models create material that mixes seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry. Sophisticated detection techniques are becoming more and more necessary to separate real content from modified content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a discriminator and a generator, they participate in an ongoing cycle of competition. The discriminator assesses how authentic the generated information is, whereas the generator aims to create fake material, such as realistic voice patterns or facial expressions. The process of creating and evaluating continuously leads to a persistent improvement in Deepfake's effectiveness over time. The whole deepfake production process gets better over time as the discriminator adjusts to become more perceptive and the generator adapts to produce more and more convincing content.
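The generator-discriminator competition described above can be sketched end to end with a deliberately tiny model: a two-parameter generator and a logistic-regression discriminator on one-dimensional data. This is an illustrative toy with hand-derived gradients, not a practical deepfake GAN, which would use deep networks and a framework such as PyTorch:

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data are samples from N(4, 1); the generator maps noise z ~ N(0, 1) to a*z + b.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters (logistic regression)
lr = 0.05

for step in range(2000):
    real = random.gauss(4, 1)
    z = random.gauss(0, 1)
    fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * ((d_real - 1) * real + d_fake * fake)
    c -= lr * ((d_real - 1) + d_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_x = (d_fake - 1) * w   # gradient of the generator loss w.r.t. the fake sample
    a -= lr * grad_x * z
    b -= lr * grad_x

samples = [a * random.gauss(0, 1) + b for _ in range(1000)]
print(sum(samples) / len(samples))  # should drift toward the real mean of 4
```

Each round, the discriminator sharpens its ability to separate real from fake, and the generator adjusts to fool the improved discriminator, which is exactly the mutual improvement loop the text describes, scaled down to two scalar models.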
Effect on Community
The widespread use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote the ethical use of such tools, including strict laws and technological safeguards. Deepfakes can mimic prominent politicians' statements or videos, a serious issue with the potential to spread instability and make it difficult for the public to discern the true state of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. And as it becomes ever harder to tell fake content from authentic content, it becomes simpler for attackers to deceive people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Her deepfake takes a different approach from earlier examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance unchanged but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The deceptive video shows Priyanka promoting a product and discussing her annual income, highlighting the worrying evolution of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed before the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for the misuse of deepfake and AI technology. He also proposed that websites be required to label AI-generated information as such and be prevented from producing illegal content. A division bench highlighted the complexity of the problem and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools such as reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Important steps towards greater resilience against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
Conclusion
Deepfake technology, powered by advanced artificial intelligence, produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Its misuse presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India such as the recent deepfake video involving Priyanka Chopra. To counter this danger, it is important to develop critical thinking skills, use detection strategies such as analysing audio quality and facial expressions, and keep up with current trends. A comprehensive strategy that incorporates fact-checking, preventive tactics, and awareness-raising is necessary to guard against the negative effects of deepfake technology, alongside strong security policies, cutting-edge detection tools, support for ethical AI, and open communication and cooperation. Together, these measures can create a genuinely cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/

Apple has recently withdrawn the Advanced Data Protection feature for its customers in the UK. This was done in response to a request by the UK's Home Office, which demanded access to encrypted data stored in Apple's cloud service under powers granted by the Investigatory Powers Act (IPA). The Act compels firms to provide information to law enforcement. The move and its fallout have raised concerns, bringing out differing perspectives on the balance between privacy and security and on how governments and tech firms should interact.
What is Advanced Data Protection?
Advanced Data Protection is an opt-in feature; it is not enabled by default. It is Apple's strongest data protection tool, providing end-to-end encryption for the data the user chooses to protect. This goes beyond the standard (default) encryption that Apple applies to photos, backups, and notes, among other things. The flip side of such a strong security feature, from a user's perspective, is that if the Apple account holder loses access to the account, they lose their data as well, since there are no recovery paths.
Rather than doing away with the feature outright, Apple has halted new sign-ups for now and is working on removing existing users' access at a later, yet-to-be-confirmed date. UK users who had not enabled the feature will see no change, while those who now try to enable it are met with a notification on the Advanced Data Protection settings page stating that the feature can no longer be turned on. There is consequently no clarity on what happens to the data of UK users who had already enabled the feature, since even Apple has no access to it. It is important to note that withdrawing the feature does not by itself ensure compliance with the Investigatory Powers Act (IPA), which applies to tech firms worldwide that serve the UK market. Apple has previously refused similar requests to access data in the US.
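The "no recovery paths" property of end-to-end encryption follows directly from the key being derived from a secret only the user holds. The toy sketch below illustrates that idea with a throwaway XOR cipher keyed from a passphrase via SHA-256; it is purely pedagogical and bears no relation to Apple's actual cryptography, which uses vetted primitives and hardware-bound keys:

```python
import hashlib

def keystream(passphrase: str, length: int) -> bytes:
    """Derive a deterministic keystream from a passphrase (toy KDF, never use for real data)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{passphrase}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, passphrase: str) -> bytes:
    """Toy symmetric cipher: XOR the data with a passphrase-derived keystream."""
    ks = keystream(passphrase, len(data))
    return bytes(x ^ k for x, k in zip(data, ks))

backup = b"photos, notes, messages"
ciphertext = xor_cipher(backup, "user-only-secret")

# Only the original passphrase recovers the data; the provider (or anyone
# else) without it gets noise. This is also why a lost key means lost data.
assert xor_cipher(ciphertext, "user-only-secret") == backup
assert xor_cipher(ciphertext, "guessed-key") != backup
```

Because the service provider stores only the ciphertext and never the passphrase-derived key, it can neither comply with a disclosure demand nor help a locked-out user, which is the trade-off at the heart of the UK standoff.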
Apple’s Stand on Encryption and Government Requests
The tech giant has resisted court orders before, rejecting requests (made in 2016 and 2020) to write software that would allow officials to access and identify iPhones operated by gunmen. The UK Home Office's demand is reportedly motivated by the role of end-to-end encryption in concealing criminal activities such as child sexual abuse and terrorism, which hampers the efforts of security officials to catch perpetrators. Over the years, Apple has repeatedly emphasised its reluctance to create a backdoor to its encrypted data, pointing out that once such a pathway exists, the data becomes more vulnerable to attackers. The Salt Typhoon attack on the US telecommunications system is a recent example that has alerted officials, who now encourage the use of end-to-end encryption. Beyond this, such requests could set a dangerous precedent for how tech firms and governments operate together. The episode comes against the backdrop of the Paris AI Action Summit, where US Vice President J.D. Vance raised concerns regarding regulation. As per reports, Apple has now filed a legal complaint with the Investigatory Powers Tribunal, the UK's judicial body that handles complaints about the use of surveillance powers by public authorities.
The Broader Debate on Privacy vs. Security
This standoff raises critical questions about how tech firms and governments should collaborate without compromising fundamental rights. Striking the right balance between privacy and regulation is imperative, ensuring security concerns are addressed without dismantling individual data protection. The outcome of Apple’s legal challenge against the IPA may set a significant precedent for how encryption policies evolve in the future.
References
- https://www.bbc.com/news/articles/c20g288yldko
- https://www.bbc.com/news/articles/cgj54eq4vejo
- https://www.bbc.com/news/articles/cn524lx9445o
- https://www.yahoo.com/tech/apple-advanced-data-protection-why-184822119.html
- https://indianexpress.com/article/technology/tech-news-technology/apple-advanced-data-protection-removal-uk-9851486/
- https://www.techtarget.com/searchsecurity/news/366619638/Apple-pulls-Advanced-Data-Protection-in-UK-sparking-concerns
- https://www.computerweekly.com/news/366619614/Apple-withdraws-encrypted-iCloud-storage-from-UK-after-government-demands-back-door-access?_gl=1*1p1xpm0*_ga*NTE3NDk1NzQxLjE3MzEzMDA2NTc.*_ga_TQKE4GS5P9*MTc0MDc0MTA4Mi4zMS4xLjE3NDA3NDEwODMuMC4wLjA.
- https://www.theguardian.com/technology/2025/feb/21/apple-removes-advanced-data-protection-tool-uk-government
- https://proton.me/blog/protect-data-apple-adp-uk#:~:text=Proton-,Apple%20revoked%20advanced%20data%20protection%20
- https://www.theregister.com/2025/03/05/apple_reportedly_ipt_complaint/
- https://www.computerweekly.com/news/366616972/Government-agencies-urged-to-use-encrypted-messaging-after-Chinese-Salt-Typhoon-hack
![Securing Digital Banking: RBI Mandates Migration to [.]bank[.]in Domains](https://cdn.prod.website-files.com/64b94adadbfa4c824629b337/6818602cfbcc953fcae859a1_POLICY%20TEAM%20COVER%20PAGES%20-21%20(1).webp)
Introduction
The Reserve Bank of India (RBI) has mandated that banks migrate their digital banking domains to '.bank.in' by October 31, 2025, as part of a strategy to modernise the sector and maintain consumer confidence. The move is expected to provide a consistent and secure interface for online banking in response to the growing threats posed by cybercriminals who exploit vulnerabilities in online platforms. The RBI's directive is seen as a proactive measure to address mounting concerns over cybersecurity in the banking sector.
RBI Circular - Migration to '.bank.in' domain
The official circular released by the RBI dated April 22, 2025, read as follows:
“It has now been decided to operationalise the ‘.bank.in’ domain for banks through the Institute for Development and Research in Banking Technology (IDRBT), which has been authorised by National Internet Exchange of India (NIXI), under the aegis of the Ministry of Electronics and Information Technology (MeitY), to serve as the exclusive registrar for this domain. Banks may contact IDRBT at sahyog@idrbt.ac.in to initiate the registration process. IDRBT shall guide the banks on various aspects related to application process and migration to new domain.”
“All banks are advised to commence the migration of their existing domains to the ‘.bank.in’ domain and complete the process at the earliest and in any case, not later than October 31, 2025.”
CyberPeace Outlook
The Reserve Bank of India's directive mandating banks to shift to the '.bank.in' domain by October 31, 2025, represents a strategic and forward-looking measure to modernise the nation's digital banking infrastructure. With this initiative, the RBI is setting a new benchmark in cybersecurity by creating a trusted, exclusive domain that banks must adopt. The move should sharply reduce phishing attacks and fake banking websites, which have been major sources of financial fraud, and the fixed domain will make it easier for consumers and tech platforms to verify legitimate banking websites and apps. Since phishing and domain spoofing are two of the most prevalent forms of cybercrime, a shift to a strictly regulated domain name system removes the potential for lookalike URLs and fraudulent websites that mimic banks, which should drive a lasting drop in online financial fraud. As India's digital economy grows, the RBI's move is timely, essential, and future-ready.
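One practical benefit of a single reserved domain is that legitimacy becomes a mechanical string check rather than a judgment call. The hypothetical helper below sketches the kind of client-side validation a browser extension or app could perform once migration is complete (the function name and URLs are illustrative assumptions, not part of the RBI circular):

```python
from urllib.parse import urlparse

def is_official_bank_domain(url: str) -> bool:
    """Return True only if the URL's host is 'bank.in' itself or a subdomain of it."""
    host = urlparse(url).hostname or ""
    host = host.lower().rstrip(".")
    return host == "bank.in" or host.endswith(".bank.in")

print(is_official_bank_domain("https://sbi.bank.in/login"))       # True
print(is_official_bank_domain("https://sbi-bank.in/login"))       # False: lookalike domain
print(is_official_bank_domain("https://bank.in.secure-pay.com"))  # False: spoofed prefix
```

Note that the check matches on the full hostname suffix, not a substring, so lookalike registrations such as `sbi-bank.in` and spoofed prefixes such as `bank.in.secure-pay.com` both fail, which is precisely the class of fraud the exclusive registrar model is meant to eliminate.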
References