#FactCheck – False Claim of Lord Ram's Hologram in Srinagar - Video Actually from Dehradun
Executive Summary:
A video purporting to be from Lal Chowk in Srinagar, which features Lord Ram's hologram on a clock tower, has gone viral on the internet. The CyberPeace Research Team discovered that the footage is actually from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip is being shared across the internet, mostly on X and Facebook. The video shows a car passing a clock tower bearing a picture of Lord Ram; as the car moves forward, a roadside screen showcasing songs about Lord Ram comes into view.
The claim is that the video is from Lal Chowk in Srinagar, Kashmir.
Fact Check:
The CyberPeace Research Team found that the information is false. First, we ran keyword searches based on the caption and found that the clock tower in Srinagar does not resemble the one in the video.
We found an article by NDTV about Srinagar Lal Chowk's clock tower, which stands in the middle of the road. This made us fairly confident that the video is not from Srinagar. We then broke the video down into frames and ran a reverse image search on them (a minimal sketch of this step follows).
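For readers who want to reproduce this step, the following minimal sketch splits a clip into roughly one frame per second, ready for upload to a reverse image search engine. It assumes Python with the opencv-python package installed; the filename viral_clip.mp4 is a hypothetical placeholder.

```python
# pip install opencv-python
import cv2

# Hypothetical input file; replace with the clip you are checking.
VIDEO_PATH = "viral_clip.mp4"

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
frame_index = 0
saved = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    # Save roughly one frame per second to keep the set manageable.
    if frame_index % int(fps) == 0:
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for reverse image search")
```

The saved JPEGs can then be uploaded to reverse image search services such as Google Lens or Yandex Images.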
The search surfaced another video showing a similar clock tower in Dehradun.
Taking a cue from this, we then searched for the tower in Dehradun and compared it with the video. It matched: the tower is the clock tower in Paltan Bazar, Dehradun, confirming that the video is actually from Dehradun, not Srinagar.
Conclusion:
After a thorough fact-check investigation of the video and its origin, we found that the visualisation of Lord Ram on the clock tower is from Dehradun, not Srinagar. The claim circulating among internet users that the visuals are from Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake
Related Blogs
In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and everyone from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal is being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier—and more profitable—to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and that 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creating, sharing, or threatening to create/share sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn conducted last year, the study said 63 percent of participants spoke about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online and offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms hosting deepfake porn of stars ranging from Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna to TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality, 15-second fake porn video on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. However, the irony of these disclaimers is not lost on anyone, especially when the same platforms host thousands of pieces of non-consensual deepfake pornography.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whomever a user wants, even taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and creators are not only targeting celebrities: common people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others manually collect money through the personal PayPal accounts of their employees/stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that sometimes involves several countries in the process.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw should be able to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/
Executive Summary:
Traditional Business Email Compromise (BEC) attacks have become smarter, using advanced technologies to enhance their capability. One such technology on the rise is WormGPT, a generative AI tool being leveraged by cybercriminals for BEC. This research discusses WormGPT, its features, and the risks associated with its use in criminal activities. The purpose is to give a general overview of how WormGPT is involved in BEC attacks and to give some advice on how to prevent them.
Introduction
BEC (Business Email Compromise), in simple terms, is a kind of cybercrime in which attackers target businesses in an effort to defraud them through the use of emails. Earlier, BEC attacks were executed through simple email scams and phishing. However, with the advancement of AI tools like WormGPT, such malicious activities have become sophisticated and difficult to identify. This paper discusses WormGPT, a generative AI tool, and how it is used to make BEC attacks more effective.
What is WormGPT?
Definition and Overview
WormGPT is a generative AI model designed to create human-like text. It is built on advanced machine learning algorithms, specifically leveraging large language models (LLMs). These models are trained on vast amounts of text data to generate coherent and contextually relevant content. WormGPT is notable for its ability to produce highly convincing and personalised email content, making it a potent tool in the hands of cybercriminals.
How WormGPT Works
1. Training Data: WormGPT is trained on a wide array of datasets, such as emails, articles, and other written material. This extensive training enables it to understand and mimic different writing styles and produce natural-sounding textual content.
2. Generative Capabilities: Once trained, WormGPT can generate text in response to specific prompts. For example, if a cybercriminal supplies a prompt concerning a company’s financial information, WormGPT can produce a genuine-looking email asking for more details.
3. Customization: WormGPT can be retrained at any time with a particular industry or organisation in mind. This customization enables attackers to make their emails resemble the business activities of the target, enhancing the chances of a successful attack.
Enhanced Phishing Techniques
Traditional phishing emails are often identifiable by their generic and unconvincing content. WormGPT improves upon this by generating highly personalised and contextually accurate emails. This personalization makes it harder for recipients to identify malicious intent.
Automation of Email Crafting
Previously, creating convincing phishing emails required significant manual effort. WormGPT automates this process, allowing attackers to generate large volumes of realistic emails quickly. This automation increases the scale and frequency of BEC attacks.
Exploitation of Contextual Information
WormGPT can be fed with contextual information about the target, such as recent company news or employee details. This capability enables the generation of emails that appear highly relevant and urgent, further deceiving recipients into taking harmful actions.
Implications for Cybersecurity
Challenges in Detection
The use of WormGPT complicates the detection of BEC attacks. Traditional email security solutions may struggle to identify malicious emails generated by advanced AI, as they can closely mimic legitimate correspondence. This necessitates the development of more sophisticated detection mechanisms.
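As a purely defensive illustration of one signal such mechanisms can build on, the sketch below flags messages whose display name impersonates a known executive while the sending domain does not match, and scores common BEC urgency language. The trusted-sender mapping, keyword list, and sample message are all hypothetical; a real filter would combine many more signals.

```python
# Minimal sketch of a display-name spoofing check, one building block of BEC detection.
from email.utils import parseaddr

# Hypothetical mapping of executive display names to their legitimate domain.
TRUSTED_SENDERS = {"jane doe": "example.com"}

URGENCY_WORDS = {"urgent", "wire transfer", "immediately", "confidential"}

def score_email(from_header: str, subject: str, body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    score = 0

    # Display name matches a known executive, but the domain does not.
    expected = TRUSTED_SENDERS.get(display_name.strip().lower())
    if expected and domain != expected:
        score += 3

    # Urgency and payment language typical of BEC lures.
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    return score

# Example: spoofed CEO address on a lookalike domain scores 6.
print(score_email("Jane Doe <jane.doe@examp1e.com>",
                  "Urgent wire transfer", "Please act immediately."))
```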
Need for Enhanced Training
Organisations must invest in training their employees to recognize signs of BEC attacks. Awareness programs should emphasise the importance of verifying email requests for sensitive information, especially when such requests come from unfamiliar or unexpected sources.
Implementation of Robust Security Measures
- Multi-Factor Authentication (MFA): MFA can add an additional layer of security, making it harder for attackers to gain unauthorised access even if they successfully deceive an employee.
- Email Filtering Solutions: Advanced email filtering solutions that use AI and machine learning to detect anomalies and suspicious patterns can help identify and block malicious emails (a minimal domain-policy check is sketched after this list).
- Regular Security Audits: Conducting regular security audits can help identify vulnerabilities and ensure that security measures are up to date.
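Expanding on the filtering point above, one low-cost check an organisation can automate is verifying that sender domains publish SPF and DMARC policies at all. The sketch below is a minimal illustration using the third-party dnspython library (an assumed dependency, not a mandated tool); example.com is a placeholder domain.

```python
# pip install dnspython
import dns.resolver

def get_txt(name: str) -> list[str]:
    """Return TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(record.strings).decode() for record in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_domain(domain: str) -> None:
    # SPF lives in a TXT record on the domain; DMARC on the _dmarc subdomain.
    spf = [r for r in get_txt(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")

# Placeholder domain; substitute the sender domains you want to vet.
check_domain("example.com")
```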
Case Studies
Case Study 1: Financial Institution
A financial institution fell victim to a BEC attack orchestrated using WormGPT. The attacker used the tool to craft a convincing email that appeared to come from the institution’s CEO, requesting a large wire transfer. The email’s convincing nature led to the transfer of funds before the scam was discovered.
Case Study 2: Manufacturing Company
In another instance, a manufacturing company was targeted by a BEC attack using WormGPT. The attacker generated emails that appeared to come from a key supplier, requesting sensitive business information. The attack exploited the company’s lack of awareness about BEC threats, resulting in a significant data breach.
Recommendations for Mitigation
- Strengthen Email Security Protocols: Implement advanced email security solutions that incorporate AI-driven threat detection.
- Promote Cyber Hygiene: Educate employees on recognizing phishing attempts and practising safe email habits.
- Invest in AI for Defense: Explore the use of AI and machine learning in developing defences against generative AI-driven attacks.
- Implement Verification Procedures: Establish procedures for verifying the authenticity of sensitive requests, especially those received via email.
Conclusion
WormGPT is a new tool in the arsenal of cybercriminals that has improved their ability to perform Business Email Compromise attacks more effectively and at scale. It is therefore critical to provide the defence community with information about WormGPT's capabilities and its implications for the threat landscape, and to strengthen protection systems against advanced and constantly evolving threats.
This means developing rigorous security protocols, raising general awareness of security solutions, and incorporating technologies such as artificial intelligence to mitigate, to the best extent possible, the risks that arise from generative AI tools.
Introduction
A famous saying goes, “Half knowledge is always dangerous”, but too much knowledge of anything can also lead to destruction. Recently, the infamous spyware strains WyrmSpy and DragonEgg were attributed to APT41, a group of Chinese hackers. APT41 is a state-endorsed clandestine group based in the People’s Republic of China that has been active since 2012. In contrast to numerous other state-supported groups, APT41 has a track record of jeopardising government organisations for clandestine activities as well as private organisations and enterprises for financial gain. The group targets Android devices through the spyware WyrmSpy and DragonEgg, which masquerade as legitimate applications. According to U.S. grand jury indictments from 2019 and 2020, the group was implicated in threatening more than 100 public and private individuals and organisations in the United States and around the world. Moreover, a detailed analysis report was shared by Lookout threat researchers, who have been actively monitoring and tracking both spyware strains.
Briefing on how spyware attacks on Android devices take place
To begin with, the malware imitates a legitimate Android application in order to show some sort of notification. Once it is successfully installed on the user’s device, it requests multiple device permissions to enable data exfiltration.
WyrmSpy collects log files, photos, device locations, SMS (read and write), and audio recordings. It has also been verified that no such malware has been detected on Google Play, even after multiple levels of security screening. These malicious apps are built with the intent of obtaining rooting access privileges on the device and monitoring activity according to commands received from C2 (command-and-control) servers.
Similarly, DragonEgg can collect data files, contacts, locations, and audio recordings, and it can also access camera photos once it has successfully compromised the device. DragonEgg receives a payload, also known as “smallmload.jar”, which comes from an APK (Android Package Kit).
WyrmSpy initially masquerades as a default operating system application, while DragonEgg simulates a third-party keyboard or messaging application.
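Because both families depend on acquiring broad permissions, one practical self-audit is to list which installed third-party apps actually hold sensitive permissions. The following minimal sketch drives adb from Python; it assumes adb is on the PATH and a device is connected with USB debugging enabled, and the permission list shown is only a small illustrative subset.

```python
# Minimal sketch: list installed third-party packages holding sensitive permissions.
# Assumes adb is on PATH and a device is connected with USB debugging enabled.
import subprocess

SENSITIVE = (
    "android.permission.READ_SMS",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
)

def adb(*args: str) -> str:
    """Run an adb command and return its stdout."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

# "pm list packages -3" lists third-party (user-installed) packages.
packages = [line.replace("package:", "").strip()
            for line in adb("shell", "pm", "list", "packages", "-3").splitlines()
            if line.strip()]

for pkg in packages:
    dump = adb("shell", "dumpsys", "package", pkg)
    granted = [p for p in SENSITIVE if f"{p}: granted=true" in dump]
    if granted:
        print(pkg, "->", ", ".join(granted))
```

Any unfamiliar app holding SMS, microphone, or location permissions is worth investigating further.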
Overview of the APT41 Chinese group's background
APT41 is a China-based group known for stealth activity and said to have been active since mid-2006. It is rumoured that APT41 was also a part of the 2nd Bureau of the People’s Liberation Army (PLA) General Staff Department’s (GSD) 3rd Department. Since 2006, more than 140 organisations have seen their security compromised, including some 20 strategically crucial companies. APT41 is also recognised for methodically plundering hundreds of terabytes of data from at least 141 organisations between 2006 and 2013. An attack typically begins with spear-phishing emails to the targeted victims. These emails contain official-looking templates, along with language pretending to be from a legitimate source, and carry a malicious attachment. As the victim opens the attached file, a backdoor hands control of the targeted machine over to the APT41 group's machine. Once unauthorised access is gained, the attackers visit and revisit the victim’s machine. The group can remain dormant for lengthy durations, sometimes for months or even years.
Advisory points to adhere to while using Android devices
- Apply security patch updates promptly; check for them at least once a week.
- Clear out unwanted junk files.
- Regularly clear the cache files of frequently used applications.
- Install only required applications, and only from the Google Play Store.
- Download APK files only when they come from trusted sources.
- Before granting device permissions, it is advisable to scan suspicious files or URLs on VirusTotal.com; this gives a good indication of malicious intent (a minimal API sketch follows this list).
- Install good antivirus software.
- Check the source of an email before opening any attachment in it.
- Never connect any randomly found device to your system.
- Moreover, users should keep track of their device activity. Rather than using devices purely for entertainment, it is important to pay attention to data protection on the device.
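As mentioned in the list above, VirusTotal lookups can also be automated. The sketch below queries the public VirusTotal v3 REST API for an existing URL report; the API key is a placeholder for one obtained from your own VirusTotal account, the requests library is an assumed dependency, and the URL shown is hypothetical.

```python
# pip install requests
import base64
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder; get a key from virustotal.com

def url_report(url: str) -> dict:
    """Fetch the existing VirusTotal analysis for a URL, if one exists."""
    # VirusTotal v3 identifies URLs by an unpadded URL-safe base64 of the URL itself.
    url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical URL; substitute the link you want to vet before opening it.
report = url_report("http://example.com/suspicious-apk-download")
stats = report["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, suspicious: {stats['suspicious']}")
```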
Conclusion
The Network Crack Program Hacker Group (NCPH), which grew into the APT41 group with malicious intent, earlier played the role of a grey-hat hacker; over time, the group grew greedy and turned to laundering more money by hacking networks and devices. The group conducts supply-chain attacks to gain unauthorised access to networks throughout the world, targeting hundreds of companies across an extensive selection of industries such as social media, telecommunications, government, defence, education, and manufacturing. Last but not least, many more fraudulent groups with malicious intent will form and operate in the future. It is up to individuals and organisations to secure themselves by practising basic security measures to safeguard against such threats and attacks.