#FactCheck - AI-manipulated image shows Anant Ambani and Radhika Merchant dressed in golden outfits
Executive Summary:
A claim went viral on social media that Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe. Thorough analysis revealed abnormalities in image quality, particularly between the face, neck, and hands and the claimed gold clothing, pointing to possible AI manipulation. A keyword search found no credible news reports or authentic images supporting the claim. Further analysis using the AI detection tools TrueMedia and Hive Moderation confirmed substantial evidence of fabrication, with a high probability of the image being AI-generated or a deepfake. Additionally, a photo from a previous event at Jio World Plaza matched the pose in the manipulated image, further disproving the claim and indicating that the image of Anant Ambani and Radhika Merchant wearing golden outfits during their pre-wedding cruise was digitally altered.

Claims:
Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.



Fact Check:
When we received the posts, we noticed anomalies of the kind typically found in edited or AI-manipulated images, particularly between the face, neck, and hands.

Such inconsistencies are very unusual in authentic images. We then ran the picture through the Hive Moderation AI-detection tool, which found it to be 95.9% likely AI-manipulated.

We also checked with another widely used AI detection tool, TrueMedia, which assessed the image as 100% likely to have been made using AI.




This implies that the image is AI-generated. To find the original image that was edited, we performed a keyword search. We found an image with the same pose as the manipulated one, titled "Radhika Merchant, Anant Ambani pose with Mukesh Ambani at Jio World Plaza opening". Comparing the two images confirms that the viral picture is a digitally altered version of this original.
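Image matching in fact-checks of this kind often relies on perceptual hashing, which scores two images as similar even after edits. Below is a minimal, purely illustrative average-hash sketch operating on hypothetical 4x4 grayscale grids; a real workflow would apply a library such as ImageHash to the actual photographs.

```python
# Illustrative average-hash ("aHash") sketch for matching a viral image
# against a suspected original. The "images" here are hypothetical 4x4
# grayscale grids so the example stays self-contained.

def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A hypothetical "original" and a lightly edited copy (one region
# recoloured, much as a garment swap would alter a photo).
original = [
    [ 10,  20, 200, 210],
    [ 15,  25, 205, 215],
    [180, 190,  30,  40],
    [185, 195,  35,  45],
]
edited = [row[:] for row in original]
edited[0][0] = 240  # the edit brightens one dark pixel

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # prints 1: the images share almost all of their structure
```

A low hamming distance between the viral image and a candidate original is evidence that one is an edited copy of the other, even when colours or clothing have been changed.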

Hence, it is confirmed that the viral image is digitally altered and has no connection with the second pre-wedding cruise party in Europe. The viral image is therefore fake and misleading.
Conclusion:
The claim that Anant Ambani and Radhika Merchant wore clothes made of pure gold at their pre-wedding cruise party in Europe is false. The analysis of the image showed signs of manipulation, and a lack of credible news reports or authentic photos supports that it was likely digitally altered. AI detection tools confirmed a high probability that the image was fake, and a comparison with a genuine photo from another event revealed that the image had been edited. Therefore, the claim is false and misleading.
- Claim: Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
- Claimed on: YouTube, LinkedIn, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Digitalisation presents both opportunities and challenges for micro, small, and medium enterprises (MSMEs) in emerging markets. Digital tools can increase business efficiency and reach but also increase exposure to misinformation, fraud, and cyber attacks. Such cyber threats can lead to financial losses, reputational damage, loss of customer trust, and other challenges hindering MSMEs' ability and desire to participate in the digital economy.
Today's information overload is a major driver of misinformation. Misinformation spreads or emerges from online sources, causing controversy and confusion in fields including politics, science, medicine, and business. One obvious adverse effect is that MSMEs might lose trust in the digital market. Misinformation can devalue a product, sow mistrust among customers, and hurt a company's revenue. The reach and speed with which misinformation can spread and ruin a company's brand, together with the difficulty businesses face in seeking recourse, may discourage MSMEs from fully embracing the digital ecosystem.
MSMEs are essential for innovation, job development, and economic growth. They contribute considerably to the GDP and account for a sizable share of enterprises. They serve as engines of economic resilience in many nations, including India. Hence, a developing economy’s prosperity and sustainability depend on the MSMEs' growth and such digital threats might hinder this process of growth.
There are widespread incidents of misinformation on social media, and these affect brand and product promotion. MSMEs also rely on online platforms for business activities, and threats such as misinformation and other digital risks can result in reputational damage and financial losses. A company's reputation being tarnished by inaccurate information, or a product or service being misrepresented, are just two examples, and such incidents can cause MSMEs to lose clients and revenue.
In the digital era, MSMEs need to be vigilant against false information in order to preserve their brand name, clientele, and financial standing. In the interconnected world of today, these organisations must develop digital literacy and resistance against misinformation in order to succeed in the long run. Information resilience is crucial for protecting and preserving their reputation in the online market.
The Impact of Misinformation on MSMEs
Misinformation can have serious financial repercussions, such as lost sales, higher expenses, legal fees, harm to the company's reputation, diminished consumer trust, bad press, and a long-lasting unfavourable impact on image. A company's products may lose value as a result of rumours, which might affect both sales and client loyalty.
Inaccurate information can also result in operational mistakes, which can interrupt regular corporate operations and cost the enterprise a lot of money. When inaccurate information on a product's safety causes demand to decline and stockpiling problems to rise, supply chain disruptions may occur. Misinformation can also lead to operational and reputational issues, which can cause psychological stress and anxiety at work. The peace of the workplace and general productivity may suffer as a result. For MSMEs, false information has serious repercussions that impact their capacity to operate profitably, retain employees, and maintain a sustainable business. Companies need to make investments in cybersecurity defence, legal costs, and restoring consumer confidence and brand image in order to lessen the effects of false information and ensure smooth operations.
When we refer to the financial implications caused by misinformation spread in the market, be it about the product or the enterprise, the cost is two-fold in all scenarios: there is loss of revenue and then the organisation has to contend with the costs of countering the impact of the misinformation. Stock Price Volatility is one financial consequence for publicly-traded MSMEs, as misinformation can cause stock price fluctuations. Potential investors might be discouraged due to false negative information.
Further, the reputational damage consequences of misinformation on MSMEs is also a serious concern as a loss of their reputation can have long-term damages for a carefully-cultivated brand image.
There are also operational disruptions caused by misinformation: for instance, false product recalls can take place and supplier mistrust or false claims about supplier reliability can disrupt procurement leading to disruptions in the operations of MSMEs.
Misinformation can negatively impact employee morale and productivity through its psychological effects, leading to stress and workplace tensions. Staff confidence also suffers when the brand itself is the subject of misinformation. Internal operational stability is a core component of any organisation's success.
Misinformation: Key Risk Areas for MSMEs
- Product and Service Misinformation
For MSMEs, misinformation about products and services poses a serious danger since it undermines their credibility and the confidence clients place in the enterprise and its products or services. Because this misleading material might mix in with everyday activities and newsfeeds, viewers may find it challenging to identify fraudulent content. For example, falsehoods and rumours about a company or its goods may travel quickly through social media, impacting the confidence and attitude of customers. Algorithms that favour sensational material have the potential to magnify disinformation, resulting in the broad distribution of erroneous information that can harm a company's brand.
- False Customer Reviews and Testimonials
False testimonies and evaluations pose a serious risk to MSMEs. These might be abused to damage a company's brand or lead to unfair competition. False testimonials, for instance, might mislead prospective customers about the calibre or quality of a company’s offerings, while phony reviews can cause consumers to mistrust a company's goods or services. These actions frequently form a part of larger plans by rival companies or bad individuals to weaken a company's position in the market.
- Misleading Information about Business Practices
False statements or distortions regarding a company's operations constitute misleading information about business practices. This might involve dishonest marketing, fabrications regarding the efficacy or legitimacy of goods, and inaccurate claims about a company's compliance with laws or moral principles. Such incorrect information can result in a decline in consumer confidence, harm to one's reputation, and even legal issues if consumers or rival businesses act upon it. For example, allegations of wrongdoing or criminal activity pertaining to a business can inflict a great deal of harm before the truth is confirmed, even if they are disproven later.
- Fake News Related to Industry and Market Conditions
By skewing consumer views and company actions, fake news about market and industry circumstances can have a significant effect on MSMEs. For instance, false information about market trends, regulations, or economic situations might make consumers lose faith in particular industries or force corporations to make poor strategic decisions. The rapid dissemination of misinformation on online platforms intensifies its effects on enterprises that significantly depend on digital engagement for their operations.
Factors Contributing to the Vulnerability of MSMEs
- Limited Resources for Verification
MSMEs have a small resource pool. Information verification is typically not a top priority for most. MSMEs usually lack the resources needed to verify the information and given their limited resources, they usually tend to deploy the same towards other, more seemingly-critical functions. They are more susceptible to misleading information because they lack the capacity to do thorough fact-checking or validate the authenticity of digital content. Technology tools, human capital, and financial resources are all in low supply but they are essential requirements for effective verification processes.
- Inadequate Digital Literacy
Digital literacy is required for effective day-to-day operations. Fake reviews, rumours, or fake images commonly used by malicious actors can result in increased scrutiny or backlash against the targeted business. A lack of awareness combined with limited resources usually leaves the affected MSME with a weak redressal plan. Due to low digital literacy in this domain, a large number of MSMEs are more susceptible to false information and other online threats. Inadequate knowledge of, and ability to use, digital platforms securely and effectively can lead to poor decisions and raise vulnerability to fraud, deception, and online scams.
- Lack of Crisis Management Plans
MSMEs frequently function without clear-cut procedures for handling crises. They lack the strategic preparation necessary to deal with the fallout from disinformation and cyberattacks. Proactive crisis management plans usually incorporate procedures for detecting, addressing, and lessening the impact of digital harms, which are frequently absent from MSMEs.
- High Dependence on Social Media and Online Platforms
The marketing strategy of most MSMEs is heavily reliant on social media and online platforms. While a digital-first operation reduces the need for large capital outlays on stores or outlets, it also creates pressure to stay relevant to online trends and keep products attractive to the customer base. MSMEs depend more and more on social media and other online channels for marketing, customer interaction, and company operations. These platforms are highly beneficial, but they also expose organisations to a greater risk of false information and online fraud. Heavy reliance on them, coupled with the absence of proper security measures and awareness, can result in serious interruptions to operations and monetary losses.
CyberPeace Policy Recommendations to Enhance Information Resilience for MSMEs
CyberPeace advocates for establishing stronger legal frameworks to protect MSMEs from misinformation. Key recommendations include:
- Governments should establish regulations that build trust in online business activities and mitigate fraud and misinformation risks.
- Mandatory training programmes covering online safety and misinformation awareness should be implemented for MSME businesses.
- Enhanced reporting mechanisms should be developed to address digital harm incidents promptly.
- Governments should establish strict penalties for deliberate spreaders of misinformation, similar to those for copyright or intellectual property violations.
- Community-based approaches should be encouraged to help MSMEs navigate digital challenges effectively.
- Donor communities and development agencies should invest in digital literacy and cybersecurity training for MSMEs, focusing on misinformation mitigation and safe online practices.
- Platform accountability should be increased, with social media and online platforms playing a more active role in removing content from known scam networks and responding to fraudulent activity reports.
- There should be investment in comprehensive digital literacy solutions for MSMEs that incorporate cyber hygiene and discernment skills to combat misinformation.
Conclusion
Misinformation poses a serious risk to MSMEs' digital resilience, operational effectiveness, and financial stability. MSMEs are susceptible to false information because of limited technical resources, a lack of crisis management strategies, and insufficient digital literacy. They are also more vulnerable to false information and online fraud because of their heavy reliance on social media and other online platforms. Addressing these challenges requires strengthening their cyber hygiene and information resilience. Robust policy and regulatory frameworks, mandatory online safety training programmes, and improved reporting procedures are all needed to enhance the overall information landscape.
References:
- https://www.dai.com/uploads/digital-downsides.pdf
- https://www.indiacode.nic.in/bitstream/123456789/2013/3/A2006-27.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1946375
- https://dai-global-digital.com/digital-downsides-the-economic-impact-of-misinformation-and-other-digital-harms-on-msmes-in-kenya-india-and-cambodia.html

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This rapid development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Harmless errors can take the form of a wrong answer to a trivia question, whereas critical failures can be as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not grounded in training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine learning algorithms is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect these biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
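To make the adversarial-attack idea concrete, here is a self-contained toy: a hypothetical linear classifier whose decision flips when each input feature is nudged by a small, FGSM-style step in the direction of the corresponding weight's sign. All weights and inputs are invented for illustration; real attacks target deep networks, but the principle is the same.

```python
# Toy demonstration of an adversarial perturbation against a linear
# classifier. A tiny, targeted change to each feature flips the
# prediction even though the input barely changes.

def predict(weights, bias, x):
    """Classify as 1 if the weighted sum crosses zero, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_example(weights, x, epsilon):
    """Shift each feature by +/-epsilon in the direction that most
    increases the classifier's score (FGSM-style)."""
    return [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [0.9, -0.5, 0.4]   # a hypothetical trained model
bias = -0.1
x = [0.1, 0.3, 0.1]          # a benign input

print(predict(weights, bias, x))        # 0: score = -0.12

x_adv = adversarial_example(weights, x, epsilon=0.2)
print(predict(weights, bias, x_adv))    # 1: score = +0.24, class flipped
```

The perturbation is only 0.2 per feature, yet the classifier's output flips, which is why biased or adversarially crafted inputs are such a potent source of wrong answers.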
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI and its capabilities and limitations. Transparent communication is the key to this.
- Implement regular audits and rigorous quality checks to maintain high standards. This will in turn prevent any form of lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation. This fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; lax oversight, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
We can create a future that is AI-development-driven and benefits us all by fostering ethical AI development and enabling innovation. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Introduction
Phishing-as-a-Service (PhaaS) platform 'LabHost' has been a significant player in cybercrime targeting North American banks, particularly financial institutions in Canada. LabHost offers turnkey phishing kits, infrastructure for hosting pages, email content generation, and campaign overview services to cybercriminals in exchange for a monthly subscription. The platform's popularity surged after it introduced custom phishing kits for Canadian banks in the first half of 2023. Fortra reports that LabHost has overtaken Frappo, cybercriminals' previous favorite PhaaS platform, and is now the primary driving force behind most phishing attacks targeting Canadian bank customers.
We now operate in a digital realm where the barriers to entry for nefarious activities are crumbling and the tools of the trade are packaged and sold with the same customer service one might expect from a legitimate software company. This is the world of Phishing-as-a-Service (PhaaS), and at the forefront of this ominous trend is LabHost, a platform that has been instrumental in escalating attacks on North American banks, with a particular focus on Canadian financial institutions.
LabHost is not a newcomer to the cybercrime scene, but its ascent to infamy was catalyzed by the introduction of custom phishing kits tailored for Canadian banks in the first half of 2023. The platform operates on a subscription model, offering turnkey solutions that include phishing kits, infrastructure for hosting malicious pages, email content generation, and campaign overview services. For a monthly fee, cybercriminals are handed the keys to a kingdom of deception and theft.
Emergence of Labhost
The rise of LabHost has been meticulously chronicled by cybersecurity firms, which report that LabHost has dethroned the previously favored PhaaS platform, Frappo, and become the primary driving force behind the majority of phishing attacks targeting customers of Canadian banks. Despite suffering a disruptive outage in early October 2023, LabHost has rebounded with vigor, orchestrating several hundred attacks per month.
Fortra's investigation into LabHost's operations reveals a tiered membership system: Standard, Premium, and World, with monthly fees of $179, $249, and $300, respectively. Each tier offers an escalating scope of targets, from Canadian banks to 70 institutions worldwide, excluding North America. The phishing templates provided by LabHost are not limited to financial entities; they also cover online services like Spotify, postal delivery services like DHL, and regional telecommunication providers.
LabRat
The true ingenuity of LabHost lies in its integration with 'LabRat,' a real-time phishing management tool that enables cybercriminals to monitor and control an active phishing attack. This tool is a linchpin in man-in-the-middle style attacks, designed to capture two-factor authentication codes, validate credentials, and bypass additional security measures. In essence, LabRat is the puppeteer's strings, allowing the phisher to manipulate the attack with precision and evade the safeguards that are the bulwarks of our digital fortresses.
LabSend
In the aftermath of its October disruption, LabHost unveiled 'LabSend,' an SMS spamming tool that embeds links to LabHost phishing pages in text messages. This tool orchestrates a symphony of automated smishing campaigns, randomizing portions of text messages to slip past the vigilant eyes of spam detection systems. Once the SMS lure is cast, LabSend responds to victims with customizable message templates, a Machiavellian touch to an already insidious scheme.
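On the defensive side, a common countermeasure to this kind of template randomization is fuzzy matching: flagging messages that are near-duplicates of a known smishing template even when the filler text varies. Below is a minimal sketch using Python's standard difflib; the template, messages, and threshold are all hypothetical illustrations, not a production filter.

```python
# Fuzzy matching of SMS text against a known smishing template. Random
# suffixes or punctuation changes barely move the similarity ratio, so
# near-duplicates are still flagged.
import difflib

# Hypothetical template a defender has previously identified (defanged URL).
KNOWN_TEMPLATE = "your parcel is held at the depot pay the fee at hxxp://example-bad.test"

def is_near_duplicate(message, template=KNOWN_TEMPLATE, threshold=0.8):
    """Flag messages whose character-level similarity to the template
    exceeds the threshold, despite randomised filler text."""
    ratio = difflib.SequenceMatcher(None, message.lower(), template).ratio()
    return ratio >= threshold

samples = [
    # Randomised variant of the template: punctuation plus a random ref code.
    "Your parcel is held at the depot, pay the fee at hxxp://example-bad.test ref 8261",
    # A benign, unrelated message.
    "Lunch at noon tomorrow?",
]
for s in samples:
    print(is_near_duplicate(s))  # True, then False
```

Exact-match blocklists fail the moment a single character changes; similarity scoring is one reason randomization alone is not enough to evade a well-built spam filter.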
The Proliferation of PhaaS
The proliferation of PhaaS platforms like LabHost, 'Greatness,' and 'RobinBanks' has democratized cybercrime, lowering the threshold for entry and enabling even unskilled hackers to launch sophisticated attacks. These platforms are catalysts for an exponential increase in the pool of threat actors, thereby magnifying the global impact of cybercrime.
The ease with which these services can be accessed and utilized belies the complexity and skill traditionally required to execute successful phishing campaigns. Stephanie Carruthers, who leads an IBM X-Force phishing research project, notes that crafting a single phishing email can consume upwards of 16 hours, not accounting for the time and resources needed to establish the infrastructure for sending the email and harvesting credentials.
PhaaS platforms like LabHost have commoditized this process, offering a buffet of malevolent tools that can be customized and deployed with a few clicks. The implications are stark: the security measures that businesses and individuals have come to rely on, such as multi-factor authentication (MFA), are no longer impenetrable. PhaaS platforms have engineered ways to circumvent these defenses, rendering them vulnerable to exploitation.
Emerging Cyber Defense
In the face of this escalating threat, a multi-faceted defense strategy is imperative. Cybersecurity solutions like SpamTitan employ advanced AI and machine learning to identify and block phishing threats, while end-user training platforms like SafeTitan provide ongoing education to help individuals recognize and respond to phishing attempts. However, with phishing kits now capable of bypassing MFA, it is clear that more robust solutions, such as phishing-resistant MFA based on FIDO/WebAuthn authentication or Public Key Infrastructure (PKI), are necessary to thwart these advanced attacks.
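The reason origin-bound schemes like FIDO/WebAuthn resist the man-in-the-middle relay attacks described above can be sketched in a few lines. Real WebAuthn uses public-key signatures; this simplified illustration substitutes an HMAC and hypothetical domain names, but the essential point survives: the response is bound to the origin the client actually sees, so a response captured on a look-alike domain fails verification even when relayed in real time.

```python
# Simplified sketch of origin binding, the property that makes
# FIDO/WebAuthn phishing-resistant. An HMAC over a shared key stands in
# for the real public-key signature.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(32)  # stands in for the credential's key pair

def client_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator binds its response to the origin it sees."""
    return hmac.new(SHARED_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, expected_origin: str) -> bool:
    """The server recomputes the response for its own origin and compares."""
    expected = hmac.new(SHARED_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser really is on the bank's domain.
ok = server_verify(challenge, client_sign(challenge, "bank.example"),
                   "bank.example")

# Relay phish: the victim signs on a look-alike domain and the attacker
# forwards the response; the origin mismatch breaks verification.
relayed = server_verify(challenge, client_sign(challenge, "bank-secure.example"),
                        "bank.example")

print(ok, relayed)  # True False
```

A one-time code, by contrast, carries no information about where it was typed, which is exactly the gap tools like LabRat exploit.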
Conclusion
The emergence of PhaaS platforms represents a significant shift in the landscape of cybercrime, one that requires a vigilant and sophisticated response. As we navigate this treacherous terrain, it is incumbent upon us to fortify our defenses, educate our users, and remain ever-watchful of the evolving tactics of cyber adversaries.
References
- https://www-bleepingcomputer-com.cdn.ampproject.org/c/s/www.bleepingcomputer.com/news/security/labhost-cybercrime-service-lets-anyone-phish-canadian-bank-users/amp/
- https://www.techtimes.com/articles/302130/20240228/phishing-platform-labhost-allows-cybercriminals-target-banks-canada.htm
- https://www.spamtitan.com/blog/phishing-as-a-service-threat/
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms