#FactCheck – Debunked: Dhoni's Viral Picture Misinterpreted as Political Support
Executive Summary:
A picture went viral with the false claim that Dhoni was supporting the Congress party; it actually shows him celebrating Chennai Super Kings' milestone of 6 million followers on X (formerly known as Twitter) in 2020. Dhoni's gesture was misinterpreted by many, which resulted in the spread of false information. The CyberPeace Research team investigated the photo's origins and confirmed its true context through a reverse image search, which showed that news outlets and CSK's official social media channels had shared it. The case illustrates the value of fact verification and the role of accurate information in countering the fake news epidemic.
Claims:
An image of former Indian cricket captain Mahendra Singh Dhoni was claimed to show him urging people to vote for the Congress party. In the photo he is wearing the Chennai Super Kings (CSK) jersey, showing his right palm and gesturing the number 'one' with his left index finger. In reality, he was celebrating Chennai Super Kings' milestone achievement on X (formerly Twitter) in 2020. Many people are sharing the misinterpretation, knowingly or unknowingly, across social media platforms.
Fact Check:
After receiving the post, we ran a reverse image search on the image and found a news article published by NDTV. According to the news outlet, the photos show Dhoni and his teammates celebrating CSK's milestone of reaching six million followers on X (formerly known as Twitter).
The image carries the watermark of a tweet by @chennaiipl, so we looked into the official account of Chennai Super Kings on X (formerly known as Twitter). And voila! We found the exact post, which surfaced on X on 5th October 2020.
Additionally, we found a video posted on the X handle of CSK featuring other cricketers celebrating the six-million-followers milestone and thanking the audience for their support. It, too, was posted on 5 October 2020. The caption of the video reads: “Chennai Super #SixerOnTwitter! A big thanks to all the super fans for each and every bouquet and brickbat throughout the last decade. All the #yellove to you. #WhistlePodu”
Therefore, it can be concluded that the viral image of MS Dhoni supporting Congress is false and misleading.
Conclusion:
The claim circulating on online media that a picture shows Mahendra Singh Dhoni supporting the Congress Party has been proven untrue. The actual photograph shows Dhoni congratulating the Chennai Super Kings on reaching six million followers on social media in 2020. This highlights the need to verify the facts of any news circulating online.
- Claim: A photo allegedly depicting former Indian cricket captain Mahendra Singh Dhoni encouraging people to support the Congress party in elections surfaced online.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Introduction
The CID of Jharkhand Police has uncovered a network of around 8000 bank accounts engaged in cyber fraud across the state, with a surprising 25% of the fraudulent accounts concentrated in Deoghar district. In a recent meeting with bank officials, the CID shared the compiled data, with 20% of the identified accounts traced to State Bank of India branches. This revelation, surpassing even Jamtara's cyber fraud reputation, prompts questions about the extent of cybercrime in Jharkhand. Under Director General Anurag Gupta's leadership, the CID has registered 90 cases and apprehended 468 individuals through the Prakharna portal, and has seized 1635 SIM cards and 1107 mobile phones in its drive to combat cybercrime.
In this operation, Jharkhand Police's Criminal Investigation Department (CID) has built a comprehensive database comprising information on about 8000 bank accounts tied to cyber fraud operations in the state. This vital information has aided the launch of investigations to identify the account holders implicated in these illegal activities. Furthermore, the CID shared this information with bank officials at a meeting on 12 January 2024 to speed up the identification process.
Background of the Investigation
A stunning 2000 of the 8000 bank accounts under investigation are in Deoghar district alone, with 20 per cent of these accounts connected to various State Bank of India branches. The discovery of 8000 bank accounts related to cybercrime in Jharkhand is shocking and disturbing. Surprisingly, Deoghar district has exceeded even Jamtara, long notorious for cybercrime, accounting for around 25% of the fraudulent accounts discovered in the state.
As per the information provided by the CID Crime Branch, most of the accounts opened in banks are currently under investigation, and around 2000 have already been blocked by the investigating agencies.
Recovery Process
During the investigation, it was found that most of these accounts were effectively running on rent: cybercriminals opened them using fake phone numbers along with Aadhaar cards and identity cards obtained from people, who in return receive a fixed amount every month as the account holders.
The CID has been unrelenting in its pursuit of cybercriminals. Police have registered 90 cases and arrested 468 people involved in cyber fraud using the Prakharna portal. During raids in various cities, officials confiscated 1635 SIM cards and 1107 mobile phones.
The Crime Branch has also released a city-wise breakdown of where the accounts were opened:
- Deoghar: 2500
- Dhanbad: 1183
- Ranchi: 959
- Bokaro: 716
- Giridih: 707
- Jamshedpur: 584
- Hazaribagh: 526
- Dumka: 475
- Jamtara: 443
Impact on the Financial Institutions and Individuals
These cyber scams significantly affect financial organisations and individuals; let us examine the implications.
- Victims: Cybercrime victims suffer significant financial setbacks, which can lead to long-term financial insecurity. In addition, people frequently experience mental distress as a result of the breach of personal information, which causes worry, fear, and a loss of faith in the digital financial system. One of the most difficult problems for victims is the recovery process, which includes retrieving lost funds and repairing the harm caused by the cyberattack. Many find this process time-consuming and difficult, and in a lot of cases people do not know where or when to seek help. Hence, awareness of cybercrimes and of the reporting mechanisms is necessary to guide victims through the recovery process, aiding them in retrieving lost assets and repairing the harm inflicted by cyberattacks.
- Financial Institutions: Financial institutions face direct consequences when they incur significant losses due to cyber financial fraud. Unauthorised account access, fraudulent transactions, and the compromise of client data result in immediate cash losses and costs associated with investigating and mitigating the breach's impact. Such assaults degrade the reputation of financial organisations, undermine trust, erode customer confidence, and result in the loss of potential clients.
- Future Implications and Solutions: Recently, the CID discovered a sophisticated cyber fraud network in Jharkhand. As a result, it is critical to assess the possible long-term repercussions of such discoveries and propose proactive ways to improve cybersecurity. The CID's findings are expected to increase awareness of the ongoing threat of cyber fraud to both people and organisations. Given the current state of cyber dangers, it is critical to implement rigorous safeguards and impose heavy punishments on cyber offenders. Government organisations and regulatory bodies should also adapt their present cybersecurity strategies to address the problems posed by modern cybercrime.
Solution and Preventive Measures
Several solutions can help combat the growing nature of cybercrime. The first and foremost step is to enhance cybersecurity education at all levels, including:
- Individual Level: To improve cybersecurity for individuals, raising awareness across all age groups is crucial. This can be done by learning about potential threats, following best online practices and cyber hygiene, and educating people to safeguard themselves against financial frauds such as phishing, smishing, etc.
- Multi-Layered Authentication: Encouraging individuals to enable MFA for their online accounts adds an extra layer of security by requiring additional verification beyond passwords.
- Continuous Monitoring and Incident Response: Continuously monitor your financial transactions, regularly review online statements and transaction history to ensure everyday transactions align with your expenditures, and set up account alerts for transactions exceeding a specified amount or falling outside your usual activity.
- Report Suspicious Activity: If you see any fraudulent transactions or activity, contact your bank or financial institution immediately; they will lead you through investigating and resolving the problem. Be ready to supply the necessary paperwork to support your claim.
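The monitoring and alerting advice above boils down to a simple rule: flag any transaction that exceeds a threshold you set. A minimal sketch in Python illustrates the idea; the records, amounts, and threshold are all hypothetical and not tied to any real banking API.

```python
# Sketch of a personal transaction-alert rule: flag any transaction
# above a user-chosen threshold. All figures here are illustrative.

THRESHOLD = 10_000  # alert on amounts above this (e.g. rupees)

def flag_suspicious(transactions, threshold=THRESHOLD):
    """Return the transactions whose amount exceeds the threshold."""
    return [t for t in transactions if t["amount"] > threshold]

history = [
    {"date": "2024-01-10", "amount": 1_500, "desc": "groceries"},
    {"date": "2024-01-11", "amount": 45_000, "desc": "unknown transfer"},
    {"date": "2024-01-12", "amount": 800, "desc": "utilities"},
]

for t in flag_suspicious(history):
    print(f"ALERT: {t['date']} {t['desc']} for {t['amount']}")
```

Real bank alerts work on the same principle, usually with extra signals such as location and merchant category; the point is that any transaction outside your normal pattern should trigger a review.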
How to reduce the risks
- Freeze compromised accounts: If you think that any of your accounts have been compromised, call the bank immediately and request that the account be frozen or temporarily suspended, preventing further unauthorised transactions.
- Update passwords: Update and change the passwords for all your financial accounts, emails, and online banking accounts regularly. If you suspect any unauthorised access, report it immediately, and always enable MFA, which adds an extra layer of protection to your accounts.
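The one-time codes behind most MFA authenticator apps are generated with the TOTP algorithm standardised in RFC 6238. As a rough sketch of how those codes are derived from a shared secret and the current time (using only Python's standard library; the secret below is the RFC's published test key, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second steps since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII key "12345678901234567890" in Base32
RFC_TEST_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_TEST_SECRET, t=59, digits=8))  # RFC expects 94287082
```

Because the code depends on both the secret and the current 30-second window, a stolen password alone is not enough to log in, which is exactly why enabling MFA blunts most credential-theft attacks.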
Conclusion
The CID's discovery of a cyber fraud network in Jharkhand is a stark reminder of the ever-changing nature of cybersecurity threats. Strong cybersecurity measures are necessary to prevent such activities and protect individuals and institutions from being targeted by cyber fraud. As the digital ecosystem continues to grow, it is important to stay vigilant and alert, both as individuals and as a society. We should actively participate in awareness activities to keep ourselves updated.
References
- https://avenuemail.in/cid-uncovers-alarming-cyber-fraud-network-8000-bank-accounts-in-jharkhand-involved/
- https://www.the420.in/jharkhand-cid-cyber-fraud-crackdown-8000-bank-accounts-involved/
- https://www.livehindustan.com/jharkhand/story-cyber-fraudsters-in-jharkhand-opened-more-than-8000-bank-accounts-cid-freezes-2000-accounts-investigating-9203292.html
Introduction
According to a shocking report, there are multiple scam loan apps on the App Store in India that charge excessive interest rates and force users to pay by blackmailing and harassing them. Apple has prohibited and removed these apps from the App Store, but they may still be installed and running on your iPhone. You must delete any of these apps if you have downloaded them. Read on to learn the names of these apps and how the fraud operated.
Why did Apple ban these apps?
- Apple has taken action to remove certain apps from the Indian App Store. These apps were engaging in unethical behaviour, such as impersonating financial institutions, demanding high fees, and threatening borrowers. Here are the titles of these apps, as well as what Apple has said about their suspension.
- Following user concerns, Apple removed six loan apps from the Indian App Store. Loan apps include White Kash, Pocket Kash, Golden Kash, Ok Rupee, and others.
- According to multiple user reviews, certain apps sought unjustified access to users’ contact lists and media. These apps also charged exorbitant and unwarranted fees. Furthermore, the companies were found to engage in unethical tactics such as charging high interest rates and “processing fees” equal to half the loan amount.
- Some lending app users have reported being harassed and threatened for failing to return their loans on time. In some circumstances, the apps threatened the user’s contacts if payment was not completed by the deadline. According to one user, the app company threatened to produce and send false photographs of her to her contacts.
- These loan apps were removed from the App Store, according to Apple, because they broke the norms and standards of the Apple Developer Program License Agreement. These apps were discovered to be falsely claiming financial institution connections.
Issue of Fake loan apps on the App Store
- “The App Store and our App Review Guidelines are designed to ensure we provide our users with the safest experience possible,” Apple explained. “We do not tolerate fraudulent activity on the App Store and have strict rules against apps and developers who attempt to game the system.”
- In 2022, Apple blocked nearly $2 billion in fraudulent App Store sales. Furthermore, it rejected nearly 1.7 million software submissions that did not match Apple’s quality and safety criteria and cancelled 428,000 developer accounts due to suspected fraudulent activities.
- The scammers also used heinous tactics to force the loanees to pay. According to reports, the scammers behind the apps gained access to the user’s contact list as well as their images. They would morph the images and then scare the individual by sharing their fake nude photos with their whole contact list.
Dangerous financial fraud apps have surfaced on the App Store
- TechCrunch acquired a user review from one of these apps. “I borrowed an amount in a helpless situation, and a day before the repayment due date, I got some messages with my picture and my contacts in my phone saying that repay your loan or they will inform our contacts that you are not paying the loan,” it said.
- Sandhya Ramesh, a journalist from The Print, recently tweeted a screenshot of a direct message she got. A victim’s friend told a similar story in the message.
- TechCrunch contacted Apple, who confirmed that the apps had been removed from the App Store for breaking the Apple Developer Program License Agreement and guidelines.
Conclusion
Recently, some users have reported that quick-loan applications such as White Kash, Pocket Kash, and Golden Kash appeared on the Top Finance applications chart. These apps demand unauthorised and intrusive access to users’ contact lists and media. According to hundreds of user reviews, these apps charged exorbitantly high and unwarranted fees. They used unscrupulous techniques such as demanding “processing fees” equal to half the loan amount and charging high interest rates. Users were also harassed and threatened over repayment. If payments were not made by the due date, the lending applications threatened to notify users’ contacts. According to one user, the app provider even threatened to generate phoney nude images of her and send them to her contacts.
Introduction
With the rapid development of technology, voice cloning schemes are one issue that has recently come to light. Scammers are moving forward with AI, and their methods and plans for deceiving and scamming people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself from them.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film, produced by artificial intelligence (AI), that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something they never did in audio or visual form; widely circulated deep voice impersonations of the American President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in deep voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake speech frauds use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by utilising a deep fake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake speech frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals. This can involve asking victims for money, disclosing login credentials, or revealing sensitive information. Deepfake AI voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust. These recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations. This is particularly risky in legal processes, because falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and every organisation and the general public must be informed of its risks and adopt the appropriate safety measures.
How to spot voice deepfake and avoid them?
Deep fake technology has made it simpler for con artists to edit audio recordings and create phoney voices that exactly mimic real people. As a result, a brand-new scam called the “deep fake voice scam” has surfaced. In order to trick the victim into handing over money or private information, the con artist assumes another person’s identity and uses a fake voice. What are some ways to protect oneself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- Pay special attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right could be a deep fake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity in order to avoid falling for a deep fake voice scam. When in doubt, ask for their name, job title, and employer. You can then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it’s a warning sign of a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deep fake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, the field of AI voice deep fake technology is fast expanding and has huge potential for beneficial and detrimental effects. While deep fake voice technology has the potential to be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it may also be used for evil, such as deep fake voice frauds and impersonation to fabricate stories. Users must be aware of the hazard and take the necessary precautions to protect themselves as AI voice deep fake technology develops, making it harder to detect and prevent deep fake schemes. Additionally, it is necessary to conduct ongoing research and develop efficient techniques to identify and control the risks related to this technology. We must deploy AI appropriately and ethically to ensure that AI voice-deep fake technology benefits society rather than harming or deceiving it.