# FactCheck: AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image is being shared with the false claim that it shows President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has determined that the photo was produced by generative AI and is not real; multiple visual discrepancies mark it as an AI product.
Claims:
A viral image claims to show US President Joe Biden wearing a military outfit during a meeting with military officials. The picture is being shared on social media with the false claim that it shows President Biden convening to authorize the use of the US military in the Middle East; in fact, it was created using artificial intelligence.
Similar Post:
Fact Check:
The CyberPeace Research Team found that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies give it away as an AI-generated image:
- President Biden's eyes are fully black.
- The face of one of the military officials is blended and distorted.
- A phone on the table stands upright without any support.
We then ran the image through an AI image detection tool, which scored it 4% human and 96% AI, indicating deepfake content. A second tool, Hive Detector, classified the image as 100% AI-generated, again indicating deepfake content.
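The percentage scores reported by such detectors can be turned into a verdict with a simple threshold rule. The sketch below is illustrative only; the 50% cutoff is an assumption for demonstration, not part of either tool's documented behaviour.

```python
def classify_image(ai_probability: float, threshold: float = 0.5) -> str:
    """Label an image from a detector's AI-probability score.

    ai_probability: detector output in [0, 1] (e.g. 0.96 for "96% AI").
    threshold: assumed decision cutoff; real tools may apply other rules.
    """
    if not 0.0 <= ai_probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return "likely AI-generated" if ai_probability >= threshold else "likely human-made"

# Scores reported by the two detectors for the viral image:
print(classify_image(0.96))  # first tool: 96% AI -> likely AI-generated
print(classify_image(1.00))  # Hive Detector: 100% AI -> likely AI-generated
```

In practice a single score should never be treated as conclusive; the visual discrepancies listed above and the detector outputs together support the verdict.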
Conclusion:
Thus, the growth of AI-produced content makes it harder to separate fact from fiction, particularly on social media. The fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. With technology constantly evolving, it is important that people stay watchful and rely on verified sources to fight the spread of disinformation. Furthermore, initiatives to raise awareness of the existence and impact of AI-produced content should be undertaken to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake
Introduction
As technology develops, voice cloning scams are one issue that has recently come to light. Scammers are adopting AI, and their methods for deceiving and defrauding people have changed with it. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. This post looks at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A "deepfake" is fake or altered audio, video, or imagery produced by artificial intelligence (AI) that passes for the real thing; the name combines "deep learning" and "fake". Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something they never did; well-known examples include deepfake videos and voice impersonations of the American President. Voice impersonation technology can be used maliciously, for instance in voice fraud or to disseminate false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone over the phone and pressure victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice, aiming to earn the victim's trust and raise the likelihood they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or institutions in order to mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology is also employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, fabricated audio can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and manipulating victims, and organisations and the general public alike must be informed of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the "deepfake voice scam", has surfaced: the con artist assumes another person's identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot these scams and stay safe:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- If anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? Anything that doesn't seem right can be a sign of voice fraud.
- Verify the caller’s identity
- It's crucial to verify the caller's identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information this way; if a caller does, it's a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you've fallen victim to voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. It can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, but it can also be used for harm, such as voice-cloning frauds and impersonation to fabricate stories. As the technology develops and deepfakes become harder to detect, users must be aware of the hazard and take the necessary precautions to protect themselves. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. AI must be deployed responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
The Indian Computer Emergency Response Team (CERT-In) is the nodal government agency appointed as the national agency for cyber incidents and cyber security incidents under section 70B of the Information Technology (IT) Act, 2000. CERT-In has issued a cautionary note to Microsoft Edge, Adobe, and Google Chrome users, alerting them to several vulnerabilities that hackers might exploit to obtain private data and run arbitrary code on a targeted machine. CERT-In advises users to apply the security updates right away to guard against the problem.
Vulnerability note
Vulnerability notes CIVN-2023-0361, CIVN-2023-0362, and CIVN-2023-0364, for Google Chrome for Desktop, Microsoft Edge, and Adobe respectively, include more information on the alert. CERT-In has categorized the problems as high-severity and recommends applying the security updates immediately. According to the warning, Google Chrome versions earlier than 120.0.6099.62 on Linux and Mac, or earlier than 120.0.6099.62/.63 on Windows, are at risk. Similarly, the vulnerability may affect users of Microsoft Edge browser versions earlier than 120.0.2210.61.
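Whether an installed browser falls inside the affected range can be checked by comparing dotted version strings numerically rather than lexically. A minimal sketch follows; the patched-version constants are taken from the advisory above, while the helper names are illustrative.

```python
def parse_version(v: str) -> tuple:
    """Convert a dotted version string like '120.0.6099.62' into a tuple of ints."""
    return tuple(int(part) for part in v.strip().lstrip("v").split("."))

def is_vulnerable(installed: str, patched: str) -> bool:
    """True if the installed version is older than the first patched release."""
    return parse_version(installed) < parse_version(patched)

# First patched versions per the CERT-In advisory:
CHROME_PATCHED = "120.0.6099.62"   # Linux/Mac
EDGE_PATCHED = "120.0.2210.61"

print(is_vulnerable("119.0.6045.199", CHROME_PATCHED))  # True: needs update
print(is_vulnerable("120.0.6099.62", CHROME_PATCHED))   # False: already patched
```

Tuple comparison compares each numeric component in order, so "119.0.6045.199" correctly sorts before "120.0.6099.62", which a plain string comparison would not guarantee.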
Cause of the Problem
According to the explanation in the vulnerability note on the CERT-In website, these vulnerabilities are caused by "Use after free in Media Stream, Side Panel Search, and Media Capture; Inappropriate implementation in Autofill and Web Browser UI". The alert further warns that users of the susceptible Microsoft Edge and Google Chrome browsers could be targeted by a remote attacker who exploits these vulnerabilities by sending a specially crafted request. Once the vulnerabilities are successfully exploited, attackers may gain elevated privileges, obtain sensitive data, and run arbitrary code on the targeted system.
High-security issues: consequences
CERT-In has brought attention to vulnerabilities in Google Chrome, Microsoft Edge, and Adobe products that might have serious repercussions and put users and their systems at risk. Vulnerabilities in such widely used software present serious dangers that might result in data breaches, unauthorized code execution, privilege escalation, and remote attacks. If these vulnerabilities are exploited, private information may be compromised, money may be lost, and reputational harm may result.
Additionally, the confidentiality and integrity of sensitive information may be compromised. The danger also includes the potential to interfere with services, cause outages, reduce productivity, and raise the possibility of phishing and social engineering assaults. Users may become less trusting of the impacted software as a result of the urgent requirement for security upgrades, which might make them hesitant to utilize these platforms until guarantees of thorough security procedures are provided.
Advisory
- Users should update their Google Chrome, Microsoft Edge, and Adobe software as soon as possible to protect themselves against the vulnerabilities that have been found. These updates are supplied by the individual software makers. Furthermore, use caution when browsing and refrain from downloading things from unidentified sites or clicking on dubious links.
- Make use of reliable ad-blockers and strong, regularly updated antivirus and anti-malware software. Maintain regular backups of critical data to reduce possible losses in the event of an attack, and keep up with cybersecurity best practices. Staying vigilant and proactive with current security measures can greatly lower the likelihood of falling victim to these vulnerabilities.
Introduction
The Indian government has developed the National Cybersecurity Reference Framework (NCRF) to provide implementable cybersecurity measures based on existing legislation, policies, and guidelines. The National Critical Information Infrastructure Protection Centre is responsible for the framework. The government is expected to recommend that enterprises, particularly those in critical sectors like banking, telecom, and energy, use only security products and services developed in India. The NCRF aims to strengthen cybersecurity while encouraging the use of made-in-India products to safeguard cyber infrastructure, and the Centre is expected to emphasise the significant progress made in developing indigenous cybersecurity products and solutions.
National Cybersecurity Reference Framework (NCRF)
The Indian government has developed the National Cybersecurity Reference Framework (NCRF), a guideline that sets the standard for cybersecurity in India. The framework focuses on critical sectors and provides guidelines to help organisations develop strong cybersecurity systems. It can serve as a template for critical sector entities to develop their own governance and management systems. The government has identified telecom, power, transportation, finance, strategic entities, government entities, and health as critical sectors.
The NCRF is non-binding in nature, meaning its recommendations are advisory rather than mandatory. It recommends that enterprises allocate at least 10% of their total IT budget towards cybersecurity, with monitoring by top-level management or the board of directors. The framework may also suggest that national nodal agencies evolve platforms and processes for machine-processing data from different sources to ensure proper audits and to rate auditors based on performance.
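The 10% allocation recommendation can be expressed as a simple budget check. The sketch below is illustrative; the figures in the example are hypothetical, and only the 10% share comes from the framework description above.

```python
def meets_ncrf_budget(it_budget: float, cyber_spend: float,
                      min_share: float = 0.10) -> bool:
    """Check whether cybersecurity spend meets the NCRF-recommended
    share (default 10%) of the total IT budget."""
    if it_budget <= 0:
        raise ValueError("IT budget must be positive")
    return cyber_spend / it_budget >= min_share

# Hypothetical example: 100 crore IT budget, 8 crore on cybersecurity.
print(meets_ncrf_budget(100.0, 8.0))   # False: below the 10% recommendation
print(meets_ncrf_budget(100.0, 12.0))  # True: meets the recommendation
```

A board or top-level management reviewing compliance would apply exactly this ratio check, though real budgeting also involves how the spend is distributed across controls.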
Regulators overseeing critical sectors may have greater powers to set rules and define information security requirements to ensure proper audits. Regulators also need an effective Information Security Management System (ISMS) in place to handle the sensitive data and operational deficiencies they encounter in the critical sector. The policy is based on a Common but Differentiated Responsibility (CBDR) approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
India faces a barrage of cybersecurity incidents, such as the high-profile attack on AIIMS Delhi in 2022. Many ministries feel hamstrung by the lack of an overarching cybersecurity framework when formulating sector-specific legislation. In recent years, threat actors backed by nation-states and organised cyber-criminal groups have attempted to target the critical information infrastructure (CII) of the government and enterprises. The current guiding framework on cybersecurity for critical infrastructure in India is the National Cybersecurity Policy of 2013; between 2013 and 2023, however, the threat landscape evolved significantly, and new threats necessitate new strategies.
Significance in the realm of Critical Infrastructure
India faces numerous cybersecurity incidents due to the lack of a comprehensive framework. Critical information infrastructure in sectors such as banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises is the most targeted by threat actors, including nation-states and cybercriminals. By their very nature, these sectors hold sensitive data, making them prime targets for cyber threats and attacks. Cyber-attacks can compromise patient privacy, disrupt services, compromise control systems, pose safety risks, and take down critical services. It is therefore of paramount importance to have the NCRF, which can address these emerging issues by providing sector-specific guidelines.
The Indian government is considering promoting the use of made-in-India products to enhance cyber infrastructure
India is preparing to recommend the use of domestically developed cybersecurity products and services, particularly for critical sectors like banking, telecom, and energy. The initiative aims to enhance national security in response to escalating cybersecurity threats.
Conclusion
Promoting locally made cybersecurity products and services in important industries shows India's commitment to strengthening national security. The National Cybersecurity Reference Framework (NCRF), which outlines duties, responsibilities, and recommendations for organisations and regulators, is a critical step towards the comprehensive cybersecurity policy framework that is the need of the hour. By underscoring made-in-India solutions and the allocation of cybersecurity resources, the government underlines its determination to protect the country's cyber infrastructure in the face of increasing cyber threats and attacks. The NCRF is also expected to help draft sector-specific guidelines on cybersecurity.
References
- https://indianexpress.com/article/business/market/overhaul-of-cybersecurity-framework-to-safeguard-cyber-infra-govt-may-push-use-of-made-in-india-products-9133687/
- https://vajiramandravi.com/upsc-daily-current-affairs/mains-articles/national-cybersecurity-reference-framework-ncrf/
- https://m.toppersnotes.com/current-affairs/blog/to-push-cyber-infra-govt-may-push-use-of-made-in-india-products-DxQP
- https://appkida.in/overhaul-of-cybersecurity-framework-in-2024/