#FactCheck - Manipulated Image Alleging Disrespect Towards PM Circulates Online
Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. However, the original photo shows no such behaviour towards the Prime Minister. The CyberPeace Research Team analysed the image and found that the genuine photo was published in a Hindustan Times article in May 2019, with no rude gesture visible. A comparison of the viral and authentic images clearly shows the manipulation. Further investigation revealed that The Hitavada and ABP Live also published the same image in 2019.
Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.
Fact Check:
Upon receiving the claim, we immediately ran a reverse image search and found a Hindustan Times article carrying a similar photo with no sign of any obscene gesture towards PM Modi.
ABP Live and The Hitavada also have the same image published on their website in May 2019.
Comparing the viral photo with the photo published on official news websites, we found the two images identical except for the derogatory gesture added in the viral version.
This shows that someone took the original image, published in May 2019, edited in a disrespectful hand gesture, and circulated the altered version, which has recently gone viral across social media and has no connection with reality.
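The comparison step described above can be sketched programmatically: given the original photo and the viral copy, a pixel-wise difference localises exactly where an edit was made. The following is a minimal illustration using Pillow, not the tooling actually used in this fact-check:

```python
from PIL import Image, ImageChops

def localize_edits(original: Image.Image, viral: Image.Image):
    """Return the bounding box (left, top, right, bottom) of the region
    where the two images differ, or None if they are pixel-identical."""
    original = original.convert("RGB")
    viral = viral.convert("RGB").resize(original.size)  # align dimensions first
    diff = ImageChops.difference(original, viral)       # per-pixel absolute difference
    return diff.getbbox()                               # bbox of non-zero (changed) pixels
```

In practice, social media recompression introduces small differences across the whole frame, so a real comparison would threshold the difference image before taking the bounding box.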
Conclusion:
In conclusion, the manipulated picture circulating online showing someone making a rude gesture towards Prime Minister Narendra Modi has been debunked by the CyberPeace Research Team. The viral image is an edited version of an original image published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake after fact-checking. There is no official notice or confirmation from BARC on its website or social media handles. Using an AI content detection tool, we discovered that the image was generated by AI. In short, the viral picture does not show an authentic architectural plan for the BARC building.
Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.
Fact Check:
To begin our investigation, we reviewed BARC's official website, checking its tender and NIT notifications for any announcements of new construction or renovation.
We found no information corresponding to the claim.
We then checked BARC's official social media pages on Facebook, Instagram and X for any recent updates about a new building. Again, there was no information about the supposed design. To test whether the viral image could be AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool assessed the image as AI-generated with 100% confidence.
To be sure, we also ran the image through another AI-image detection tool, 'isitai?', which rated it as 98.74% likely to be AI-generated.
Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, checking BARC's official channels and using AI detection tools, indicated that the picture is almost certainly AI-generated rather than an official architectural design. BARC has neither published nor announced any such plan, which makes the claim untrustworthy, as there is no credible source to support it.
Claim: A viral image shared by many social media users shows the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading
On March 2, 2023, the Biden-Harris Administration unveiled the National Cybersecurity Strategy to ensure that all Americans can enjoy the advantages of a secure digital environment. In this pivotal decade, the United States will reimagine cyberspace as a tool to achieve its goals in a way that is consistent with its values. These values include a commitment to economic security and prosperity, respect for human rights and fundamental freedoms, faith in democracy and its institutions, and a commitment to creating a fair and diverse society. This goal cannot be achieved without a dramatic reorganisation of the United States' cyberspace responsibilities, roles, and resources.
Vision and Aim
Today's rapidly evolving threat landscape demands a more deliberate, organised, and well-resourced approach to cyber defence. State and non-state actors alike are launching creative new initiatives to challenge the United States, while new avenues for innovation open up as next-generation technologies reach maturity and digital interdependencies expand. This Strategy therefore lays out a plan to counter these dangers and secure the digital future. Putting it into effect will safeguard investments in infrastructure, clean energy, and the re-shoring of American industry.
The USA will shape its digital environment to be:
- Defensible, where cyber defence is comparatively easier, cheaper, and more effective than attack;
- Resilient, where the impacts of cyberattacks and operator mistakes are neither lasting nor widespread;
- Values-aligned, where our most cherished values shape—and are in turn reinforced by— our digital world.
Already, the National Security Strategy, Executive Order 14028 (Improving the Nation's Cybersecurity), National Security Memorandum 5 (Improving Cybersecurity for Critical Infrastructure Control Systems), M-22-09 (Moving the U.S. Government Toward Zero-Trust Cybersecurity Principles), and National Security Memorandum 10 (Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems) have all been issued to help secure cyberspace and our digital ecosystem. The Strategy builds upon these efforts by acknowledging that the Internet serves not as an end in itself but as a means to an end: the achievement of our highest ideals.
There are five key points that constitute the National Cybersecurity Strategy:
1. Defend Critical Infrastructure
Defend critical infrastructure by, among other things: (i) enacting cybersecurity regulations to secure essential infrastructure; (ii) boosting public-private sector collaboration; (iii) integrating federal cybersecurity centres; (iv) updating federal incident response plans and processes; and (v) modernising federal systems in accordance with zero-trust principles.
2. Disrupt and Dismantle Threat Actors
Disrupt and dismantle threat actors, including by: (i) integrating military, diplomatic, information, financial, intelligence, and law enforcement capabilities; (ii) strengthening public-private sector collaboration; (iii) increasing the speed and scale of intelligence sharing and victim notification; (iv) preventing the abuse of U.S.-based infrastructure; and (v) expanding disruption campaigns and other efforts against ransomware operators.
3. Shape Market Forces to Drive Security and Resilience
The federal government can help shape market forces that drive security and resilience by: (i) supporting legislative efforts to limit organisations' ability to collect, use, transfer, and maintain personal information, with strong protections for sensitive data such as geolocation and health data; (ii) boosting IoT device security via federal research, development, sourcing, and risk-management efforts and IoT security labelling programs; (iii) instituting legislation establishing standards for the security of IoT devices; (iv) strengthening cybersecurity contract standards for government suppliers; (v) studying a federal cyber insurance framework; and (vi) using federal grants and other incentives to invest in securing critical infrastructure.
4. Invest in a Resilient Future
Invest in a resilient future by: (i) securing the Internet's underlying infrastructure; (ii) funding federal cybersecurity R&D in areas such as artificial intelligence, cloud computing, telecommunications, and data analytics used in critical infrastructure; (iii) migrating vulnerable public networks and systems to quantum-resistant cryptography; (iv) investing in hardware and software systems that strengthen resiliency, safety, and security; (v) enhancing and expanding the nation's cyber workforce; and (vi) investing in verifiable, strong digital identity solutions that promote security, interoperability, and accessibility.
5. Forge International Partnerships to Pursue Shared Goals
The United States should work with other countries to advance common interests by: (i) forming international coalitions to counter threats to the digital ecosystem; (ii) expanding U.S. assistance to allies and partners in strengthening cybersecurity; (iii) forming international coalitions to reinforce global norms of responsible state behaviour; and (iv) securing global supply chains for information, communications, and operational technologies.
Conclusion:
The Strategy is the result of months of work by the Office of the National Cyber Director (ONCD), which serves as the primary cybersecurity policy and strategy advisor to President Biden and coordinates cybersecurity engagement with business and international partners. The National Security Council will oversee the Strategy's implementation through ONCD and the Office of Management and Budget.
In conclusion, the Biden administration's National Cybersecurity Strategy lays out an ambitious goal for American cybersecurity to be accomplished by the end of the decade. The administration aims to shift tasks and responsibilities to the organisations best positioned to safeguard systems and software, and to create incentives for long-term investment in cybersecurity to build a more cyber-secure future.
The Strategy cannot be assessed in a vacuum: it is critical to consider previous efforts and acknowledge the work that remains. The implementation specifics for several aspects of the approach are left to a yet-to-be-written plan.
Given these difficulties, it would be easy to voice some pessimism at this stage about the effort that lies ahead. Yet the Biden administration has established a forward-looking vision for cybersecurity, with novel projects that could fundamentally alter how the United States manages and maintains cybersecurity. By outlining this robust plan, the administration has raised the bar in a way that will be difficult for succeeding administrations to abandon. It has also alerted Congress to areas where legislative action will be needed.
References:
- https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/
- https://www.huntonprivacyblog.com/2023/03/02/white-house-releases-national-cybersecurity-strategy/
- https://www.lawfareblog.com/biden-harris-administration-releases-new-national-cybersecurity-strategy
Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain the credibility of content shared widely on social media, it has become important to differentiate between genuine and manipulated or curated material. AI-generated voice clones and videos are proliferating on the Internet and social media, created by sophisticated AI algorithms that manipulate or generate synthetic multimedia content such as audio, video, and images. As a result, it has become increasingly difficult to tell genuine, altered, and fake multimedia content apart. McAfee Corp., a global leader in online protection, has launched an AI-powered deepfake audio detection technology under "Project Mockingbird", intended to safeguard consumers against the surging threat of fabricated or AI-generated audio and voice clones used to dupe people out of money or obtain their personal information without authorisation. McAfee announced Project Mockingbird at the Consumer Electronics Show (CES) 2024.
What is voice cloning?
Voice cloning uses deepfake technology to create a synthetic copy of a person's voice. The cloned audio closely resembles the real voice but is, in actuality, entirely machine-generated.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for all kinds of things, from smart devices to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning to commit cyber fraud. Artificial intelligence can be used to manipulate a person's lip movements so it looks like they are saying something different; it can enable identity fraud by impersonating someone during remote verification with a bank; and it makes traditional hacking more convenient. Cybercriminals' misuse of advanced technologies such as AI has increased the speed and volume of cyberattacks in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere.
- Context-Based Recognition: The model performs contextual assessment, examining audio components within the overall setting in which the audio appears. Evaluating this surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human behaviour. Examining speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human speech. The technology differentiates real voices from AI-synthesised ones by comparing them against an extensive library of genuine human speech features.
- Accuracy Outcomes: McAfee Labs' deepfake voice detection solution, which boasts an impressive 90% success rate, combines behavioural, contextual, and classification models. By examining audio components in the larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system can identify discrepancies that may signal AI-generated audio. Classification models contribute further by matching audio against known human speech characteristics. This comprehensive strategy is essential for accurately recognising and reducing the risks connected to AI-generated audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technique protects against various harmful schemes, such as celebrity voice-cloning scams and misleading content about important subjects.
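McAfee has not published its model internals, but the classification idea described above can be illustrated with a toy sketch: extract simple acoustic features from an audio signal (here, zero-crossing rate and spectral centroid, far cruder than any production feature set) and flag audio whose features deviate strongly from statistics computed over genuine speech samples. Everything below is a hypothetical illustration, not McAfee's actual method:

```python
import numpy as np

def extract_features(signal: np.ndarray, sample_rate: int) -> np.ndarray:
    """Two toy acoustic features: zero-crossing rate and spectral centroid (Hz)."""
    zero_crossings = np.sum(np.abs(np.diff(np.sign(signal)))) / 2.0
    zcr = zero_crossings / len(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([zcr, centroid])

def is_suspicious(features: np.ndarray,
                  genuine_mean: np.ndarray,
                  genuine_std: np.ndarray,
                  threshold: float = 3.0) -> bool:
    """Flag audio whose features lie more than `threshold` standard
    deviations from the genuine-speech feature statistics."""
    z_scores = np.abs((features - genuine_mean) / (genuine_std + 1e-12))
    return bool(np.any(z_scores > threshold))
```

A production system like the one described here would instead use learned classifiers over rich spectral and behavioural features, but the principle is the same: synthetic audio is flagged when its measured characteristics fall outside what genuine human speech exhibits.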
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries of AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to safeguard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724