#FactCheck - Manipulated Image Alleging Disrespect Towards PM Circulates Online
Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. The original photo, however, shows no such behavior towards the Prime Minister. The CyberPeace Research Team analysed the image and found that the genuine photograph was published in a Hindustan Times article in May 2019, where no rude gesture is visible. A comparison of the viral and authentic images clearly shows the manipulation. Further investigation revealed that The Hitavada and ABP Live also published the same image in 2019.

Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.



Fact Check:
Upon receiving the claim, we immediately ran a reverse image search and found a Hindustan Times article where a similar photo was published, with no sign of any obscene gesture directed towards PM Modi.

ABP Live and The Hitavada also have the same image published on their website in May 2019.


Comparing the viral photo with the photo published on official news websites, we found the two nearly identical, except for the derogatory gesture present only in the viral version.
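Image comparisons of this kind are often automated with perceptual hashing, which assigns near-identical images near-identical fingerprints so that a localised edit shows up as a few flipped bits. The sketch below is a minimal, self-contained illustration of the average-hash idea on hypothetical grayscale pixel grids; a real investigation would run a library such as imagehash over the actual image files.

```python
# A minimal sketch of perceptual ("average") hashing, one common way to
# compare a viral image against a suspected original. The 4x4 grayscale
# grids below are hypothetical stand-ins for downscaled thumbnails.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical original thumbnail and an "edited" copy that differs
# only in one small region, mimicking a localised manipulation.
original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
    [30, 30, 220, 220],
    [30, 30, 220, 220],
]
edited = [row[:] for row in original]
edited[0][2] = 210  # the tampered region

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # prints 1: the images are near-duplicates, differing in one region
```

A near-zero distance between a viral image and an archived original, combined with a visual diff of the few changed regions, is strong evidence of targeted editing rather than a different photograph.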

From this, we conclude that someone took the original image, published in May 2019, edited in a disrespectful hand gesture, and circulated the result, which has recently gone viral across social media and has no connection with reality.
Conclusion:
In conclusion, the manipulated picture circulating online showing someone making a rude gesture towards Prime Minister Narendra Modi has been debunked by the CyberPeace Research Team. The viral image is an edited version of an original published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading

THREE CENTRES OF EXCELLENCE IN ARTIFICIAL INTELLIGENCE:
India’s Finance Minister, Mrs. Nirmala Sitharaman, with a vision of ‘Make AI in India’ and ‘Make AI work for India,’ announced during the presentation of the Union Budget 2023 that the Indian Government plans to set up three Centres of Excellence for Artificial Intelligence in top educational institutions to revolutionise fields such as health and agriculture.
The Budget 2023, presented under the vision of ‘Amrit Kaal,’ is a stepping stone by the government towards a technology-driven, knowledge-based economy. The seven priorities set by the government, called ‘Saptarishi’ (inclusive development, reaching the last mile, infrastructure and investment, unleashing the potential, green growth, youth power, and the financial sector), will guide the nation in this endeavour, along with leading industry players that will partner in conducting interdisciplinary research and developing cutting-edge applications and scalable solutions in these areas.
The government has already laid out a roadmap for AI in the nation through MeitY, NASSCOM, and DRDO, indicating that the AI revolution is underway. The Centre for Artificial Intelligence and Robotics (CAIR) has already been formed for AI-related research and development, and AI applications such as biometric identification, facial recognition, criminal investigation, crowd and traffic management, agriculture, healthcare, and education are already in use.
A task force on artificial intelligence (AI) was established as early as August 24, 2017. The government had promised to set up Centres of Excellence (CoEs) for research, education, and skill development in robotics, artificial intelligence, digital manufacturing, big data analytics, quantum communication, and the Internet of Things (IoT), and by announcing them in the current Union Budget it has moved to fulfil that promise.
The government has also announced the development of 100 labs in engineering institutions for developing applications using 5G services that will collaborate with various authorities, regulators, banks, and other businesses.
Developing such labs aims to create new business models and employment opportunities. Among other applications, they will support smart classrooms, precision farming, intelligent transport systems, and healthcare. New pedagogy, curricula, continual professional development, dipstick surveys, and ICT implementation will also be introduced for training teachers.
POSSIBLE ROLES OF AI:
The use of AI in top educational institutions will help students learn at their own pace, with AI algorithms providing customised feedback and recommendations based on their performance. It can also help students identify their strengths and weaknesses, allowing them to focus their study efforts more effectively and efficiently, and will train students in AI, making the country future-ready.
The main area of AI in healthcare, agriculture, and sustainable cities would be researching and developing practical AI applications in these sectors. In healthcare, AI can be effective by helping medical professionals diagnose diseases faster and more accurately by analysing medical images and patient data. It can also be used to identify the most effective treatments for specific patients based on their genetic and medical history.
Artificial Intelligence (AI) has the potential to revolutionise the agriculture industry by improving yields, reducing costs, and increasing efficiency. AI algorithms can collect and analyse data on soil moisture, crop health, and weather patterns to optimise crop management practices, improve yields and the health and well-being of livestock, predict potential health issues, and increase productivity. These algorithms can identify and target weeds and pests, reducing the need for harmful chemicals and increasing sustainability.
ROLE OF AI IN CYBERSPACE:
Artificial Intelligence (AI) plays a crucial role in cyberspace. AI technology can enhance security in cyberspace, prevent cyber-attacks, detect and respond to security threats, and improve overall cybersecurity. Some of the specific applications of AI in cyberspace include:
- Intrusion Detection: AI-powered systems can analyse large amounts of data and detect signs of potential cyber-attacks.
- Threat Analysis: AI algorithms can help identify patterns of behaviour that may indicate a potential threat and then take appropriate action.
- Fraud Detection: AI can identify and prevent fraudulent activities, such as identity theft and phishing, by analysing large amounts of data and detecting unusual behaviour patterns.
- Network Security: AI can monitor and secure networks against potential cyber-attacks by detecting and blocking malicious traffic.
- Data Security: AI can be used to protect sensitive data and ensure that it is only accessible to authorised personnel.
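The detection ideas above can be illustrated with a toy statistical baseline: flag any host whose current request rate deviates sharply from historical traffic. This is a hedged, minimal sketch of anomaly detection, not any specific product or the systems described above; the traffic figures and host addresses are hypothetical.

```python
import statistics

def find_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current request count is more than `threshold`
    standard deviations above the historical baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = {}
    for host, count in current.items():
        z = (count - mean) / stdev
        if z > threshold:
            flagged[host] = round(z, 1)
    return flagged

# Hypothetical requests-per-minute history for normal traffic...
baseline = [95, 102, 98, 110, 90, 105, 99, 101]
# ...and a snapshot of current per-host request rates.
current = {"10.0.0.5": 104, "10.0.0.9": 97, "10.0.0.23": 850}

print(find_anomalies(baseline, current))  # only 10.0.0.23 is flagged
```

Production intrusion-detection systems layer far richer features (packet contents, session behaviour, learned models) on top of this idea, but the core step of scoring activity against a learned notion of "normal" is the same.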
CONCLUSION:
Introducing AI in top educational institutions and partnering with leading industries will prove to be a stepping stone in revolutionising the country’s development, as Artificial Intelligence has the potential to play a significant role in national development by improving various sectors and addressing societal challenges. Overall, we hope to see increased efficiency and productivity across industries, leading to economic growth and job creation; improved delivery of healthcare services through greater access to care and better patient outcomes; and more accessible and effective education. However, it is important to ensure that AI is developed and used ethically, considering its potential consequences and impact on society.

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand: a promising technological advancement with the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilised to influence electoral outcomes. Yet despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. Meanwhile, the stakes keep growing: the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards to manage the possible threats posed by manipulative chatbots in the context of the 2024 elections.
First and foremost, candour and transparency are essential. Chatbots, particularly those employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Transparency guarantees that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express consent before collecting personal information, and must not sell or share it for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, as doing so can result in manipulation and misinformation.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is constantly working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

About Customs Scam:
The Customs Scam is a type of fraud in which scammers pretend to be from a well-known courier company (such as DTDC), the customs department, or another government entity. They try to deceive targets into transferring money to resolve fake customs-related concerns. The Research Wing at CyberPeace, along with the Research Wing of Autobot Infosec Private Ltd., investigated this scam through open-source intelligence methods and undercover interactions with the scammers, and arrived at some credible findings.
Case Study:
The victim receives a phone call from someone posing as an employee of a well-known courier company (such as DTDC) or, in some cases, as a customs officer, claiming that a parcel in the victim's name has been taken into custody because of inappropriate content. The scammer provides the victim with an employee ID and an FIR number to prove the authenticity of the case, and shows empathy towards the victim. The scammer then pretends to help the victim connect with a police officer for further action. This so-called police officer feigns transparency in his work: he asks the victim to join a Skype video call, even allowing time to install the Skype app, and instructs the victim to connect to a Skype ID where the scammers have created a fake police-station environment. He then claims to have contacted headquarters and found the victim's phone number associated with many illegal activities, to create panic. The scammers also ask the victim for personal details such as home address, office address, Aadhaar number, PAN card number, and screenshots of their bank accounts along with the available balance, for the sake of the so-called investigation. Sometimes the scammers demand a large sum of money to resolve the issue, creating fake urgency to pressure the victim into paying. They sternly warn the victim not to contact any other police officials or professionals, making it clear that doing so would only lead to more trouble.
Analysis & Findings:
After receiving complaints of this kind from multiple sources, we analysed the phone numbers from which the calls originated for alias names, location, telecom operator, and so on. We also checked whether each number was linked to any social media account on reputed platforms such as Google, Facebook, WhatsApp, Twitter, Instagram, and LinkedIn, or on classified platforms such as Locanto.
- Phone Number Analysis: Each phone number looks authentic, cleverly concealing the fraud. Scammers sometimes use virtual or temporary phone numbers for these scams. In this case the victim was from Delhi, so the scammers posed as officers from a Delhi police station, while the phone numbers belonged to a different place.
- Undercover Interactions: Interactions with the suspects reveal their chilling modus operandi. These scammers are masters of psychological manipulation: they threaten victims and act as if they were genuine law-enforcement officers.
- Exploitation Tactics: They target unsuspecting individuals and create fear and fake urgency among the targets to extract sensitive information such as Aadhaar, PAN card and bank account details.
- Fraud Execution: The scammers demand payment to resolve the issue and make use of the stolen personally identifiable information. Once the victims transfer the money, the fraudsters cut off all communication.
- Outcome for Victims: The scammers act so convincingly, and frame the incidents so realistically, that victims do not realise they are trapped in a scam. They suffer severe financial loss and psychological trauma.
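One concrete check implied by the phone-number analysis above can be sketched in code: compare the telecom circle that a number's prefix maps to against the caller's claimed location. The prefix-to-circle mapping below is a hypothetical stand-in for illustration only; a real analysis would query an authoritative numbering-plan or operator database.

```python
# Hypothetical mapping from mobile-number prefixes to telecom circles;
# real lookups would use an authoritative numbering-plan database.
PREFIX_TO_CIRCLE = {
    "98110": "Delhi",
    "98220": "Maharashtra",
    "98400": "Tamil Nadu",
}

def claimed_location_matches(number, claimed_circle):
    """Return True only if the number's prefix maps to the claimed circle.
    Unknown prefixes are treated as a mismatch worth investigating."""
    circle = PREFIX_TO_CIRCLE.get(number[:5])
    return circle == claimed_circle

# A caller claiming to be from a Delhi police station, on a non-Delhi prefix:
print(claimed_location_matches("9822012345", "Delhi"))  # prints False, a red flag
```

A mismatch does not prove fraud on its own (numbers are portable and spoofable), but, as in the Delhi case described above, it is a cheap first signal that a caller's claimed identity and their number do not line up.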
Recommendations:
- Verify Identities: It is important to verify the identity of any individual, especially if they demand personal information or payment. Contact the official agency directly using verified contact details to confirm the authenticity of the communication.
- Education on Personal Information: Provide education to people to protect their personal identity numbers like Aadhaar and PAN card number. Always emphasise the possible dangers connected to sharing such data in the course of phone conversations.
- Report Suspicious Activity: Promptly reporting suspicious phone calls or messages to the relevant authorities and consumer protection agencies helps track down scammers and prevents others from falling victim. Report incidents at https://cybercrime.gov.in or reach out to helpline@cyberpeace.net for further assistance.
- Enhanced Cybersecurity Measures: Implement robust cybersecurity measures to detect and mitigate phishing attempts and fraudulent activities. This includes monitoring and blocking suspicious phone numbers and IP addresses associated with scams.
Conclusion:
In the Customs Scam, fraudsters pretend to be customs or other government officials and threaten targets to obtain details such as Aadhaar and PAN card numbers and screenshots of bank accounts with their available balance. The phone numbers used in these scams were analysed for suspicious activity, and all were found to look authentic while concealing fraudulent activity. Interactions with the scammers reveal that they create fear and urgency in their targets, act as if they were genuine officers, and demand money to resolve the fabricated issue. It is important to stay vigilant and never share personal or financial information. When facing such scams, report them and spread awareness among others.