#FactCheck - Viral Images of Indian Army Eating Near Border Area Revealed as AI-Generated Fabrication
Executive Summary:
Viral social media posts circulating several photos of Indian Army soldiers eating lunch in extremely hot weather near the border area in Barmer/Jaisalmer, Rajasthan, have been found to be AI-generated and therefore false. The images contain various anomalies, such as missing shadows, distorted hand positioning, and misrepresentation of the Indian flag and of the soldiers' body features. Several AI-detection tools were also used to validate this finding. Before sharing any pictures on social media, it is necessary to verify their authenticity to avoid spreading misinformation.

Claims:
Photographs of Indian Army soldiers having their lunch in extremely high temperatures near the border area in the Barmer/Jaisalmer district of Rajasthan have been circulated on social media.

Fact Check:
Upon studying the given images, it can be observed that they share several anomalies commonly found in AI-generated images: inaccurate body features on the soldiers, a national flag with the wrong combination of colours, an unusually sized spoon, and the absence of the soldiers' shadows.

Additionally, the flag on the Indian soldiers' shoulders appears incorrect and does not follow the traditional tricolour pattern. Another anomaly, a soldier with three arms, strengthens the conclusion that the image is AI-generated.
Furthermore, we ran the images through the Hive AI image-detection tool, which indicated that each photo was generated using an artificial intelligence algorithm.

We also checked with another AI image-detection tool named Isitai, which likewise classified the images as AI-generated.

After thorough analysis, it was found that the claim made in each of the viral posts is misleading and fake: the recent viral images of Indian Army soldiers eating food at the border on an extremely hot afternoon in Barmer were generated using an AI image-creation tool.
Conclusion:
In conclusion, the analysis of the viral photographs claiming to show Indian Army soldiers having their lunch in scorching heat in Barmer, Rajasthan, reveals many anomalies consistent with AI-generated images. The absence of shadows, distorted hand placement, irregular rendering of the Indian flag, and the presence of an extra arm on a soldier all point to the images being artificially created. Therefore, the claim that these images capture real-life events is debunked, emphasising the importance of analysing and fact-checking content before sharing it in an era of widespread digital misinformation.
- Claim: The photos show Indian Army soldiers having their lunch in extreme heat near the border area in Barmer/Jaisalmer, Rajasthan.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In today's era of digitalised communities and connections, social media has become an integral part of our lives. A large number of teenagers are active and have accounts on social media, which they use to connect with their friends and family. Social media makes it easy to connect and communicate with larger communities and even showcase one's creativity. On the other hand, it also poses challenges such as inappropriate content, online harassment, online stalking, misuse of personal information, and abusive or disheartening content. Such threats, or simple overuse of social media, can have unintended consequences for teenagers' mental health. Data shows that some teens spend hours a day on social media, so it has a large impact on them whether we notice it or not. Social media addiction and its negative repercussions, including overuse by teens and exposure to online threats and vulnerabilities, are growing concerns that need to be taken seriously by social media platforms, by regulators, and by users themselves. Recently, Colorado and California led a joint lawsuit filed by 33 states in the U.S. District Court for the Northern District of California against Meta over concerns about child safety.
Meta and the concern for child users' safety
Recently, Meta, the company that owns Facebook, Instagram, WhatsApp, and Messenger, was sued by more than three dozen states for allegedly using features to hook children to its platforms. The lawsuit claims that Meta violated consumer protection laws and deceived users about the safety of its platforms. The states accuse Meta of designing manipulative features to induce young users' compulsive and extended use, pushing them into harmful content. Meta has responded by stating that it is working to provide a safer environment for teenagers and expressing disappointment in the lawsuit.
According to the complaint filed by the states, Meta “designed psychologically manipulative product features to induce young users’ compulsive and extended use" of platforms like Instagram. The states allege that Meta's algorithms were designed to push children and teenagers into rabbit holes of toxic and harmful content, with features like "infinite scroll" and persistent alerts used to hook young users. Meta responded with disappointment in the lawsuit, stating that it is working productively with companies across the industry to create clear, age-appropriate standards for the many apps teenagers use.
Unplug for some time
Overuse of social media is associated with increased mental health repercussions, along with online threats and risks. Social media's effect on teenagers is driven by factors such as inadequate sleep, exposure to cyberbullying and online threats, and lack of physical activity. Admittedly, social media can help teens feel more connected to their friends and support systems and let them showcase their creativity to the online world. However, overuse of social media by teens is often linked with underlying issues that require attention. To help teenagers, encourage them to use social media responsibly and to unplug from it for some time: encourage them to get outside in nature, do physical activities, and express themselves creatively.
Understanding the threats & risks
- Psychological effects
- Addiction: Excessive use of social media can lead to procrastination, and it can also lead to physical and psychological addiction because it triggers the brain's reward system.
- Associated mental conditions: Excessive use of social media can harm mental well-being and may lead to depression, anxiety, self-consciousness, and even social anxiety disorder.
- Eye strain and carpal tunnel syndrome: Excessive screen time can put a real strain on your eyes; eye problems caused by computer or phone screen use fall under computer vision syndrome (CVS). Carpal tunnel syndrome, caused by pressure on the median nerve, can result from prolonged device use.
- Cyberbullying: Cyberbullying is one of the major concerns in online interactions on social media. It involves using the internet or other digital communication technology to bully, harass, or intimidate others, and may include spreading rumours or posting hurtful comments. Cyberbullying has emerged as a phenomenon with a socio-psychological impact on its victims.
- Online grooming: Online grooming refers to the tactics abusers deploy through the internet to sexually exploit children. The average time for a bad actor to lure a child into their trap is reportedly just three minutes, an alarming figure.
- Ransomware/malware/spyware: Cybercriminals spread threats such as ransomware, malware, and spyware by posting malicious links on social media, with serious consequences such as financial loss, data loss, and reputational damage. Ransomware is a type of malware designed to deny a user or organisation access to the files on their computer. Hence it is important to be cautious before clicking on any suspicious link.
- Sextortion: Sextortion is a crime in which the perpetrator threatens to expose or reveal the victim's sexual activity and demands a ransom or sexual favours. It is a form of sexual blackmail that may take place on social media, and youngsters are the most common targets. Cybercriminals also misuse advanced AI deepfake technology, which can create realistic images or videos that are in fact produced by machine algorithms. Because deepfake technology is easily accessible, fraudsters misuse it to commit various crimes, including sextortion, and to deceive or scam people with fake but realistic-looking images or videos.
- Child sexual abuse material (CSAM): CSAM is illicit content prohibited by law and regulatory guidelines. A child using the internet may also encounter age-restricted or inappropriate content that can be harmful to them. Under regulatory guidelines, internet service providers are required to refrain from hosting CSAM on websites and to block such inappropriate content.
- In-app purchases: Teen users also make in-app purchases on social media or in online games, where they might fall victim to financial fraud or easy-money scams. Fraudsters target victims with enticing offers such as part-time jobs, work-from-home jobs, small investments, or earning money by liking content on social media. This is prevalent on social media: fraudsters target innocent people, ask for their personal and financial information, and commit financial fraud on the pretext of exciting offers.
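The caution about malicious links above can be made concrete. Below is a minimal, illustrative heuristic check for suspicious URLs; the specific heuristics, domains, and thresholds are our own assumptions for the sketch, not an authoritative filter, and real link scanning relies on reputation services and up-to-date blocklists.

```python
from urllib.parse import urlparse

# Illustrative example sets only -- not real blocklists.
SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}
URL_SHORTENERS = {"bit.ly", "tinyurl.com"}

def looks_suspicious(url: str) -> bool:
    """Crude heuristic: flag URLs worth extra caution before clicking."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":        # unencrypted links deserve caution
        return True
    if host in URL_SHORTENERS:          # shorteners hide the real destination
        return True
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        return True
    if host.count(".") >= 4:            # deeply nested subdomains often mimic brands
        return True
    return False

print(looks_suspicious("http://bit.ly/free-money"))     # True
print(looks_suspicious("https://example.com/article"))  # False
```

A rule-based check like this can only catch the most obvious cases, which is why the advice in this post emphasises user caution rather than relying on automated filtering alone.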
Safety tips:
To stay safe while using social media, teens and other users are encouraged to stay aware of online threats and follow best practices such as:
- Practise safe web browsing.
- Utilise the privacy settings of your social media accounts.
- Use strong passwords and enable two-factor authentication.
- Be careful about what you post or share.
- Familiarise yourself with the privacy policies of social media platforms.
- Be selective about adding unknown users to your social media network.
- Report any suspicious activity to the platform or a relevant forum.
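The "strong passwords" advice above can be illustrated with a minimal strength check. The rules below (minimum length plus character variety) are common guidelines rather than an official standard, and real systems should additionally check against breached-password lists:

```python
import string

def is_strong_password(pw: str) -> bool:
    """Minimal heuristic: at least 12 characters with lowercase,
    uppercase, digit, and symbol variety. Illustrative only."""
    if len(pw) < 12:
        return False
    has_lower = any(c.islower() for c in pw)
    has_upper = any(c.isupper() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_symbol = any(c in string.punctuation for c in pw)
    return all([has_lower, has_upper, has_digit, has_symbol])

print(is_strong_password("sunshine"))                # False: short, no variety
print(is_strong_password("C0rrect-Horse-Battery!"))  # True
```

Even a strong password should be paired with two-factor authentication, as the tip above notes, since a leaked password alone then no longer grants access.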
Conclusion:
Child safety is a major concern on social media platforms. Social media-related offences such as cyberstalking, hacking, online harassment and threats, sextortion, and financial fraud are among the most common cybercrimes on these platforms. The tech giants must ensure the safety of teen users by implementing and adopting the best protective mechanisms on their platforms. CyberPeace Foundation is advocating for a child-friendly SIM to protect children from the illicit influence of the internet and social media.
References:
- https://www.scientificamerican.com/article/heres-why-states-are-suing-meta-for-hurting-teens-with-facebook-and-instagram/
- https://www.nytimes.com/2023/10/24/technology/states-lawsuit-children-instagram-facebook.html

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry and affects everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation which may harm reputations, disrupt businesses, and reduce consumer trust. MSMEs are particularly susceptible since they have minimal resources at their disposal and cannot afford to invest in the kind of talent, technology and training that is needed for a business to be able to protect itself in today’s digital-first ecosystem. Mis/disinformation for MSMEs can arise from internal communications, supply chain partners, social media, competitors, etc. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage, which can include financial loss, reputational damage, operational disruption, and regulatory non-compliance. Various assessment methodologies can be used to analyse the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship programme of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a brand-new law aimed at protecting personal data. Fact-check units (FCUs) are government and independent private bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific and lack detailed guidelines; their awareness initiatives on misinformation have had limited impact, and there is an insufficient support structure for MSMEs to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation for MSMEs, recommendations include creating a dedicated Misinformation Helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations for the identification and curbing of misinformation.
Organisational recommendations include establishing information-verification protocols so that consumers of information verify critical information before acting on it, conducting regular employee training on identifying and managing misinformation, creating a crisis management plan to deal with misinformation crises, and forming collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions such as AI and ML tools for detecting and flagging potential misinformation, alongside fact-checking tools, and adopt cybersecurity measures to prevent misinformation spreading via digital channels.
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and their gaps, and providing actionable recommendations. The implementation strategy for policies to counter misinformation in the MSME sector can begin with pilot programmes in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continuous improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx

THREE CENTRES OF EXCELLENCE IN ARTIFICIAL INTELLIGENCE:
India’s Finance Minister, Mrs. Nirmala Sitharaman, with a vision of ‘Make AI for India’ and ‘Make AI work for India’, announced during the presentation of the Union Budget 2023 that the Indian Government plans to set up three ‘Centres of Excellence’ for Artificial Intelligence in top educational institutions to revolutionise fields such as health and agriculture.
Under ‘Amrit Kaal’, the budget of 2023 is a stepping stone by the government towards a technology-driven, knowledge-based economy. The seven priorities set by the government, called ‘Saptarishi’ (inclusive development, reaching the last mile, infrastructure and investment, unleashing potential, green growth, youth power, and the financial sector), will guide the nation in this endeavour, with leading industry players partnering to conduct interdisciplinary research and develop cutting-edge applications and scalable problem solutions in these areas.
The government has already formed a roadmap for AI in the nation through MeitY, NASSCOM, and DRDO, indicating that the AI revolution is already underway. For AI-related research and development, the Centre for Artificial Intelligence and Robotics (CAIR) has already been established, and AI applications such as biometric identification, facial recognition, criminal investigation, crowd and traffic management, agriculture, healthcare, and education are currently in use.
A task force on artificial intelligence (AI) was also established on August 24, 2017. The government had promised to set up Centres of Excellence (CoEs) for research, education, and skill development in robotics, AI, digital manufacturing, big data analytics, quantum communication, and the Internet of Things (IoT), and by announcing them in the current Union Budget it plans to fulfil that promise.
The government has also announced the development of 100 labs in engineering institutions for developing applications using 5G services that will collaborate with various authorities, regulators, banks, and other businesses.
Developing such labs aims to create new business models and employment opportunities. Among other things, they will enable smart classrooms, precision farming, intelligent transport systems, and healthcare applications; new pedagogy, curricula, continual professional development, dipstick surveys, and ICT implementation will also be introduced for teacher training.
POSSIBLE ROLES OF AI:
The use of AI in top educational institutions will help students learn at their own pace, with AI algorithms providing customised feedback and recommendations based on their performance. It can also help students identify their strengths and weaknesses, allowing them to focus their study efforts more effectively and efficiently, and will help train students in AI, making the country future-ready.
In healthcare, agriculture, and sustainable cities, the main focus would be researching and developing practical AI applications. In healthcare, AI can help medical professionals diagnose diseases faster and more accurately by analysing medical images and patient data. It can also be used to identify the most effective treatments for specific patients based on their genetic and medical history.
Artificial Intelligence (AI) has the potential to revolutionise the agriculture industry by improving yields, reducing costs, and increasing efficiency. AI algorithms can collect and analyse data on soil moisture, crop health, and weather patterns to optimise crop management practices and improve yields; they can also monitor the health and well-being of livestock, predict potential health issues, and increase productivity. Such algorithms can identify and target weeds and pests, reducing the need for harmful chemicals and increasing sustainability.
ROLE OF AI IN CYBERSPACE:
Artificial Intelligence (AI) plays a crucial role in cyberspace. AI technology can enhance security in cyberspace, prevent cyber-attacks, detect and respond to security threats, and improve overall cybersecurity. Some of the specific applications of AI in cyberspace include:
- Intrusion Detection: AI-powered systems can analyse large amounts of data and detect signs of potential cyber-attacks.
- Threat Analysis: AI algorithms can help identify patterns of behaviour that may indicate a potential threat and then take appropriate action.
- Fraud Detection: AI can identify and prevent fraudulent activities, such as identity theft and phishing, by analysing large amounts of data and detecting unusual behaviour patterns.
- Network Security: AI can monitor and secure networks against potential cyber-attacks by detecting and blocking malicious traffic.
- Data Security: AI can be used to protect sensitive data and ensure that it is only accessible to authorised personnel.
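The common thread behind several of the applications above (intrusion detection, threat analysis, fraud detection) is flagging behaviour that deviates from a learned baseline. A toy version of that idea can be sketched with a simple statistical rule; production systems use trained models, but the principle is the same. All names, data, and thresholds here are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the mean -- a crude stand-in for learned anomaly detection."""
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# e.g. requests per minute from one client; the spike stands out
traffic = [120, 115, 130, 125, 118, 122, 5000, 119, 124]
print(flag_anomalies(traffic, threshold=2.0))  # [6]
```

Real intrusion- and fraud-detection systems replace this fixed threshold with models that learn normal patterns across many features, but the output is analogous: a ranked set of events flagged for human review.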
CONCLUSION:
Introducing AI in top educational institutions and partnering with leading industries will prove a stepping stone in the country's development, as Artificial Intelligence (AI) can play a significant role by improving various sectors and addressing societal challenges. Overall, we can hope for increased efficiency and productivity across industries, leading to economic growth and job creation; improved delivery of healthcare services through greater access to care and better patient outcomes; and more accessible and effective education. In these ways, AI can improve various sectors of a country and contribute to its overall development and progress. However, it is important to ensure that AI is developed and used ethically, considering its potential consequences and impact on society.