#FactCheck - Viral Images of Indian Army Soldiers Eating Near the Border Area Revealed as AI-Generated Fabrication
Executive Summary:
Viral social media posts circulating several photos of Indian Army soldiers eating lunch in extremely hot weather near the border area in Barmer/Jaisalmer, Rajasthan, have been found to be AI-generated and false. The images contain tell-tale faults such as missing shadows, distorted hand positioning, a misrendered Indian flag and inaccurate body features on the soldiers. AI-detection tools were also used to confirm this finding. Before sharing any pictures on social media, it is necessary to verify their authenticity to avoid spreading misinformation.




Claims:
Photographs of Indian Army soldiers having their lunch in extremely high temperatures at the border area near the Barmer/Jaisalmer districts of Rajasthan have been circulated on social media.




Fact Check:
On examining the images, it can be observed that they share several anomalies typically found in AI-generated images: inaccurate body features of the soldiers, a national flag with the wrong combination of colors, an unusually sized spoon, and the absence of the soldiers’ shadows.




Additionally, the flag on the Indian soldiers’ shoulders appears incorrect, as it does not follow the traditional tricolor pattern. Another anomaly, a soldier depicted with three arms, strengthens the conclusion that the images are AI-generated.
Furthermore, we used the Hive AI image detection tool, which found that each photo was generated using an Artificial Intelligence algorithm.


We also checked the images with another AI image detection tool named Isitai, which likewise found them to be AI-generated.


After thorough analysis, it was found that the claim made in each of the viral posts is misleading and fake: the recent viral images of Indian Army soldiers eating food at the border on an extremely hot afternoon in Barmer were generated using an AI image creation tool.
Conclusion:
In conclusion, analysis of the viral photographs claiming to show Indian Army soldiers having their lunch in the scorching heat of Barmer, Rajasthan, reveals many anomalies consistent with AI-generated images. The absence of shadows, distorted hand placement, the incorrect rendering of the Indian flag, and the presence of an extra arm on a soldier all indicate that the images are artificially created. Therefore, the claim that these images capture real-life events is debunked, underlining the importance of analysing and fact-checking content before sharing it in an era of widespread digital misinformation.
- Claim: The photo shows Indian army soldiers having their lunch in extreme heat near the border area in Barmer/Jaisalmer, Rajasthan.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook
- Fact Check: Fake & Misleading
Related Blogs

THREE CENTRES OF EXCELLENCE IN ARTIFICIAL INTELLIGENCE:
India’s Finance Minister, Mrs. Nirmala Sitharaman, with a vision of ‘Make AI for India’ and ‘Make AI work for India,’ announced during the presentation of the Union Budget 2023 that the Indian Government is planning to set up three ‘Centres of Excellence’ for Artificial Intelligence in top educational institutions to revolutionise fields such as health and agriculture.
Under ‘Amrit Kaal,’ the 2023 budget is a stepping stone by the government towards a technology-driven, knowledge-based economy. The seven priorities set by the government, called ‘Saptarishi’ (inclusive development, reaching the last mile, infrastructure investment, unleashing potential, green growth, youth power, and the financial sector), will guide the nation in this endeavor, along with leading industry players that will partner in conducting interdisciplinary research and developing cutting-edge applications and scalable solutions to problems in these areas.
The government has already formed the roadmap for AI in the nation through MeitY, NASSCOM, and DRDO, indicating that this AI revolution is already underway. The Centre for Artificial Intelligence and Robotics (CAIR) has already been established for AI-related research and development, and AI applications such as biometric identification, facial recognition, criminal investigation, crowd and traffic management, agriculture, healthcare, and education are currently in use.
A task force on artificial intelligence (AI) was even established on August 24, 2017. The government had promised to set up Centers of Excellence (CoEs) for research, education, and skill development in robotics, artificial intelligence (AI), digital manufacturing, big data analytics, quantum communication, and the Internet of Things (IoT), and by announcing them in the current Union Budget it has moved to fulfill that promise.
The government has also announced the development of 100 labs in engineering institutions for developing applications using 5G services that will collaborate with various authorities, regulators, banks, and other businesses.
Developing such labs aims to create new business models and employment opportunities. Among other things, they will support smart classrooms, precision farming, intelligent transport systems, and healthcare applications; new pedagogy, curricula, a continual professional development dipstick survey, and ICT implementation will also be introduced to train teachers.
POSSIBLE ROLES OF AI:
The use of AI in top educational institutions will help students learn at their own pace, with AI algorithms providing customised feedback and recommendations based on their performance. It can also help students identify their strengths and weaknesses, allowing them to focus their study efforts more effectively and efficiently, and it will help train students in AI and make the country future-ready.
The main role of AI in healthcare, agriculture, and sustainable cities would be in researching and developing practical AI applications for these sectors. In healthcare, AI can help medical professionals diagnose diseases faster and more accurately by analysing medical images and patient data. It can also be used to identify the most effective treatments for specific patients based on their genetic and medical history.
Artificial Intelligence (AI) has the potential to revolutionise the agriculture industry by improving yields, reducing costs, and increasing efficiency. AI algorithms can collect and analyse data on soil moisture, crop health, and weather patterns to optimise crop management practices, improve yields and the health and well-being of livestock, predict potential health issues, and increase productivity. These algorithms can identify and target weeds and pests, reducing the need for harmful chemicals and increasing sustainability.
ROLE OF AI IN CYBERSPACE:
Artificial Intelligence (AI) plays a crucial role in cyberspace. AI technology can enhance security in cyberspace, prevent cyber-attacks, detect and respond to security threats, and improve overall cybersecurity. Some of the specific applications of AI in cyberspace include:
- Intrusion Detection: AI-powered systems can analyse large amounts of data and detect signs of potential cyber-attacks (a brief illustrative sketch follows this list).
- Threat Analysis: AI algorithms can help identify patterns of behaviour that may indicate a potential threat and then take appropriate action.
- Fraud Detection: AI can identify and prevent fraudulent activities, such as identity theft and phishing, by analysing large amounts of data and detecting unusual behaviour patterns.
- Network Security: AI can monitor and secure networks against potential cyber-attacks by detecting and blocking malicious traffic.
- Data Security: AI can be used to protect sensitive data and ensure that it is only accessible to authorised personnel.
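To make the intrusion-detection idea above more concrete, here is a minimal sketch of anomaly-based detection using scikit-learn's IsolationForest. The feature set (packets per second, bytes per second, connection duration), the synthetic training data, and the thresholds are assumptions made purely for illustration; this is a sketch of the general technique, not a description of any specific deployed system.

```python
# Minimal illustrative sketch: anomaly-based intrusion detection on
# hypothetical network-flow features. Not a production IDS.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed feature order: [packets_per_sec, bytes_per_sec, duration_sec]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[50, 4000, 2.0],
                            scale=[10, 800, 0.5],
                            size=(1000, 3))

# Train on traffic assumed to be benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new flows; the second one is deliberately extreme.
new_flows = np.array([
    [48, 3900, 2.1],      # looks like normal traffic
    [5000, 900000, 0.1],  # burst typical of a flood-style attack
])
labels = model.predict(new_flows)  # +1 = normal, -1 = anomaly
for flow, label in zip(new_flows, labels):
    print(flow, "ANOMALY" if label == -1 else "ok")
```

In practice such a model would be trained on far richer flow features and paired with rule-based detection, but the core idea of learning what normal traffic looks like and flagging deviations stays the same.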
CONCLUSION:
Introducing AI in top educational institutions and partnering them with leading industries will prove to be a stepping stone in revolutionising the country's development, as Artificial Intelligence (AI) has the potential to play a significant role in a country's growth by improving various sectors and addressing societal challenges. Overall, we hope to see an increase in efficiency and productivity across various industries, leading to greater economic growth and job creation; improved delivery of healthcare services through increased access to care and better patient outcomes; and education that is more accessible and effective. In these ways, AI can improve various sectors of a country and contribute to its overall development and progress. However, it is important to ensure that AI is developed and used ethically, considering its potential consequences and impact on society.
Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This action places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking of all deep synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt stringent watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These markers can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that remain intact under standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards enable users to track the source and provenance of an item of content.
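To illustrate what an imperceptible marker looks like in its simplest form, the sketch below hides a short text payload in the least-significant bits of an image's pixels using Pillow and NumPy. This is a simplified teaching example under stated assumptions: it is not how SynthID or C2PA actually work, the payload string and file names are invented for the example, and unlike SynthID such a naive mark does not survive compression or resizing.

```python
# Minimal LSB watermarking sketch (illustrative only; NOT SynthID or C2PA,
# and the mark will not survive lossy compression or resizing).
import numpy as np
from PIL import Image

PAYLOAD = "AI-GENERATED"  # assumed marker text

def embed(in_path: str, out_path: str) -> None:
    img = np.array(Image.open(in_path).convert("RGB"))
    # Payload bits plus a zero byte as terminator.
    bits = "".join(f"{b:08b}" for b in PAYLOAD.encode()) + "00000000"
    flat = img.reshape(-1)
    if len(bits) > flat.size:
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite lowest bit
    # Save losslessly (e.g. PNG) so the low bits are preserved.
    Image.fromarray(flat.reshape(img.shape)).save(out_path)

def extract(path: str) -> str:
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    out = bytearray()
    for i in range(0, flat.size - 7, 8):
        byte = int("".join(str(v & 1) for v in flat[i:i + 8]), 2)
        if byte == 0:  # terminator reached
            break
        out.append(byte)
    return out.decode(errors="replace")

# Example usage (paths are placeholders):
# embed("original.png", "marked.png")
# print(extract("marked.png"))  # -> "AI-GENERATED"
```

Production-grade schemes such as SynthID instead spread the signal across many pixels in ways designed to survive edits, which is precisely why they are harder both to remove and to build.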
Nonetheless, watermarking is not an infallible process. Most watermarking methods are susceptible to tampering; adversaries with expertise can, for instance, use cropping, editing or AI software to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and authenticating watermarks for billions of units of online content requires huge computational effort and routine policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which maintain authenticity without sacrificing privacy. These new techniques show potential for overcoming technical deficiencies and making watermarking more secure.
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or understand its meaning. This mirrors the broader challenge of dealing with misinformation: it is not sufficient just to label fake content; users also need to be taught how to think critically about the information they are consuming. Public education campaigns, digital media literacy, and embedding watermarking labels within user-friendly UI elements are necessary to ensure this technology is actually effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might necessitate the embedding of metadata that will disclose the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists utilising AI tools for creative or informative reasons. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to achieve a balance by employing privacy-protection watermarking strategies that verify the origin of the content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, misinformation spread by AI on vaccines, treatments and public health interventions caused widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish between authentic sources and manipulated media and protected public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework from Adobe, Microsoft and others attaches tamper-evident metadata to images and videos, enabling traceability of content origin. SynthID from Google is already implemented in its Imagen text-to-image model and invisibly watermarks AI-generated images in a way designed to resist tampering. The Partnership on AI (PAI) is also taking a leadership role by building out ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media content are an essential step toward creating a safer and more credible digital world. As artificial media becomes increasingly indistinguishable from authentic content, the demand for transparency, origin, and responsibility increases. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, compliant and privacy-friendly. India is especially at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. Empowering themselves with global lessons, applying best-in-class watermarking platforms and promoting public awareness can enable the nation to acquire a level of resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency
Introduction
As the 2024 Diwali festive season approaches, netizens eagerly embrace the spirit of celebration with online shopping, gifting, and searching for the best festive deals on online platforms. Historical web data from India shows that netizens' online activity spikes at this time as people shop online to upgrade their homes, buy unique presents for loved ones and look for services and products to make their celebrations more joyful.
However, with the increase in online transactions and digital interactions, cybercriminals take advantage of the festive rush by enticing users with fake schemes, fake coupons offering freebies, fake offers of discounted jewellery, counterfeit product sales, festival lotteries, fake lucky draws and charity appeals, malicious websites and more. Cybercrimes, especially phishing attempts, also spike in proportion to user activity and shopping trends at this time.
Hence, it becomes important for all netizens to stay alert, making sure their personal information and financial data is protected and ensure that they exercise due care and caution before clicking on any suspicious links or offers. Additionally, brands and platforms also must make strong cybersecurity a top priority to safeguard their customers and build trust.
Diwali Season and Phishing Attempts
Last year's report from CloudSEK's research team noted an uptick in cyber threats during the Diwali period, when cybercriminals leveraged the festive mood to launch phishing, betting and crypto scams. The report revealed that phishing attempts target the e-commerce industry and seek to damage the image of reputable brands. An astounding 828 distinct domains devoted to phishing activities were found in the Facebook Ads Library by CloudSEK's investigators. The report also highlighted the use of typosquatting techniques to create phony-but-plausible domains that trick users into believing they are legitimate websites by exploiting common typing errors or misspellings of popular domain names. As fraudsters increasingly misuse AI and deepfake technologies to their advantage, we expect even more of these dangers to surface over this year's festive season.
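As an illustration of the typosquatting tactic described above, the snippet below flags domains that sit within a small edit distance of well-known brand domains. The brand list and the distance threshold are assumptions chosen for the example, not a real blocklist or any tool used in the CloudSEK report.

```python
# Illustrative typosquatting check using Levenshtein edit distance.
# The brand list and threshold are assumptions for this example only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical list of genuine domains to protect.
LEGITIMATE = ["flipkart.com", "amazon.in", "myntra.com"]

def looks_typosquatted(domain: str, max_distance: int = 2) -> bool:
    """Flag domains close to, but not identical to, a known brand."""
    for real in LEGITIMATE:
        d = edit_distance(domain.lower(), real)
        if 0 < d <= max_distance:
            return True
    return False

print(looks_typosquatted("flipcart.com"))  # True  (one-letter swap)
print(looks_typosquatted("flipkart.com"))  # False (the genuine domain)
print(looks_typosquatted("example.org"))   # False (unrelated domain)
```

Real brand-protection tooling adds homoglyph checks, keyboard-adjacency models and domain-registration monitoring, but edit distance alone already catches many of the misspellings this tactic relies on.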
CyberPeace Advisory
It is important that netizens exercise caution, especially during the festive period and follow cyber safety practices to avoid cybercrimes and phishing attempts. Some of the cyber hygiene best practices suggested by CyberPeace are as follows:
- Netizens must verify the sender’s email address and domain against the official site of the brand/entity the sender claims to be affiliated with.
- Netizens must avoid clicking links received through email, messages or shared on social media and consider visiting the official website directly.
- Beware of urgent, time-sensitive offers pressuring immediate action.
- Spot phishing signs like spelling errors and suspicious URLs to avoid typosquatting tactics used by cybercriminals.
- Netizens must enable two-factor authentication (2FA) for an additional layer of security.
- Install authentic antivirus and malware-detection software on your devices.
- Be wary of unsolicited festive deals, gifts and offers.
- Stay informed on common tactics used by cybercriminals to launch phishing attacks and recognise the red flags of any phishing attempts.
- To report cybercrimes, file a complaint at cybercrime.gov.in or call the helpline number 1930. You can also seek assistance from the CyberPeace helpline at +91 9570000066.
References
- https://www.outlookmoney.com/plan/financial-plan/this-diwali-beware-of-these-financial-scams
- https://www.businesstoday.in/technology/news/story/diwali-and-pooja-domains-being-exploited-by-online-scams-see-tips-to-help-you-stay-safe-405323-2023-11-10
- https://www.abplive.com/states/bihar/bihar-crime-news-15-cyber-fraud-arrested-in-nawada-before-diwali-2024-ann-2805088
- https://economictimes.indiatimes.com/tech/technology/phishing-you-a-happy-diwali-ai-advancements-pave-way-for-cybercriminals/articleshow/113966675.cms?from=mdr