# FactCheck: Allu Arjun visits Shiva temple after success of Pushpa 2? No, the image is from 2017
Executive Summary:
Recently, a post went viral on social media claiming that actor Allu Arjun visited a Shiva temple to pray in celebration of the success of his film Pushpa 2. The post features an image of him visiting the temple. However, our investigation determined that the photo is from 2017 and is unrelated to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to express his thanks for the success of Pushpa 2, featuring a photograph that allegedly captures this moment.

Fact Check:
The image circulating on social media with the claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is misleading.
After conducting a reverse image search, we confirmed that the photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was announced. The context has been altered to falsely connect it to the film's success. Moreover, there are no credible reports or other evidence supporting the claim that Allu Arjun recently visited a temple for this reason, making the assertion entirely baseless.

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image circulating is actually from an earlier time. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: Allu Arjun visited a Shiva temple after Pushpa 2's success.
- Claimed On: Facebook
- Fact Check: False and Misleading

Introduction
Citizens are increasingly using technology to their advantage, and the resulting upskilling is driving innovation in India. But as we go deeper into cyberspace, we must maintain our cybersecurity efficiently and effectively. When bad actors turn technology to their advantage, victims often suffer data loss or financial loss. In this blog, we shine a light on two new cyber threats causing havoc among innocent users: the "Daam" malware and a new malicious app.
Daam Botnet
Since 2021, the Daam Android botnet has been used to gain unauthorised access to targeted devices, which cybercriminals then use to carry out a range of destructive actions. Using the botnet's APK binding service, threat actors can combine malicious code with a legitimate application. Its capabilities include keylogging, ransomware, recording VoIP calls and incoming calls, runtime code execution, browser history collection, PII theft, opening phishing URLs, capturing photos, stealing clipboard data, and toggling WiFi and mobile data. The botnet tracks user activity through the Accessibility Service and stores recorded keystrokes, together with the name of the app package, in a database. It also contains a ransomware module that encrypts and decrypts files on the infected device using AES.
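The ransomware behaviour described above is, at its core, ordinary symmetric file encryption. Below is a minimal, hedged Python sketch of AES-CBC file encryption and decryption using the `cryptography` library, purely to illustrate the mechanism; it is not Daam's actual code, and the key handling is deliberately simplified.

```python
# Generic illustration of AES-CBC file encryption/decryption, the
# symmetric technique attributed to Daam's ransomware module.
# NOT Daam's code; key management is deliberately simplified.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a file in place with AES-256-CBC (key must be 32 bytes)."""
    iv = os.urandom(16)                              # fresh random IV per file
    with open(path, "rb") as f:
        plaintext = f.read()
    padder = padding.PKCS7(128).padder()             # pad to AES block size
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    with open(path, "wb") as f:
        f.write(iv + ciphertext)                     # store IV with the data

def decrypt_file(path: str, key: bytes) -> bytes:
    """Reverse the above: read the IV, decrypt, strip the padding."""
    with open(path, "rb") as f:
        blob = f.read()
    iv, ciphertext = blob[:16], blob[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```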
Additionally, the botnet uses the Accessibility Service to monitor the VoIP calling features of social media apps such as WhatsApp, Skype, and Telegram; when a user engages with these features, the malware begins recording audio.
The Malware
CERT-In, the national nodal agency that responds to computer security incidents, reports that Daam is bundled with various Android APK files, distributed through third-party websites, to gain access to a phone. Files on the device are then encrypted using the AES encryption technique.
It is reported that the malware can steal call records and contacts, gain access to the camera, change device passwords, take screenshots, steal SMS messages, download or upload files, and perform a variety of other actions.

Safeguards and Guidelines by CERT-In
CERT-In has released guidelines, issued in the public interest, for combating this malware. Its recommendations are as follows:
- Download apps only from official app stores to limit the risk of potentially harmful apps.
- Before downloading an app, always read its details and user reviews, and grant only the permissions relevant to the app's purpose.
- Install Android updates as they become available, solely from the device vendor.
- Avoid visiting untrustworthy websites or clicking on untrustworthy links.
- Install anti-virus and anti-spyware software and keep it up to date.
- Be cautious of mobile numbers that do not look like genuine, regular mobile numbers.
- Research a link sent in a message sufficiently before clicking on it.
- Click only on URLs that clearly display the website domain; avoid shortened URLs, particularly those using bit.ly and tinyurl (see the sketch after this list for one way to expand them safely).
- Use secure browsing tools and the filtering features of antivirus, firewall, and filtering services.
- Before providing sensitive information, look for authentic encryption certificates by checking for the green lock icon in your browser's URL bar.
- Report any 'strange' activity in your bank account to the relevant bank immediately.
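Expanding on the shortened-URL advice above, the following Python sketch shows one way to reveal a short link's real destination without opening it in a browser, by following redirects with a lightweight HEAD request. This uses the third-party `requests` package and is an illustrative approach, not an official CERT-In tool.

```python
# Expand a shortened URL (e.g. bit.ly, tinyurl) to reveal its real
# destination before deciding whether to visit it. Illustrative only.
import requests

def expand_url(short_url: str, timeout: float = 5.0) -> str:
    """Follow redirects with a HEAD request and return the final URL."""
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return response.url

if __name__ == "__main__":
    # Hypothetical example link; substitute the URL you want to inspect.
    print(expand_url("https://bit.ly/example"))
```

Note that some shorteners reject HEAD requests; a streamed GET (`requests.get(url, stream=True, allow_redirects=True)`) is a common fallback.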
New Malicious App
From remote parts of Jharkhand, a new malicious application has been circulating among people under the pretext of a bank account closure. Bad actors have long used messaging platforms like WhatsApp and Telegram to circulate malicious links among unaware and undereducated people to dupe them of their hard-earned money.
They send an ordinary-looking message on WhatsApp or Telegram claiming that the user has a bank account with ICICI Bank and that, due to an irregularity with the credentials, the account is being deactivated. They then ask the user to reactivate the account by updating their PAN card details through an application. This app is, in fact, malicious: it harvests the user's personal credentials and forwards them, along with incoming text messages, to the bad actors, allowing them to bypass the bank's two-factor authentication and drain money from the account. The Jharkhand Police cyber cells have registered numerous FIRs for this type of cybercrime and are conducting full-scale investigations to apprehend the criminals.
Conclusion
Malware and phishing attacks have gained momentum in recent years and have become major contributors to the country's tally of cybercrimes. The Daam malware is just one example brought to light by CERT-In's timely action; many more are deployed by bad actors, and as netizens we need to follow best practices to keep such criminals at bay. Phishing crimes often succeed by exploiting vulnerabilities and social engineering, so raising awareness is the need of the hour to safeguard the population at large.

Introduction
Robotic dogs, or robodogs, are created to resemble dogs in behaviour and appearance, typically featuring canine traits such as barking and tail-wagging. Examples include RHex (a hexapod robot) and LittleDog and BigDog (created by Boston Dynamics). Robodogs can even respond to commands and look at a person with large LED-lit puppy eyes.
A four-legged robotic solution recently completed its first successful radiation protection test in the largest experimental area at the European Organization for Nuclear Research (CERN). Each robot created at CERN is carefully crafted to meet specific challenges, and the robots complement one another. Unlike the earlier wheeled, tracked, or monorail robots, the robodog will be able to enter unexplored areas of the caverns, expanding the range of environments that CERN robots can inspect. Integrating the robodog with the existing monorail robots in the Large Hadron Collider (LHC) tunnel will widen the range of places available for monitoring and supervision, improving the safety and efficiency of CERN's operations. Lenovo, too, has designed a six-legged robot called the "Daystar Bot GS", to be launched this year, which promises "comprehensive data collection".
Use of Robodogs in diverse domains
Thanks to advances in Artificial Intelligence (AI), robodogs can be a boon for those with special requirements. The advantage of AI is the dependability of its features, which can be programmed to respond to specific commands tailored to the user.
In the context of health and well-being, they can be useful when programmed to care for a person with distinct or special requirements, such as an elderly or visually impaired person. For this reason, they are sometimes considered more practical than real dogs. Stanford researchers have recently designed robodogs that can perform several physical activities, including dancing, and that may one day help comfort paediatric patients during hospital stays. One such robodog, "Pupper", is a revamped version of an earlier Stanford robotic dog called "Doggo", an open-source bot with 3D-printed parts that can be built on a fairly small budget; both were designed to interact with humans. Robots as companions are also a comfortable step for the Japanese. The oldest and most successful social robot in Japan, "Paro", resembles an ordinary plush toy and can help treat depression, stress, anxiety, and mood swings. Since 1998, Paro robots have been exported and put into service globally, reducing stress among children in ICUs, treating American veterans suffering from Post-Traumatic Stress Disorder (PTSD), and assisting dementia patients.
Post-pandemic, Japanese people experiencing loneliness and isolation have been turning to social robots for healing and comfort. At a cafe in Japan, the AI-driven robot dog "Aibo" has pawed its way into the minds and hearts of its proud owners. Robots are even replacing the conventional class guinea pig or bunny at Moriyama Kindergarten in the central Japanese city of Nagoya, where teachers say the bots reduce stress and teach kids to be more humane.
In the security and defence domain, the unique abilities of robodogs allow them to be used in hazardous and challenging circumstances. They can navigate rugged terrain with confidence to reach stranded individuals after natural catastrophes, and they can assist with search and rescue operations, surveillance, and other situations that would be dangerous for humans. Researchers are still fine-tuning the underlying algorithms, developing the technology on affordable off-the-shelf robots that are already functional. Robodogs are also used for surveillance in hostage crises, for defusing bombs, and, controversially, for applying lethal force to stop attackers. Similarly, a breakthrough in AI being tested by the Australian military reportedly allows soldiers to control robodogs solely with their minds. Cities like St. Petersburg, Florida, also seem set to keep police robodogs. The U.S. Department of Homeland Security is seeking plans to deploy robot dogs at the border, and the New York City Police Department (NYPD) intends to once again deploy four-legged robodogs to deal with high-risk circumstances like hostage negotiations; it has previously employed similar robodogs for high-octane duties, examining unsafe environments to which human officers should not be exposed. The U.S. Marine Corps is likewise experimenting with a new breed of robotic canine that could be helpful on the battlefield, enhancing the safety and mobility of soldiers and aiding in other tasks. The Marines' Unitree Go1 robot dog (nicknamed GOAT, for Grounded Open-Air Transport) is a four-legged machine with a built-in AI system that can be equipped to carry an infantry anti-armour rocket launcher on its back. It is designed to help the Marines move heavy loads, analyse terrain, and deliver fire support in remote and dangerous places.
On the other hand, robodogs pose ethical and moral predicaments: who is accountable for their actions, and how can their adherence to the laws of warfare be ensured? They also raise security and privacy concerns about how to safeguard the data robotic dogs collect and how to prevent hacking or sabotage.
Conclusion
Teaching robots to traverse the world has conventionally been an extravagant challenge. Though manufacturing is increasing worldwide, a robodog is still simply a machine and can never replace the feeling of owning a real dog. Designers state that intelligent social robots will never replace humans, though robots offer the promise of social harmony without social contact. They may also be unable to manage complicated or unforeseen circumstances that require instinct or human judgement. Nevertheless, owning a robodog is expected to become even more common and cost-effective in the coming decades as new algorithms are tested and implemented.
References:
- https://home.cern/news/news/engineering/introducing-cerns-robodog
- https://news.stanford.edu/2023/10/04/ai-approach-yields-athletically-intelligent-robotic-dog/
- https://nypost.com/2023/02/17/combat-ai-robodogs-follow-telepathic-commands-from-soldiers/
- https://www.popsci.com/technology/parkour-algorithm-robodog/
- https://ggba.swiss/en/cern-unveils-its-innovative-robodog-for-radiation-detection/
- https://www.themarshallproject.org/2022/12/10/san-francisco-killer-robots-policing-debate
- https://www.cbsnews.com/news/robo-dogs-therapy-bots-artificial-intelligence/
- https://news.stanford.edu/report/2023/08/01/robo-dogs-unleash-fun-joy-stanford-hospital/
- https://www.pcmag.com/news/lenovo-creates-six-legged-daystar-gs-robot
- https://www.foxnews.com/tech/new-breed-military-ai-robo-dogs-could-marines-secret-weapon
- https://www.wptv.com/news/national/new-york-police-will-use-four-legged-robodogs-again
- https://www.dailystar.co.uk/news/us-news/creepy-robodogs-controlled-soldiers-minds-29638615
- https://www.newarab.com/news/robodogs-part-israels-army-robots-gaza-war
- https://us.aibo.com/

Introduction
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what to trust online. Synthetic media can have serious consequences, from the viral spread of election disinformation and medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh after investing money based on a deepfake video of Infosys founder Narayana Murthy. In another case, that of "Babydoll Archi", a woman from Assam had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image and video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in an image, checking for inconsistent patterns or lighting, observing a speaker's lip movements in a video, or counting the fingers on a person's hand. Unfortunately, as the technology improves, such folk advice may no longer help users identify synthetic and manipulated media.
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in this trust-building space is the Coalition for Content Provenance and Authenticity (C2PA). Formed in 2021 under the leadership of Adobe and Microsoft, C2PA is a collaboration between major players in AI, social media, journalism, and photography, among others. It set out to create a standard that lets publishers of digital media prove its authenticity and track changes as they occur.
When photos and videos are captured, they generally store metadata such as the date, time, and location of capture and the device used. C2PA developed a standard for sharing and checking the validity of this metadata, and for adding further layers of metadata whenever a new user makes edits. This creates a digital record of any and all changes, and the original media is bundled with this metadata, making it easy to verify the source of the image and check whether the edits change the meaning or impact of the media. The standard allows different validation tools, content publishers, and content creation tools to interoperate in maintaining and displaying proof of authenticity.
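To make the chaining idea concrete, here is a conceptual Python sketch of provenance metadata that grows with each edit, where every entry is bound to a hash of the current media bytes and of the history before it. All names here are invented for illustration; the real C2PA manifest format is more involved and, crucially, cryptographically signs each claim so the record cannot be forged by simply rewriting the metadata.

```python
# Conceptual sketch of C2PA-style provenance chaining.
# NOT the real C2PA manifest format; names are illustrative.
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def new_manifest(media: bytes, creator: str) -> list:
    """Start a provenance record when the media is first captured."""
    return [{
        "action": "created",
        "actor": creator,
        "timestamp": time.time(),
        "content_hash": sha256_hex(media),
    }]

def record_edit(manifest: list, edited: bytes, editor: str, action: str) -> list:
    """Append an entry bound to both the new bytes and the prior history."""
    prior = json.dumps(manifest, sort_keys=True).encode()
    return manifest + [{
        "action": action,
        "actor": editor,
        "timestamp": time.time(),
        "content_hash": sha256_hex(edited),
        "prev_hash": sha256_hex(prior),   # ties this entry to all earlier ones
    }]

def verify(manifest: list, media: bytes) -> bool:
    """Check that the latest entry matches the media we actually received."""
    return manifest[-1]["content_hash"] == sha256_hex(media)
```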

The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
Could C2PA be Useful in an Indian Context?
The World Economic Forum's Global Risks Report 2024 identifies India as a significant hotspot for misinformation. MeitY's recent report on AI regulation indicates an interest in tools for watermarking AI-generated synthetic content, to ease the detection and tracking of harmful outcomes. C2PA could be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not the medium.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people increasingly use WhatsApp and Instagram as sources of information, and neither Meta-owned platform has yet implemented the standard.
India also has low digital literacy and low resistance to misinformation. Part of the challenge would be teaching people how to read this nutrition label, empowering them to make better decisions online. As such, C2PA is just one part of an online trust-building strategy; education in digital literacy and policy encouraging organisational adoption of the standard must also be part of it.
The standard is also not foolproof. Current iterations may still struggle with screenshots of digital media and other non-technical manipulation, and linking media to their creators may put journalists and whistleblowers at risk. Actual use in context will show how to improve future versions of digital provenance tools, though such improvements are no guarantee of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to editing together videos of a politician so it sounds like they are singing a song, we have grown accustomed to the media we consume being altered in some way. C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating to be more transparent about the media we consume.
References
- https://c2pa.org/
- https://contentauthenticity.org/
- https://indianexpress.com/article/technology/tech-news-technology/kate-middleton-9-signs-edited-photo-9211799/
- https://photography.tutsplus.com/articles/fakes-frauds-and-forgeries-how-to-detect-image-manipulation--cms-22230
- https://www.media.mit.edu/projects/detect-fakes/overview/
- https://www.youtube.com/watch?v=qO0WvudbO04&pp=0gcJCbAJAYcqIYzv
- https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- https://indianexpress.com/article/technology/tech-news-technology/ai-law-may-not-prescribe-penal-consequences-for-violations-9457780/
- https://thesecretariat.in/article/meity-s-ai-regulation-report-ambitious-but-no-concrete-solutions
- https://www.ndtv.com/lifestyle/assam-what-babydoll-archi-viral-fame-says-about-india-porn-problem-8878689
- https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf