Regulations on CDR
Introduction:
CDR stands for Call Detail Records. Telecom companies hold the call detail data of their users; because it amounts to a very large volume of data, they retain it for a period of six months. CDRs play a significant role in investigations and court cases, and can serve as pivotal evidence to prove or disprove certain facts and circumstances. Interception of call detail records is permitted only on reasonable grounds and only by the authorities authorised under the law.
Admissibility of CDRs in Courts:
Call Detail Records (CDRs) can be effective pieces of evidence that assist the court in ascertaining the facts of a particular case and inquiring into the commission of an offence. Judicial pronouncements have made it clear that CDRs can be used as supporting or secondary evidence in court; however, they cannot be the sole basis of a conviction. Section 92 of the Criminal Procedure Code, 1973 provides the procedure and empowers certain authorities to approach the court or the competent authority to seek CDRs.
Legal provisions to obtain CDR:
CDRs can be obtained under the statutory provisions contained in Section 92 of the Criminal Procedure Code, 1973, or under Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. Guidelines for seeking Call Detail Records (CDRs) were also issued by the Ministry of Home Affairs in 2016.
How long are CDRs stored with telecom companies (Data Retention)?
Call data is retained by telecom companies for a period of six months. Because the data adds up to a very large volume, running into petabytes per year, operators keep the most recent six months of call detail data online and archive the rest to tape.
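To see why this volume becomes a storage problem so quickly, here is a rough, illustrative back-of-envelope estimate in Python. The subscriber count, records per subscriber, and record size below are assumptions chosen for illustration, not figures from the sources cited in this post.

```python
# Rough, illustrative estimate of annual CDR storage for a large telecom market.
# All figures below are assumptions for illustration, not sourced statistics.

subscribers = 1_100_000_000      # assumed active mobile subscribers
records_per_sub_per_day = 10     # assumed call + SMS records per subscriber per day
bytes_per_record = 250           # assumed size of one CDR row (numbers, cell ID, timestamps, duration)

bytes_per_year = subscribers * records_per_sub_per_day * bytes_per_record * 365
petabytes_per_year = bytes_per_year / 1e15

print(f"~{petabytes_per_year:.1f} PB of raw CDRs per year")  # ~1.0 PB under these assumptions
```

Even under these conservative assumptions the raw records approach a petabyte a year, which is why older data is pushed to tape rather than kept online indefinitely.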
New Delhi ₹25-crore jewellery heist
Recently, jewellery worth about ₹25 crore was stolen from a shop in Delhi in a theft planned and executed by a man from Chhattisgarh, who returned to Chhattisgarh after committing the crime. The Delhi Police began their search and investigation by analysing the mobile numbers that were active at the crime scene. From around 5,000 numbers active in the area, advanced data-analysis software helped investigators narrow the list down to suspect numbers, including one registered outside Delhi. Surveillance on that number revealed that the suspect had moved from Delhi to Madhya Pradesh and then on to Bhilai in Chhattisgarh, where the police arrested him. The case highlights how technology and call data can assist law enforcement agencies in tracing the real culprits.
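At its core, the investigation described above amounts to filtering a "tower dump" (the set of numbers active near the crime scene) down to a handful of suspects. The sketch below is a hypothetical, simplified illustration of that kind of filtering in Python; the field names, cell IDs, and criteria (registration circle, time window) are assumptions for illustration, not details of the software the Delhi Police actually used.

```python
from datetime import datetime

# Hypothetical, simplified tower-dump filtering. Field names and criteria are
# illustrative assumptions, not the actual tooling used by investigators.
records = [
    {"msisdn": "98XXXXXX01", "cell_id": "DEL-4021", "timestamp": datetime(2023, 9, 25, 2, 10), "home_circle": "Delhi"},
    {"msisdn": "98XXXXXX02", "cell_id": "DEL-4021", "timestamp": datetime(2023, 9, 25, 2, 15), "home_circle": "Chhattisgarh"},
    # ... thousands more rows from the tower dump
]

window_start = datetime(2023, 9, 25, 1, 0)
window_end = datetime(2023, 9, 25, 4, 0)
crime_scene_cells = {"DEL-4021", "DEL-4022"}   # assumed cell IDs covering the shop

suspects = [
    r for r in records
    if r["cell_id"] in crime_scene_cells
    and window_start <= r["timestamp"] <= window_end
    and r["home_circle"] != "Delhi"            # numbers registered outside Delhi stand out
]

for r in suspects:
    print(r["msisdn"], r["home_circle"])
```

Real investigative tools add map overlays, repeat-presence scoring and link analysis, but the narrowing-down logic follows the same shape: intersect location, time window, and an unusual attribute such as an out-of-state registration.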
Conclusion:
CDRs are call detail records retained by telecom companies for a period of six months, and they can be obtained only through lawful procedure and by competent authorities. CDRs can assist courts and law enforcement agencies in ascertaining the facts of a case and in proving or disproving particular claims. It is important to reiterate that unauthorised access to CDRs is not permitted; a directive from the court or the competent authority is required before telecom companies can be asked to furnish them.
References:
- https://indianlegalsystem.org/cdr-the-wonder-word/#:~:text=CDR%20is%20admissible%20as%20secondary,the%20Indian%20Evidence%20Act%2C%201872.
- https://timesofindia.indiatimes.com/city/delhi/needle-in-a-haystack-how-cops-scanned-5k-mobile-numbers-to-crack-rs-25cr-heist/articleshow/104055687.cms?from=mdr
- https://www.ndtv.com/delhi-news/just-one-man-planned-executed-rs-25-crore-delhi-heist-another-thief-did-him-in-4436494

Introduction
Rajeev Chandrasekhar, the Union Minister of State for Information Technology (IT), announced that the Global Partnership on Artificial Intelligence (GPAI) Summit, which brings together 29 member governments including the European Union, adopted the New Delhi Declaration on 13 December 2023. The declaration commits members to jointly developing AI applications for healthcare and agriculture, and to taking the needs of the Global South into account when developing AI.
In addition, the signatory countries committed to leveraging the GPAI infrastructure to establish a worldwide framework for AI safety and trust, and to make the benefits and approaches of AI accessible to all. India also submitted a proposal to host the GPAI Global Governance Summit in order to finalise the recommended framework within six months.
“The New Delhi Declaration, which aims to place GPAI at the forefront of defining the future of AI in terms of both development and building cooperative AI across the partner states, has been unanimously endorsed by 29 GPAI member countries. Nations have come to an agreement to develop AI applications in healthcare, agriculture, and numerous other fields that affect all of our nations and citizens,” Chandrasekhar stated.
The declaration highlights GPAI's critical role in tackling contemporary AI challenges, such as generative AI, through applied AI projects meant to maximise benefits and minimise related risks while addressing community problems and global challenges.
GPAI
The Global Partnership on Artificial Intelligence (GPAI) is an organisation of 29 countries from the Americas (North and South), Europe and Asia. It includes major players such as the US, France, Japan and India, but excludes China. The previous meeting took place in Japan, and in 2024 India will preside over GPAI.
The forum was established in 2020 to promote and steer the responsible implementation of artificial intelligence grounded in human rights, diversity, gender equality, innovation, economic growth, the environment, and social impact. Its goal is to bring together policymakers and experts to make tangible contributions to the 2030 Agenda and the UN Sustainable Development Goals (SDGs).
Given the rapid and significant advances in artificial intelligence over the previous year, the meeting in New Delhi attracted particular attention. Those advances have sparked worries about AI's misuse as well as enthusiasm about its potential benefits.
The Summit
The G20 summit, which India hosted in September 2023, set the stage for the discussions at the GPAI summit. There, participants in that premier global economic forum agreed on how to safely use AI "for Good and for All."
In order to safeguard people's freedoms and security, member governments pledged to address AI-related issues "in a responsible, inclusive, and human-centric manner."
The key strategy devised is to distribute AI's advantages fairly while reducing its hazards. Promoting international collaboration and dialogue on the global governance of AI is the first step toward accomplishing this goal.
A major milestone in that approach was the GPAI summit.
The conversation on AI was opened by India's Prime Minister Narendra Modi, undoubtedly one of the most tech-aware and tech-conscious international leaders. He noted that for any system to be sustainable, it must be transformative, transparent, and trusted. "There is no doubt that AI is transformative, but it is up to us to make it more and more transparent," he said, adding that trust will grow when the associated social, ethical, and financial concerns are appropriately addressed.
After extensive discussions, the summit attendees agreed on a strategy to establish global collaboration on a number of AI-related issues. According to Union Minister Rajeev Chandrasekhar, the declaration pledges to place GPAI at the leading edge of shaping AI in terms of innovation and cooperation while expanding the possibilities for AI in healthcare, agriculture, and other areas of interest.
There was an open discussion of a number of issues, including disinformation, joblessness and bias, protection of sensitive information, and violations of human rights. The participants reaffirmed their dedication to fostering dependable, safe, and secure AI within their respective domains.
Concerns raised by AI
- The issue of regulation comes first, and differing approaches are now in use. The UK government takes a "less is more" approach to regulation in order to best promote innovation. Conversely, the European Union (EU) is taking a strong stance, planning a new Artificial Intelligence Act that might categorise AI "in accordance with use-case situations based essentially on the degree of interference and vulnerability".
- Second, analysts say that India has the potential to lead the world in discussions about AI. For example, India has an advantage when it comes to AI discussions because of its personnel, educational system, technological stack, and populace, according to Markham Erickson of Google's Centers for Excellence. However, he voiced the hope that Indian regulations will be “interoperable” with those of other countries in order to maximize the benefits for small and medium-sized enterprises in the nation.
- Third, there is a general fear about how AI will affect jobs, just as there was in the early years of the Internet's development. Most people appear to agree that while many jobs won't be impacted, certain jobs might be lost as artificial intelligence develops and gets smarter. According to Erickson, the solution to the new circumstances is to create "a more AI-skilled workforce."
- Finally, a major concern relates to deepfakes defined as 'digital media, video, audio and images, edited and manipulated, using Artificial Intelligence (AI).'
Need for AI Strategy in Commercial Businesses
Firstly, forward-looking corporate executives such as Shailendra Singh, managing director of Peak XV Partners, feel that every organisation must now have "an AI strategy".
Second, it is now impossible to separate the influence of digital technology and artificial intelligence from the study of international relations (IR), foreign policy, and diplomacy. Academics have been contemplating and writing about "the geopolitics of AI."
Combat Strategies
"We will talk about how to combine OECD capabilities to maximize our capacity to develop the finest approaches to the application and management of AI for the benefit of our people. The French Minister of Digital Transition and Telecommunications", Jean-Noël Barrot, informed reporters.
Vice-Minister of International Affairs for Japan's Ministry of Internal Affairs and Communications Hiroshi Yoshida stated, "We particularly think GPAI should be more inclusive so that we encourage more developing countries to join." Mr Chandrasekhar stated, "Inclusion of lower and middle-income countries is absolutely core to the GPAI mission," and added that Senegal has become a member of the steering group.
A paragraph of the declaration also reflects India's push to bring agriculture onto the AI agenda; it states, "We embrace the use of AI innovation in supporting sustainable agriculture as a new thematic priority for GPAI."
Conclusion
The New Delhi Declaration, adopted at the GPAI Summit, reflects the cooperative determination of 29 member nations to use AI for the benefit of all people. GPAI, which will be led by India in 2024, intends to shape AI development with an emphasis on healthcare, agriculture, and the resolution of ethical issues. Prime Minister Narendra Modi stressed the need to use AI responsibly and to build transparency and trust. Regulatory concerns, India's potential for leadership, effects on employment, and the challenge of deepfakes were all noted. The summit also emphasised the importance of an AI strategy for enterprises and discussed the way forward, with particular attention to GPAI's goal of including developing nations. Taken as a whole, the summit positions GPAI as an essential forum for navigating the rapidly changing AI field.
References
- https://www.thehindu.com/news/national/ai-summit-adopts-new-delhi-declaration-on-inclusiveness-collaboration/article67635398.ece
- https://www.livemint.com/news/india/gpai-meet-adopts-new-delhi-ai-declaration-11702487342900.html
- https://startup.outlookindia.com/sector/policy/global-partnership-on-ai-member-nations-unanimously-adopt-new-delhi-declaration-news-10065
- https://gpai.ai/
Executive Summary:
In late 2024, an Indian healthcare provider experienced a severe cybersecurity incident that demonstrated how damaging AI-powered ransomware can be. This blog discusses the background to the attack, how it unfolded, its medical and financial impact, how the organisation responded, and the final outcome, highlighting the dangers facing a healthcare industry that lacks adequate cybersecurity measures. The incident disrupted normal business operations and illustrated the economic and reputational losses that cyber threats can cause. The technical findings of the study provide further evidence and analysis of advanced AI-driven malware and of best practices for defending against it.
1. Introduction
The integration of artificial intelligence (AI) in cybersecurity has revolutionised both defence mechanisms and the strategies employed by cybercriminals. AI-powered attacks, particularly ransomware, have become increasingly sophisticated, posing significant threats to various sectors, including healthcare. This report delves into a case study of an AI-powered ransomware attack on a prominent Indian healthcare provider in 2024, analysing the attack's execution, impact, and the subsequent response, along with key technical findings.
2. Background
In late 2024, a leading healthcare organisation in India, itself involved in research and development of AI techniques, fell prey to a ransomware attack that was AI-driven to maximise its impact. With so many businesses relying on data, and healthcare in particular requiring real-time operations, the sector has become a favourite target of cybercriminals. AI allowed the attackers to mount a far more precise and damaging attack, severely affecting the provider's operations and jeopardising the safety of patient information.
3. Attack Execution
The attack began with a phishing email targeting a hospital administrator. The email carried an infected attachment which, when opened, injected the AI-enabled ransomware into the hospital's network. Unlike traditional ransomware, which spreads copies indiscriminately, the AI-incorporated ransomware first studied the hospital's IT network. It then focused its encryption on the most important systems, including the electronic health records and the billing department's systems.
The malware's AI component allowed it to learn and adjust its propagation through the network and to prioritise the encryption of the most valuable data. This precision not only increased the leverage behind the ransom demand but also reduced the risk of early discovery.
4. Impact
The consequences of the attack were immediate and severe:
- Operational Disruption: The encryption of critical systems brought hospital operations to a halt. Surgeries, routine medical procedures and patient admissions were delayed, and in some cases patients had to be referred to other hospitals.
- Data Security: Electronic patient records and associated billing data became inaccessible, putting patient confidentiality at risk. The prospect of permanent data loss was a serious concern for both the healthcare provider and its patients.
- Financial Loss: The attackers demanded 100 crore Indian rupees (approximately USD 12 million) for the decryption key. Although the hospital did not pay, it still suffered significant losses: operational losses from downtime, losses borne by affected patients, the cost of incident response, and reputational damage.
5. Response
As soon as the hospital's management was informed of the ransomware, its IT department joined forces with cybersecurity professionals and the local police. The team decided not to pay the ransom and instead to recover the systems from backups. Although this was the ethically and strategically correct decision, it was not without challenges: restoration was slow, and certain parts of the patient records were permanently lost.
To avoid similar attacks in the future, the healthcare provider put several organisational and technical measures in place, such as network isolation and stronger cybersecurity controls. Even so, the attack revealed serious gaps in the provider's IT security measures and protocols.
6. Outcome
The attack had far-reaching consequences:
- Financial Impact: The healthcare provider suffered substantial financial losses from the service disruption as well as from the cost of bolstering cybersecurity and compensating affected patients.
- Reputational Damage: The exposure of data risked a complete loss of confidence among patients and the public, damaging the provider's reputation. This affected patient care and ultimately had long-term effects on revenue and patient retention.
- Industry Awareness: The breach fed discussions across the country on how to improve cybersecurity provisions in the healthcare industry, and prompted other care providers to review and strengthen their cyber defences.
7. Technical Findings
The AI-powered ransomware attack on the healthcare provider revealed several technical vulnerabilities and provided insights into the sophisticated mechanisms employed by the attackers. These findings highlight the evolving threat landscape and the importance of advanced cybersecurity measures.
7.1 Phishing Vector and Initial Penetration
- Sophisticated Phishing Tactics: The phishing email was crafted with precision, utilising AI to mimic the communication style of trusted contacts within the organisation. The email bypassed standard email filters, indicating a high level of customization and adaptation, likely due to AI-driven analysis of previous successful phishing attempts.
- Exploitation of Human Error: The phishing email targeted an administrative user with access to critical systems, exploiting the lack of stringent access controls and user awareness. The successful penetration into the network highlighted the need for multi-factor authentication (MFA) and continuous training on identifying phishing attempts.
7.2 AI-Driven Malware Behavior
- Dynamic Network Mapping: Once inside the network, the AI-powered malware executed a sophisticated mapping of the hospital's IT infrastructure. Using machine learning algorithms, the malware identified the most critical systems—such as Electronic Health Records (EHR) and the billing system—prioritising them for encryption. This dynamic mapping capability allowed the malware to maximise damage while minimising its footprint, delaying detection.
- Adaptive Encryption Techniques: The malware employed adaptive encryption techniques, adjusting its encryption strategy based on the system's response. For instance, if it detected attempts to isolate the network or initiate backup protocols, it accelerated the encryption process or targeted backup systems directly, demonstrating an ability to anticipate and counteract defensive measures.
- Evasive Tactics: The ransomware utilised advanced evasion tactics, such as polymorphic code and anti-forensic features, to avoid detection by traditional antivirus software and security monitoring tools. The AI component allowed the malware to alter its code and behaviour in real time, making signature-based detection methods ineffective.
7.3 Vulnerability Exploitation
- Weaknesses in Network Segmentation: The hospital’s network was insufficiently segmented, allowing the ransomware to spread rapidly across various departments. The malware exploited this lack of segmentation to access critical systems that should have been isolated from each other, indicating the need for stronger network architecture and micro-segmentation.
- Inadequate Patch Management: The attackers exploited unpatched vulnerabilities in the hospital’s IT infrastructure, particularly within outdated software used for managing patient records and billing. The failure to apply timely patches allowed the ransomware to penetrate and escalate privileges within the network, underlining the importance of rigorous patch management policies.
7.4 Data Recovery and Backup Failures
- Inaccessible Backups: The malware specifically targeted backup servers, encrypting them alongside primary systems. This revealed weaknesses in the backup strategy, including the lack of offline or immutable backups that could have been used for recovery. The healthcare provider’s reliance on connected backups left them vulnerable to such targeted attacks.
- Slow Recovery Process: The restoration of systems from backups was hindered by the sheer volume of encrypted data and the complexity of the hospital’s IT environment. The investigation found that the backups were not regularly tested for integrity and completeness, resulting in partial data loss and extended downtime during recovery; a minimal integrity-check sketch follows this subsection.
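The sketch below illustrates the kind of routine backup integrity testing whose absence was flagged above: record cryptographic digests when a backup is written, then re-verify them on a schedule. The paths and manifest layout are illustrative assumptions, not the provider's actual backup tooling.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch of a backup integrity check: record SHA-256 digests when a
# backup is written, then verify them periodically. Paths and manifest layout
# are illustrative assumptions only.

def build_manifest(backup_dir: Path, manifest_path: Path) -> None:
    """Hash every file in the backup and store the digests."""
    manifest = {
        str(p.relative_to(backup_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in backup_dir.rglob("*") if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_backup(backup_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    mismatches = []
    for rel_path, expected in manifest.items():
        f = backup_dir / rel_path
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != expected:
            mismatches.append(rel_path)
    return mismatches

if __name__ == "__main__":
    # Example: verify last night's backup and alert on any corrupted or missing file.
    bad = verify_backup(Path("/mnt/backups/2024-11-30"),
                        Path("/mnt/backups/2024-11-30.manifest.json"))
    print("Backup OK" if not bad else f"Integrity failures: {bad}")
```

Running a check like this regularly, ideally against offline or immutable copies, would have surfaced encrypted or missing backup files long before they were needed for recovery.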
7.5 Incident Response and Containment
- Delayed Detection and Response: The initial response was delayed due to the sophisticated nature of the attack, with traditional security measures failing to identify the ransomware until significant damage had occurred. The AI-powered malware’s ability to adapt and camouflage its activities contributed to this delay, highlighting the need for AI-enhanced detection and response tools.
- Forensic Analysis Challenges: The anti-forensic capabilities of the malware, including log wiping and data obfuscation, complicated the post-incident forensic analysis. Investigators had to rely on advanced techniques, such as memory forensics and machine learning-based anomaly detection, to trace the malware’s activities and identify the attack vector.
8. Recommendations Based on Technical Findings
To prevent similar incidents, the following measures are recommended:
- AI-Powered Threat Detection: Implement AI-driven threat detection systems capable of identifying and responding to AI-powered attacks in real time. These systems should include behavioural analysis, anomaly detection, and machine learning models trained on diverse datasets (a minimal sketch follows this list).
- Enhanced Backup Strategies: Develop a more resilient backup strategy that includes offline, air-gapped, or immutable backups. Regularly test backup systems to ensure they can be restored quickly and effectively in the event of a ransomware attack.
- Strengthened Network Segmentation: Re-architect the network with robust segmentation and micro-segmentation to limit the spread of malware. Critical systems should be isolated, and access should be tightly controlled and monitored.
- Regular Vulnerability Assessments: Conduct frequent vulnerability assessments and patch management audits to ensure all systems are up to date. Implement automated patch management tools where possible to reduce the window of exposure to known vulnerabilities.
- Advanced Phishing Defences: Deploy AI-powered anti-phishing tools that can detect and block sophisticated phishing attempts. Train staff regularly on the latest phishing tactics, including how to recognize AI-generated phishing emails.
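As a concrete illustration of the first recommendation above, the sketch below trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on simple per-host behavioural features and flags windows that look like mass encryption activity. The feature set, baseline distribution, and threshold are assumptions for illustration, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of behavioural anomaly detection on per-host telemetry.
# Features (per host, per 5-minute window) are illustrative assumptions:
# [files touched, MB written, distinct internal hosts contacted, CPU %]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[40, 5, 3, 20], scale=[10, 2, 1, 5], size=(5000, 4))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)  # learn what "normal" activity looks like

# A window resembling ransomware behaviour: mass file access, heavy writes,
# lateral movement to many hosts, and sustained CPU load from encryption.
suspicious_window = np.array([[2500, 900, 60, 95]])

if model.predict(suspicious_window)[0] == -1:  # -1 means "anomaly"
    print("Anomalous host activity: isolate host and trigger incident response")
```

Behaviour-based detectors of this kind complement signature-based antivirus, which the technical findings above showed to be ineffective against polymorphic, AI-adapted malware.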
9. Conclusion
The AI-powered ransomware attack on the Indian healthcare provider in 2024 makes it clear that the threat of advanced cyber attacks on healthcare facilities has grown. The technical findings outline the steps used by the attackers, underlining the importance of continuous, active, and robust security. The incident is a stark reminder of the need not only to remain alert and invest strongly in cybersecurity, but also to put response measures in place so that such incidents cause limited harm. Cybercriminals are now using AI to increase the effectiveness of their attacks, and it is high time that all healthcare organisations ensure their crucial systems and data are well protected against them.

Introduction
Robotic or robo dogs are created to resemble dogs in conduct and appearance, usually incorporating canine features such as barking and tail wagging. Examples include RHex (a hexapod robot) and LittleDog and BigDog (created by Boston Dynamics). Robodogs, on the whole, can even respond to commands and look at a person with large LED-lit puppy eyes.
A four-legged robotic solution recently completed its first successful radiation protection test inside the largest experimental area at CERN, the European Organization for Nuclear Research. Each robot created at CERN is carefully crafted to meet specific challenges, and the robots complement one another. Unlike the previous wheeled, tracked or monorail robots, the robodogs will be capable of entering unexplored parts of the caverns, expanding the spectrum of environments in which CERN robots can operate. Combining the robodog with the existing monorail robots in the Large Hadron Collider (LHC) tunnel will also expand the range of places available for monitoring and supervision, improving the safety and efficiency of CERN's operations. Lenovo, too, has designed a six-legged robot called the "Daystar Bot GS", to be launched this year, which promises "comprehensive data collection."
Use of Robodogs in diverse domains
Thanks to advances in Artificial Intelligence (AI), robodogs can be a boon for those with special requirements. The advantage of AI lies in the dependability of its features, which can be programmed to respond to specific commands tailored to the user.
In the context of health and well-being, robodogs can be useful when programmed to take care of a person with distinct or special requirements, such as an elderly or visually impaired person; for this reason, they are sometimes considered more advantageous than real dogs. Stanford has recently designed robodogs that can perform several physical activities, including dancing, and may one day help put paediatric patients at ease during their hospital stays. Similarly, the robodog "Pupper" is a revamped version of another robotic dog designed at Stanford called "Doggo", an open-source bot with 3D-printed parts that can be built on a fairly small budget; both were also created to interact with humans. Furthermore, robots as companions are a more comfortable leap for the Japanese. The oldest and most successful social robot in Japan, "Paro", resembles an ordinary plush toy and can help treat depression, stress, anxiety and mood swings. Following 1998, several Paro robots were exported overseas and put into service globally, reducing stress among children in ICUs, treating American veterans suffering from Post-Traumatic Stress Disorder (PTSD), and assisting dementia patients.
Post-pandemic, Japanese people experiencing loneliness and isolation have been turning to social robots for comfort and emotional healing. Likewise, at a cafe in Japan, proud owners of the AI-driven robot dog "Aibo" have watched it paw its way into people's minds and hearts. Robots are even replacing the conventional class guinea pig or bunny at Moriyama Kindergarten in the central Japanese city of Nagoya, where teachers say the bots reduce stress and teach children to be more humane.
In the security and defence domain, the unique abilities of robodogs allow them to be used in hazardous and challenging circumstances. They can navigate rugged terrain to reach stranded individuals after natural catastrophes, and they can help with search and rescue, surveillance, and other situations that would be dangerous for humans. Researchers are still fine-tuning the underlying algorithms, developing the technology on affordable off-the-shelf robots that are already functional. Robodogs have further been used for surveillance in hostage crises, for defusing bombs, and even, in extreme cases, for using lethal force to stop individuals from attacking others. Similarly, the Australian military is reportedly testing an AI breakthrough that allows soldiers to control robodogs solely with their minds. Cities in Florida such as St. Petersburg also appear set to field police robodogs, and the U.S. Department of Homeland Security is seeking plans to deploy robot dogs at the border. The New York City Police Department (NYPD) likewise intends to once again deploy four-legged robodogs to deal with high-risk situations like hostage negotiations; it has previously employed similar robodogs for high-octane duties, examining unsafe environments to which human officers should not be exposed. The U.S. Marine Corps is additionally experimenting with a new breed of robotic canine that can be helpful on the battlefield, enhancing the safety and mobility of soldiers and aiding in other tasks. The Unitree Go1 robot dog used by the Marines, nicknamed GOAT (Grounded Open-Air Transport), is a four-legged machine with a built-in AI system that can be equipped to carry an infantry anti-armour rocket launcher on its back. The GOAT robot dog is designed to help the Marines move heavy loads, analyse terrain, and deliver fire support in remote and dangerous places.
On the other hand, robodogs pose ethical and moral dilemmas about who is accountable for their actions and how to ensure their adherence to the laws of warfare. They also raise security and privacy questions about how to safeguard the data the robotic dogs collect and how to prevent hacking or sabotage.
Conclusion
Teaching robots to traverse the world has conventionally been an extravagant challenge. Though their manufacture is increasing worldwide, a robodog is still just a machine and can never replace the feeling of owning a real dog. Designers state that intelligent social robots will never replace humans, though robots offer the promise of social comfort without social contact; they may also be unable to manage complicated or unforeseen circumstances that require instinct or human decision-making. Nevertheless, owning a robodog is expected to become ever more common and cost-effective in the coming decades as the technology advances and new algorithms are tested and implemented.
References:
- https://home.cern/news/news/engineering/introducing-cerns-robodog
- https://news.stanford.edu/2023/10/04/ai-approach-yields-athletically-intelligent-robotic-dog/
- https://nypost.com/2023/02/17/combat-ai-robodogs-follow-telepathic-commands-from-soldiers/
- https://www.popsci.com/technology/parkour-algorithm-robodog/
- https://ggba.swiss/en/cern-unveils-its-innovative-robodog-for-radiation-detection/
- https://www.themarshallproject.org/2022/12/10/san-francisco-killer-robots-policing-debate
- https://www.cbsnews.com/news/robo-dogs-therapy-bots-artificial-intelligence/
- https://news.stanford.edu/report/2023/08/01/robo-dogs-unleash-fun-joy-stanford-hospital/
- https://www.pcmag.com/news/lenovo-creates-six-legged-daystar-gs-robot
- https://www.foxnews.com/tech/new-breed-military-ai-robo-dogs-could-marines-secret-weapon
- https://www.wptv.com/news/national/new-york-police-will-use-four-legged-robodogs-again
- https://www.dailystar.co.uk/news/us-news/creepy-robodogs-controlled-soldiers-minds-29638615
- https://www.newarab.com/news/robodogs-part-israels-army-robots-gaza-war
- https://us.aibo.com/