New Criminal Laws to be Effective From 1st July 2024
Mr. Neeraj Soni
Sr. Researcher - Policy & Advocacy, CyberPeace
PUBLISHED ON
Feb 29, 2024
10
Introduction
The government has announced that the new criminal laws will come into force on 1st July 2024. The Union Government notified that three recently enacted criminal laws, viz. Bhartiya Nyaya Sanhita 2023, Bharatiya Nagarik Suraksha Sanhita 2023, and Bharatiya Sakshya Adhiniyam 2023 will be effective from 1st July 2024. The Indian Penal Code 1860, Code of Criminal Procedure 1973, and Indian Evidence Act 1872 have been replaced by these new criminal laws.
On 23 February 2024, the Ministry of Home Affairs Announced the Effective Date of new criminal laws as follows:
Bharatiya Nyaya Sanhita, 2023Effective from 1-7-2024, except Section 106(2).
Bharatiya Sakshya Adhiniyam, 2023Effective from 1-7-2024.
Bharatiya Nagarik Suraksha Sanhita, 2023 The provisions will come into force on 1-7-2024 except the provisions of the entry relating to section 106(2) of the Bharatiya Nyaya Sanhita, 2023, in the First Schedule.
Section 106(2) Will Not Be Enforced
Truckers protested against this provision, which provides 10 years imprisonment and fines for those who cause death by rash and negligent driving of a vehicle not amounting to culpable homicide, and escape without reporting it to a police officer. As of now, the government has promised truckers and transporters that subsection 2 of Section 106 of Bharatiya Nyay Sanhita (BNS) will not come into force. This subsection deals with fatal hit-and-run cases and prescribes higher penalties for not informing authorities immediately after an accident.
Section 106(2) of Bharatiya Nyaya Sanhita, 2023 read as follows;
106. Causing death by negligence.—
(2) Whoever causes death of any person by rash and negligent driving of vehicle not amounting to culpable homicide, and escapes without reporting it to a police officer or a Magistrate soon after the incident, shall be punished with imprisonment of either description of aterm which may extend to ten years, and shall also be liable to fine.
BHARATIYA SAKSHYA ADHINIYAM, 2023
The Bhartiya Sakshya Adhiniyam 2023 will replace the Indian Evidence Act 1872. The Act has undergone significant modification to maintain its fundamental principles for fair legal proceedings and adapt to technological advancements and changes in societal norms. This Act recognises electronic records as primary evidence under Section 57. It also allows the electronic presentation of oral evidence, enabling remote testimony and ensuring that electronic records will have the same legal effect as paper records.
Bharatiya Nagarik Suraksha Sanhita, 2023
The Bharatiya Nagarik Suraksha Sanhita, 2023 replaces the 1973 Code of Criminal Procedure, introducing certain modifications. This Act, under section 176, requires forensic investigation for crimes punished with seven years' imprisonment or more. Section 530 of BNSS, 2023 is a newly inserted provision which envisages the use of electronic communication audio-video electronic means for trials, inquiries, proceedings, service and issuance of summons. Electronic mode is permitted for all trials, inquiries, and proceedings under section 173 of this Act. The concept of Zero FIR is also introduced under section 173(1) and mandates police stations to register the FIR, irrespective of jurisdiction.
Conclusion
India's new criminal laws are set to take effect on 1st July 2024. These laws modernise the country's legal framework, replacing outdated statutes and incorporating technological advancements. The concerns from stakeholders led to the withholding of enforcement of Section 106(2) of Bharatiya Nyaya Sanhita 2023. The new criminal laws aim to address contemporary society's complexities while upholding justice and fairness.
Welcome to the second edition of our blog on Digital forensics series. In our previous blog we discussed what digital forensics is, the process followed by the tools, and the subsequent challenges faced in the field. Further, we looked at how the future of Digital Forensics will hold in the current scenario. Today, we will explore differences between 3 particular similar sounding terms that vary significantly in functionality when implemented: Copying, Cloning and Imaging.
In Digital Forensics, the preservation and analysis of electronic evidence are important for investigations and legal proceedings. Replication of the data and devices is one of the fundamental tasks in this domain, without compromising the integrity of the original evidence.
Three primary techniques -- copying, cloning, and imaging -- are used for this purpose. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will talk about the importance of each technique, their applications and why imaging is considered the best for forensic investigations.
Copying
Copying means duplicating data or files from one location to another. When one does copying, it implies that one is using standard copy commands. However, when dealing with evidence, it might be hard to use copy only. It is because the standard copy can alter the metadata and change the hidden or deleted data .
The characteristics of copying include:
Speed: copying is simpler and faster,compared to cloning or imaging.
Risk: The risk involved in copying is that the metadata might be altered and all the data might be captured.
Cloning
It is the process where the transfer of the entire contents of a hard drive or a storage device is done on another storage device. This process is known as cloning . This way, the cloning process captures both the active data and the unallocated space and hidden partitions, thus containing the whole structure of the original device. Cloning is generally used at the sector level of the device. Clones can be used as the working copy of a device .
Characteristics of cloning:
bit-for-bit replication: cloning keeps the exact content and the whole structure of the original device.
Use cases: cloning is used when it is needed to keep the original device intact for further examination or a legal affair.
Time consuming: Cloning is usually longer in comparison to simple copying since it involves the whole detailed replication. Though it depends on various factors like the size of the storage device, the speed of the devices involved, and the method of cloning.
Imaging:
It is the process of creating a forensic image of a storage device. A forensic image is a replica copy of every bit of data that was on the source device, this including the allocated, unallocated, and the available slack space .
The image is then used for analysis and investigation, and the original evidence is left untouched. Images can’t be used as the working copies of a device. Unlike cloning, which produces working copies, forensic images are typically used for analysis and investigation purposes and are not intended for regular use as working copies.
Characteristics of Imaging:
Integrity: Imaging ensures the integrity and authenticity of the evidence produced
Flexibility: Forensic image replicas can be mounted as a virtual drive to create image-specific mode for analysis of data without affecting the original evidence .
Metadata: Imaging captures metadata associated with the data, thus promoting forensic analysis.
Key Differences
Purpose: Copying is for everyday use but not good for forensic investigations requiring data integrity. Cloning and imaging are made for forensic preservation.
Depth of Replication: Cloning and imaging captures the entire storage device including hidden, unallocated, and deleted data whereas copying may miss crucial forensic data.
Data Integrity: Imaging and cloning keep the integrity of the original evidence thus making them suitable for legal and forensic use. Which is a critical aspect of forensic investigations.
Forensic Soundness: Imaging is considered the best in digital forensics due to its comprehensive and non-invasive nature.
Cloning is generally from one hard disk to another, where as imaging creates a compressed file that contains a snapshot of the entire hard drive or a specific partitions
Conclusion
Therefore, copying, cloning, and imaging all deal with duplication of data or storage devices with significant variations, especially in digital forensic. However, for forensic investigations, imaging is the most selected approach due to the correct preservation of the evidence state for any analysis or legal use . Therefore, it is essential for forensic investigators to understand these rigorous differences to avail of real and uncontaminated digital evidence for their investigation and legal argument.
A viral post on X (formerly twitter) shared with misleading captions about Gautam Adani being arrested in public for fraud, bribery and corruption. The charges accuse him, his nephew Sagar Adani and 6 others of his group allegedly defrauding American investors and orchestrating a bribery scheme to secure a multi-billion-dollar solar energy project awarded by the Indian government. Always verify claims before sharing posts/photos as this came out to be AI-generated.
Claim:
An image circulating of public arrest after a US court accused Gautam Adani and executives of bribery.
Fact Check:
There are multiple anomalies as we can see in the picture attached below, (highlighted in red circle) the police officer grabbing Adani’s arm has six fingers. Adani’s other hand is completely absent. The left eye of an officer (marked in blue) is inconsistent with the right. The faces of officers (marked in yellow and green circles) appear distorted, and another officer (shown in pink circle) appears to have a fully covered face. With all this evidence the picture is too distorted for an image to be clicked by a camera.
A thorough examination utilizing AI detection software concluded that the image was synthetically produced.
Conclusion:
A viral image circulating of the public arrest of Gautam Adani after a US court accused of bribery. After analysing the image, it is proved to be an AI-Generated image and there is no authentic information in any news articles. Such misinformation spreads fast and can confuse and harm public perception. Always verify the image by checking for visual inconsistency and using trusted sources to confirm authenticity.
Claim: Gautam Adani arrested in public by law enforcement agencies
Claimed On: Instagram and X (Formerly Known As Twitter)
Agentic AI systems are autonomous systems that can plan, make decisions, and take actions by interacting with external tools and environments. But they shift the nature of risk by blurring the lines among input, decision, and execution. A conventional model generates an output and stops. An agent takes input, makes plans, invokes tools, updates its state and repeats the cycle. This creates a system where decisions are continuously revised through interaction with external tools and environments, rather than being fixed at the point of input.
This means the attack surface expands in size and becomes more dynamic. Instead of remaining confined to components as in traditional computational systems, they spread in layers and can continue to grow through time. To understand this shift, the system can be analysed through functional layers such as inputs, memory, reasoning, and execution, while recognising that risk does not remain isolated within these layers but emerges through their interaction.
Agentic AI Attack Surface
A layered view of how risks emerge across input, memory, reasoning, execution, and system integration, including feedback loops and cross-system dependencies that amplify vulnerabilities.
Input Layer: Where Untrusted Data Becomes Control
The entry point of an agent is no longer one prompt. The documents, APIs, files, system logs and the outputs of other agents can now be considered input. This diversity is significant due to the fact that every source of input carries its own trust assumptions, and in the majority of cases, they are weak.
The most obvious threat is prompt injection, where inputs are treated as instructions rather than data. Since inputs are treated as instructions, a virus, a malicious webpage, or a document can contain instructions that override system goals without necessarily being detected as something harmful.
Indirect prompt injection extends this risk beyond direct user interaction. Instead of targeting the interface, attackers compromise the retrieval process by embedding malicious instructions within external data sources. When the agent retrieves and processes the data, it treats the embedded content as legitimate input. As a result, the attack is executed through normal reasoning processes, allowing the system to act on untrusted data without recognising the manipulation.
Data poisoning also occurs at runtime. In contrast to classical poisoning (where training data is manipulated), runtime poisoning distorts the agent’s perception of its environment as it runs. This can change decisions without causing apparent failures.
Obfuscation introduces another indirect attacker vector. Encoded instructions or complicated forms may bypass human review but remain readable to the model. This creates asymmetry whereby the system knows more about the attack than those operating it. Once compromised at this layer, the agent implements compromised instructions which affect downstream operations.
Context and Memory: Persistence of Influence
Agentic systems depend on memory to operate efficiently. They often retain context across sessions and frequently store information between sessions.
This introduces a different type of risk: persistence. Through memory poisoning, attackers can insert false or adversarial information into sorted context, which then influences future decisions. Unlike prompt injection, which is often limited to a single interaction, this effect carries forward. Over time, the agent begins to operate on a distorted internal state, shaping decisions in ways that may not be immediately visible.
Another issue is cross-session leakage. Information in a particular context may be replayed in a different context when memory is being shared or there is insufficient memory separation. This is specifically dangerous in those systems that combine retrieval and long-term storage. The context management in itself becomes a weakness. Agents are required to make decisions on what to retain and what to discard. This is susceptible to attackers who can flood the context or manipulate what is still visible and indirectly affect reasoning.
The underlying problem is structural. Memory turns data into a state. Once state is corrupted, the system cannot easily distinguish valid knowledge from adversarial influence.
The issue is structural. Memory converts temporary data into a persistent state. Once this state is weakened, the system cannot reliably separate valid information from adversarial influence, making recovery significantly more difficult.
Reasoning and Planning: Manipulating Intent Without Breaking Logic
The reasoning layer is where agentic AI stands apart from traditional systems. The model no longer reacts to inputs alone. It actively breaks down objectives, analyses alternatives, and ranks actions.
At the reasoning stage, the nature of risk shifts. The concern is no longer limited to injecting instructions, but to influencing how decisions are made. One example is goal manipulation, where the agent subtly reinterprets its objective and produces outcomes that are technically correct but strategically harmful. Reasoning hijacking operates within intermediate steps, altering how constraints are evaluated or how trade-offs are prioritised. The system may remain internally consistent, which makes such deviations difficult to detect.
Tool selection becomes a critical control point. Agents decide which tools to use and when, so influencing these choices can redirect execution without directly accessing the tools themselves. Hallucinations also take on a different role here. In static systems, they remain errors. In agentic systems, they can trigger actions. A perceived need or incorrect judgement can translate into real-world consequences.
This layer introduces probabilistic failure. The system is not fully weakened, but it is nudged towards decisions that appear reasonable yet are incorrect. The risk lies in how those decisions are justified.
Tool and Execution: When Decisions Gain Reach
Once an agent begins interacting with tools, its behaviour extends beyond the model into external systems. APIs, databases, and services become part of the execution path.
One key risk is the use of unauthorised tools. When agents operate with broad permissions, any manipulation of the upstream can be converted into real-world actions. This makes access control a central security concern. Command injection also takes a different form here. The agent generates commands based on its reasoning, so if that reasoning is compromised, the resulting actions may still appear valid despite being harmful.
External tool outputs introduce another risk. If these systems return corrupted or misleading data, the agent may accept it without verification and incorporate it into its decisions. It is also becoming increasingly reliant on third-part tools and plugins adds to this exposure. If these components are compromised, they can affect behaviour without directly attacking the core system, creating a supply-side risk.
At this stage, the agent effectively operates as an insider. It holds legitimate credentials and interacts with systems in expected ways, making misuse harder to identify.
Application and Integration: System-Level Exposure
Agentic systems rarely operate in isolation. They are embedded in larger environments, interacting with identity systems, business logic, and operational workflows.
Access control becomes a major vulnerability. Agents tend to operate across multiple systems with various permission models, creating irregularities that can be exploited. Risks also arise from identity and delegation. In case an agent is operating on behalf of a user, then any vulnerabilities in authentication or session management can allow attackers to assume that authority.
Workflow execution amplifies these risks. Agents can initiate multi-step processes such as transactions, updates, or approvals. Manipulating a single step can change the result of the entire workflow. As integrations increase, so do the number of interaction points, making cumulative risk harder to track.
At this layer, failures are not isolated. They propagate into business operations, making consequences harder to contain.
Output and Action: Where Failures Become Visible
The output layer is where failures become visible, though they rarely originate there.
Data leakage has been a key concern. Agents may disclose information they are allowed to access, especially when tasks boundaries are not clearly defined. Misinformation and unsafe outputs are also important, particularly when outputs directly influence actions or decisions.
Generated code and commands introduce execution risk. If outputs are used without validation, errors or manipulations can have system-level effects. The shift towards autonomous action increases this risk, as small upstream deviations can lead to significant consequences without human intervention. This layer reflects symptoms rather than root causes. Addressing it alone does not reduce the underlying risk.
Beyond Layers: The Missing Dimension
A layered view helps, but it does not capture the full picture. Agentic systems are defined by continuous interaction across layers.
The key missing dimension is the runtime loop. Inputs shape reasoning, reasoning drives action, and actions feed back into both reasoning and memory. These cycles create feedback loops, where small manipulations may escalate over time. This also reduces observability. With multiple interacting components, it becomes difficult to trace cause and effect or identify where failures originate.
Supply chain dependencies add another layer of risk. Models, datasets, APIs, and plugins each introduce their own points of failure. A compromise at any of these points can propagate across the system. The attack surface also includes governance. Weak supervision, unclear responsibility, or excessive autonomy increase overall risk. Human control is not external to the system; it is part of its security.
Conclusion: Structuring the Attack Surface
Agentic AI expands the attack surface beyond traditional systems. It is both recursive and stateful. Risk does not just accumulate across layers; it moves and changes as the system operates.
Any useful representation must go beyond a linear stack. It should capture feedback loops, persistent state, and cross-layer dependencies that characterise the way these systems actually behave. The system is not a pipeline but a cycle. That is where both its capability and its risk emerge.
Become a part of our vision to make the digital world safe for all!
Numerous avenues exist for individuals to unite with us and our collaborators in fostering global cyber security
Awareness
Stay Informed: Elevate Your Awareness with Our Latest Events and News Articles Promoting Cyber Peace and Security.
Your institution or organization can partner with us in any one of our initiatives or policy research activities and complement the region-specific resources and talent we need.