#FactCheck - Viral Photo of Dilapidated Bridge Misattributed to Kerala, Originally from Bangladesh
Executive Summary:
A viral photo on social media claims to show a ruined bridge in Kerala, India. A fact check, however, shows that the bridge is in Amtali, Barguna district, Bangladesh. A reverse image search of the picture led to a Bengali news article detailing the bridge's critical condition. The bridge was built between 2002 and 2006 over Jugia Khal in Arpangashia Union. It has never been repaired, suffers recurrent accidents, and is at risk of collapse, which would disrupt local connectivity. Thus, the social media claims are false and misleading.

Claims:
Social media users are sharing a photo claiming that it shows a ruined bridge in Kerala, India.


Fact Check:
On receiving the posts, we ran a reverse image search, which led to a Bengali news website named Manavjamin carrying an article titled “19 dangerous bridges in Amtali, lakhs of people in fear”. The picture on this website matches the viral image. The article states that the bridge is located in Amtali sub-district of Barguna district, Bangladesh.

Taking a cue from this, we then searched for the bridge in that region and found a matching bridge at the same location in Amtali, Bangladesh.
According to the article, the 40-meter bridge over Jugia Khal in Arpangashia Union, Amtali, was built between 2002 and 2006 and has never been repaired. It is in critical condition, causing frequent accidents and risking collapse. If the bridge collapses, it will cut off communication between multiple villages and the upazila town. Residents have made only temporary repairs.
Hence, the claims made by social media users are false and misleading.
Conclusion:
In conclusion, the viral photo claiming to show a ruined bridge in Kerala is actually from Amtali, Barguna district, Bangladesh. The bridge is in a critical state, with frequent accidents and the risk of collapse threatening local connectivity. Therefore, the claims made by social media users are false and misleading.
- Claim: A viral image shows a ruined bridge in Kerala, India.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Introduction: The Internet’s Foundational Ideal of Openness
The Internet was built as a decentralised network to foster open communication and global collaboration. Unlike traditional media or state infrastructure, no single government, company, or institution controls the Internet. Instead, it has historically been governed by consensus among the communities that built it, such as universities, independent researchers, and engineers. This bottom-up, cooperative approach was the foundation of Internet governance and ensured that the Internet remained open, interoperable, and accessible to all. As the Internet began to influence every aspect of life, including commerce, culture, education, and politics, it required a more organised governance model. This drove the rise of the multi-stakeholder internet governance model in the early 2000s.
The Rise of Multistakeholder Internet Governance
Representatives from governments, civil society, technical experts, and the private sector congregated at the United Nations World Summit on Information Society (WSIS), and adopted the Tunis Agenda for the Information Society. Per this Agenda, internet governance was defined as “… the development and application by governments, the private sector, and civil society in their respective roles of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” Internet issues are cross-cutting across technical, political, economic, and social domains, and no one actor can manage them alone. Thus, stakeholders with varying interests are meant to come together to give direction to issues in the digital environment, like data privacy, child safety, cybersecurity, freedom of expression, and more, while upholding human rights.
Internet Governance in Practice: A History of Power Shifts
While the idea of democratising Internet governance is a noble one, the Tunis Agenda has been criticised for reflecting geopolitical asymmetries and relegating the roles of technical communities and civil society to the sidelines. Throughout the Internet's history, certain players have wielded outsized power in shaping how it is managed. Accordingly, internet governance can be said to have undergone three broad phases.
In the first phase, the Internet was managed primarily by technical experts in universities and private companies, which contributed to building and scaling it up. The standards and protocols set during this phase are in use today and make the Internet function the way it does. This was the time when the Internet was a transformative invention and optimistically hailed as the harbinger of a utopian society, especially in the USA, where it was invented.
In the second phase, the ideal of multistakeholderism was promoted, in which all those who benefit from the Internet work together to create processes that will govern it democratically. This model also aims to reduce the Internet’s vulnerability to unilateral decision-making, an ideal that has been under threat because this phase has seen the growth of Big Tech. What started as platforms enabling access to information, free speech, and creativity has turned into a breeding ground for misinformation, hate speech, cybercrime, Child Sexual Abuse Material (CSAM), and privacy concerns. The rise of generative AI only compounds these challenges. Tech giants like Google, Meta, X (formerly Twitter), OpenAI, Microsoft, Apple, etc. have amassed vast financial capital, technological monopoly, and user datasets. This gives them unprecedented influence not only over communications but also culture, society, and technology governance.
The anxieties surrounding Big Tech have fed into the third phase, with increasing calls for government regulation and digital nationalism. Governments worldwide are scrambling to regulate AI, data privacy, and cybersecurity, often through processes that lack transparency. An example is India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which was passed without parliamentary debate. Governments are also pressuring platforms to take down content through opaque takedown orders. Laws like the UK’s Investigatory Powers Act, 2016, are criticised for giving the government the power to indirectly mandate encryption backdoors, compromising the strength of end-to-end encryption systems. Further, the internet itself is fragmenting into the “splinternet” amid rising geopolitical tensions, in the form of Russia’s “sovereign internet” or through China’s Great Firewall.
Conclusion
While multistakeholderism is an ideal, Internet governance is a playground of contesting power relations in practice. As governments assert digital sovereignty and Big Tech consolidates influence, the space for meaningful participation of other stakeholders has been negligible. Consultation processes have often been symbolic. The principles of openness, inclusivity, and networked decision-making are once again at risk of being sidelined in favour of nationalism or profit. The promise of a decentralised, rights-respecting, and interoperable internet will only be fulfilled if we recommit to the spirit of Multi-Stakeholder Internet Governance, not just its structure. Efficient internet governance requires that the multiple stakeholders be empowered to carry out their roles, not just talk about them.

Introduction
The digital realm is evolving at breakneck speed, and this dynamic growth has left several operational and regulatory lacunae in the fabric of cyberspace, which cybercriminals exploit for their ulterior motives. One threat that emerged rapidly in 2024 is proxyjacking, in which cybercriminals exploit vulnerable systems to sell their bandwidth to third-party proxy services. It poses a significant risk to organisations and individual users alike.
Proxyjacking is a kind of cyberattack that leverages legitimate bandwidth-sharing services such as Peer2Profit and HoneyGain. These platforms let users monetise their surplus internet bandwidth by sharing it with others: participants install the bandwidth-sharing software, which adds their system to a proxy network through which other users can route their traffic. The setup is intended to enhance privacy and provide access to geo-locked content, and the model itself is harmless, but it becomes an avenue for abuse when such services are exploited without the device owner's consent.
The Modus Operandi
Cybercriminals hijack vulnerable systems and sell the bandwidth of the infected devices. They typically gain entry by establishing Secure Shell (SSH) connections to poorly secured servers; researchers have observed such campaigns using Cowrie honeypots, which are engineered to emulate UNIX systems and log attacker behaviour. Once inside a system, attackers use legitimate tools such as public Docker images to run proxy monetisation services. Because these tools are genuine software in and of themselves, anti-malware products struggle to detect them, and endpoint detection and response (EDR) tools face the same difficulty.
The Major Challenges
Limitations of Current Safeguards – Current malware-detection software cannot distinguish between malicious and genuine use of bandwidth-sharing services, because the activity is not inherently malicious in nature.
A Bigger Threat Than Cryptojacking – Proxyjacking poses a bigger threat than cryptojacking, in which compromised systems are used to mine cryptocurrency. Cryptojacking leaves CPU and GPU usage footprints, whereas proxyjacking consumes minimal system resources, giving perpetrators a far higher degree of stealth and making the attack much harder to identify.
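To make the contrast concrete, here is a minimal, hypothetical Python sketch (assuming the third-party psutil library is installed) that samples the two signals an investigator would compare: CPU load, where cryptojacking shows up, and outbound network volume, where proxyjacking shows up.

```python
import psutil  # assumed installed: pip install psutil

def snapshot(window=5):
    """Sample average CPU load and outbound bytes over a short window.
    Cryptojacking tends to show up in the first number, proxyjacking
    only in the second -- which is why it is so much stealthier."""
    sent_before = psutil.net_io_counters().bytes_sent
    cpu = psutil.cpu_percent(interval=window)  # blocks for `window` seconds
    sent_after = psutil.net_io_counters().bytes_sent
    return cpu, sent_after - sent_before

cpu, sent = snapshot()
print(f"CPU: {cpu:.1f}% | outbound: {sent} bytes in 5 s")
# High CPU with little traffic suggests mining; normal CPU with heavy,
# sustained outbound traffic is more consistent with proxyjacking.
```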
Role of Technology in the Fight Against Proxyjacking
Advanced Safety Measures – Implementing advanced safety measures is crucial in combating proxyjacking. Network monitoring tools can help detect the unusual traffic patterns indicative of proxyjacking (a minimal sketch of such a check follows the next paragraph). Key-based authentication for SSH significantly reduces the risk of unauthorised access by ensuring that only trusted devices can establish connections, and Intrusion Detection and Intrusion Prevention Systems go a long way towards monitoring unusual outbound traffic.
Robust Verification Processes – Bandwidth-sharing services must adopt robust verification processes to ensure that only legitimate users share bandwidth. This could include stricter identity-verification methods and continuous monitoring of user activity to identify and block suspicious behaviour.
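As an illustration of the traffic-pattern monitoring described above, the following sketch flags hosts whose outbound volume deviates sharply from their historical baseline. The host addresses, byte counts, and z-score threshold are illustrative assumptions, not output from any real monitoring product.

```python
from statistics import mean, stdev

def flag_anomalous_hosts(baseline, current, z_threshold=3.0):
    """Flag hosts whose current outbound byte count deviates sharply
    from their historical baseline -- a crude stand-in for what
    commercial network-monitoring tools do at scale."""
    alerts = []
    for host, history in baseline.items():
        if len(history) < 2:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(history), stdev(history)
        observed = current.get(host, 0)
        # A sudden, sustained rise in outbound traffic with no matching
        # workload is one possible proxyjacking indicator.
        if sigma > 0 and (observed - mu) / sigma > z_threshold:
            alerts.append((host, observed, mu))
    return alerts

# Hypothetical per-host outbound bytes per hour.
baseline = {"10.0.0.5": [120_000, 110_000, 130_000, 125_000],
            "10.0.0.9": [80_000, 95_000, 90_000, 85_000]}
current = {"10.0.0.5": 118_000, "10.0.0.9": 2_400_000}  # .9 spikes

for host, observed, expected in flag_anomalous_hosts(baseline, current):
    print(f"ALERT: {host} sent {observed} bytes (baseline ~{expected:.0f})")
```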
Policy Recommendations
Verification for Bandwidth-Sharing Services – Mandatory verification standards should be enforced for bandwidth-sharing services, including stringent Know Your Customer (KYC) protocols to verify the identity of users. A strong regulatory body could ensure compliance with these standards and impose penalties, while mandatory transparency reports document each service's user base, verification processes, and incidents.
Robust SSH Security Protocols – Key-based authentication for SSH should be mandated across organisations to neutralise the risk of brute-force attacks, supported by mandatory security audits of SSH configurations to confirm that best practices are followed and vulnerabilities identified. Detailed logging of SSH attempts will streamline the identification and investigation of suspicious behaviour (a log-monitoring sketch appears after these recommendations).
Effective Anomaly Detection Systems – A standard, industry-wide anomaly detection system should be designed to monitor networks for inconsistencies in traffic patterns that indicate proxyjacking, with mandatory protocols for reporting incidents to a centralised authority. The system should incorporate machine learning to keep pace with evolving attack methodologies.
Framework for Incident Response – A national framework should set out guidelines for the investigation, response, and remediation steps organisations must follow. A centralised database for logging and tracking all proxyjacking incidents would allow real-time information sharing and help identify emerging trends and common attack vectors.
Whistleblower Incentives – Whistleblower protection laws will ensure the safety of individuals who report proxyjacking activity, while monetary rewards provide an extra incentive to join whistleblowing programmes. Secure communication channels that guarantee full anonymity would offer whistleblowers further protection.
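The log-monitoring sketch promised above: a short Python script that counts failed SSH logins per source IP, the brute-force pattern that typically precedes a proxyjacking compromise. It assumes standard OpenSSH auth-log lines and the usual Debian/Ubuntu log path; the threshold of ten failures is an arbitrary illustration.

```python
import re
from collections import Counter

# Matches standard OpenSSH failure lines, e.g.
# "Failed password for root from 203.0.113.7 port 52844 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def brute_force_candidates(log_lines, threshold=10):
    """Count failed SSH logins per source IP and report sources
    exceeding the threshold."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return [(ip, n) for ip, n in failures.most_common() if n >= threshold]

# Usage (path as on Debian/Ubuntu systems):
with open("/var/log/auth.log", errors="ignore") as f:
    for ip, count in brute_force_candidates(f):
        print(f"{ip}: {count} failed SSH logins -- consider blocking")
```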
Conclusion
Proxyjacking represents an insidious and complicated threat in cyberspace. By exploiting legitimate bandwidth-sharing services, cybercriminals can profit while remaining largely anonymous. Addressing this issue requires a multifaceted approach, including advanced anomaly detection systems, effective verification processes, and comprehensive incident response frameworks. These measures, coupled with strong cyber awareness among netizens, will help ensure a healthy and robust cyberspace.

Introduction:
Digital Forensics is the process of collecting, preserving, identifying, analyzing, and presenting digital evidence in a way that makes it legally admissible.
It is like detective work in the digital realm, where investigators use specialized methods to find deleted files and reveal destroyed messages.
Digital Forensics matters because, as technology advances and digital devices proliferate, its role in preserving evidence and protecting our data from cybercrime becomes more and more crucial.
Digital Forensics is used in various situations such as:
- Criminal Investigations: Digital Forensics enables investigators to trace cyber threat actors, identify victims, and gather the evidence needed to prosecute criminals.
- Legal issues: Digital Forensics can aid in legal matters involving intellectual property infringement, data breaches, and similar disputes.
Types of Digital Data in Digital Forensics:
1. Persistent (non-volatile) data:
- Remains intact when the computer is turned off.
- Examples: hard disks, flash drives.
2. Volatile data:
- Lost when the computer is turned off.
- Examples: temporary files, unsaved open files.
The Digital Forensics Process
The process is as follows:

- Evidence Acquisition: This step involves making an exact copy (forensic image) of storage devices such as hard drives, SSDs, or mobile devices. The goal is to preserve the original data without changing it.
- Data Recovery: After acquiring the forensic image, analysts use tools to recover deleted, hidden, or encrypted data inside the device.
- Timeline Analysis: Analysts use timestamp information from files and system logs to reconstruct the timeline of activities on a device (a minimal sketch follows this list). This helps in understanding how an incident unfolded and who was involved in it.
- Malware Analysis: In cases involving security breaches, analysts examine malware samples to understand their behavior, impact, and origins. Various reverse-engineering techniques are used to analyze the malicious code.
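The timeline sketch referenced above: a minimal Python example that builds a chronological activity record from filesystem timestamps alone. Real timeline tools also fold in system logs, registry hives, and browser histories; the path /mnt/evidence is a hypothetical mount point for a forensic image.

```python
import datetime
import os

def file_timeline(root):
    """Collect (timestamp, event, path) tuples from file metadata and
    sort them chronologically -- a bare-bones timeline analysis."""
    events = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable entry; skip rather than abort
            for label, ts in (("modified", st.st_mtime),
                              ("accessed", st.st_atime),
                              ("meta changed", st.st_ctime)):
                events.append((ts, label, path))
    events.sort()
    return events

# Print the 20 most recent events on the (hypothetical) evidence mount.
for ts, label, path in file_timeline("/mnt/evidence")[-20:]:
    when = datetime.datetime.fromtimestamp(ts).isoformat(" ")
    print(f"{when}  {label:>12}  {path}")
```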
Types of tools:
- Faraday Bags: Faraday bags are generally the first step in digital evidence capture. Made of conductive materials, they shield electronic devices from external signals such as WiFi, Bluetooth, and mobile cellular networks, which protects the digital evidence from remote tampering.
- Data recovery: This type of software is generally used to recover deleted files and their associated data. Ex. Magnet Forensics, AccessData, X-Ways
- Disk imaging and analysis: This type of software is generally used to replicate data storage devices and then perform further analysis on the copy. Ex. FTK Imager, Autopsy, and Sleuth Kit
- File carving tools: These are generally used to extract files embedded in a forensic image based on their content signatures (see the sketch after this list). Ex. Foremost, Binwalk, Scalpel
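To show what carvers like Foremost and Scalpel automate, here is a naive header/footer carving sketch for JPEG files. The start and end markers are the real JPEG signatures, but the image filename disk_image.dd is a hypothetical example, and reading the whole image into memory is a simplification real tools avoid.

```python
JPEG_HEADER = b"\xff\xd8\xff"  # JPEG start-of-image marker
JPEG_FOOTER = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image_path, out_prefix="carved", max_size=10_000_000):
    """Scan a raw disk image for JPEG start markers and copy bytes up
    to the next end marker. Production carvers stream the image and
    support many more file signatures."""
    with open(image_path, "rb") as f:
        data = f.read()  # simplification: whole image in memory
    count = 0
    pos = data.find(JPEG_HEADER)
    while pos != -1:
        end = data.find(JPEG_FOOTER, pos, pos + max_size)
        if end != -1:
            with open(f"{out_prefix}_{count}.jpg", "wb") as out:
                out.write(data[pos:end + 2])  # include the footer bytes
            count += 1
        pos = data.find(JPEG_HEADER, pos + 1)
    return count

print(carve_jpegs("disk_image.dd"), "candidate JPEGs carved")
```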
Some common tools:
- EnCase: It is a tool for acquiring, analyzing, and reporting digital evidence.
- Autopsy: It is an open-source platform generally used for analyzing hard drives and smartphones.
- Volatility: It is a framework generally used in memory forensics to analyze volatile memory dumps and extract information.
- Sleuth Kit: It is a package of CLI tools for investigating disk images and their associated file systems; a short sketch of its Python bindings follows this list.
- Cellebrite UFED: It is a tool generally used for mobile forensics.
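As promised in the Sleuth Kit entry, a brief sketch using its pytsk3 Python bindings to list the root directory of a raw image. This assumes pytsk3 is installed and that evidence.dd is a hypothetical raw image containing a single file system; images with partition tables need an extra volume-system step.

```python
import pytsk3  # Python bindings for The Sleuth Kit (pip install pytsk3)

img = pytsk3.Img_Info("evidence.dd")  # hypothetical raw disk image
fs = pytsk3.FS_Info(img)              # open the file system it contains

for entry in fs.open_dir(path="/"):
    name = entry.info.name.name.decode(errors="replace")
    if name in (".", ".."):
        continue
    meta = entry.info.meta
    size = meta.size if meta is not None else 0
    # Deleted entries often remain listed with recoverable metadata,
    # which is exactly what makes file-system forensics possible.
    print(f"{name}\t{size} bytes")
```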
Challenges in the Field:
- Encryption: Encryption poses a major challenge, as encrypted data requires specialized techniques and tools for decryption.
- Anti-Forensic Techniques: Criminals often use anti-forensic methods to cover their tracks, making it challenging to recover digital evidence.
- Data Volume and Complexity: The large volume of digital data and the diversity of various devices create challenges in evidence collection and analysis.
The Future of Digital Forensics: A Perspective
With the growth of technology and the vast presence of digital data, the challenges and opportunities in Digital Forensics keep evolving. With the onset of new technology and the ever-growing reliance on cloud storage, mobile devices, and the Internet of Things (IoT), investigators will have to develop new strategies and be ready to adapt and learn as the tech world reshapes itself.
Conclusion:
Digital Forensics is an essential field for ensuring fairness in the digital era. By collecting, inspecting, and analyzing digital data, Digital Forensics investigators can support the lawful prosecution of criminals and the settlement of civil disputes. With technology progressing continuously, the discipline of Digital Forensics will only become more pivotal to investigations in the years to come.