#FactCheck: Viral Photo Shows Sun Ways Project, Incorrectly Linked to Indian Railways
Executive Summary:
Social media has been flooded by a viral post claiming that Indian Railways has begun installing solar panels directly on railway tracks across the country to generate renewable energy, and that India will thereby become the world's first country to undertake such a green initiative in its railway system. Our research involved extensive reverse image searching, keyword analysis, searches of government websites, and verification against global media. We found the claim to be completely false: the viral photos and information are incorrectly credited to India, and the images actually come from a pilot project by a Swiss start-up called Sun-Ways.

Claim:
According to a viral post on social media, Indian Railways has started an all-India initiative to install solar panels directly on railway tracks to generate renewable energy, limit power expenses, and make global history in environmentally sustainable rail operations.

Fact check:
We did a reverse image search of the viral image and were quickly directed to international media and technology blogs referencing a project named Sun-Ways, based in Switzerland. The images circulating on Indian social media are the exact ones from the Sun-Ways pilot project, in which a removable system of solar panels is installed between railway tracks in Switzerland to evaluate whether rail infrastructure can be used to generate energy.

We also thoroughly searched the official Indian Railways websites, Ministry of Railways press releases, and credible Indian media outlets. At no point did we find any mention of Indian Railways undertaking or planning a similar initiative to install solar panels on the tracks themselves.
Indian Railways has been engaged in green energy initiatives, installing solar panels on station rooftops, on railway land alongside tracks, and on train coach roofs. However, Indian Railways has never installed solar panels on railway tracks in India. We did find a report of a solar-panel-equipped train: the first solar-powered DEMU (diesel electric multiple unit) train, launched on 14 July 2025 from Safdarjung railway station in Delhi. The train runs from Sarai Rohilla in Delhi to Farukh Nagar in Haryana, with a total of 16 solar panels, each producing 300 Wp, fitted across six coaches.


We also found multiple reports supporting our findings from various media outlets, including Euronews, the World Economic Forum, the Institution of Mechanical Engineers, and NDTV.

Conclusion:
After extensive research conducted in several phases, including reverse image searches and checks of official and media sources, we can conclude that the claim that Indian Railways has installed solar panels on railway tracks is false. The concept and images originate from Sun-Ways, a Swiss company testing the idea in Switzerland, not India.
Indian Railways continues to use renewable energy in a number of forms but has not installed any solar panels on railway tracks. This case highlights how important it is to fact-check viral and otherwise unverified content.
- Claim: India’s solar track project will help Indian Railways run entirely on renewable energy.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Google Play has announced a new policy intended to ensure trust and transparency on the platform by providing a new framework for developer verification and app details. Under the policy, new developer accounts on Google Play must provide a D-U-N-S number to verify their business: when an organisation creates a new Play Console developer account, it will need to supply this nine-digit unique identifier, which will be used to verify the business. The policy aims to enhance user trust by having developers provide detailed information on their app's listing page, so users know who is behind the apps they install.
Verifying Developer Identity with D-U-N-S Numbers
To boost security, the new Google Play policy requires developers to provide a D-U-N-S number when creating a new Play Console developer account. The D-U-N-S number, assigned by Dun & Bradstreet, will be used to verify the business. Once a developer creates a new Play Console developer account with a D-U-N-S number, Google Play will verify the developer's details, after which the developer can start publishing apps. Through this step, Google Play aims to validate business information more authentically.
If your organisation does not have a D-U-N-S number, you can check for or request one for free on this website (https://www.dnb.com/duns-number/lookup.html). The request process can take up to 30 days. Developers are also required to keep this information up to date.
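Since the policy centres on a nine-digit identifier, a minimal sketch of a client-side format check can help illustrate what a D-U-N-S number looks like. This is purely illustrative: the function name and the hyphenated XX-XXX-XXXX example are our own assumptions, and only Dun & Bradstreet can verify that a number is actually registered.

```python
import re

def is_valid_duns_format(duns: str) -> bool:
    """Check only the *format* of a D-U-N-S number: nine digits,
    optionally written with hyphens (e.g. XX-XXX-XXXX). This does NOT
    verify that the number is registered with Dun & Bradstreet."""
    digits = duns.replace("-", "")
    return bool(re.fullmatch(r"\d{9}", digits))

print(is_valid_duns_format("15-048-3782"))  # True: nine digits with hyphens
print(is_valid_duns_format("12345"))        # False: too short
```

A real integration would treat this as a pre-flight check before submitting the number for verification.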
Building User Trust with Enhanced App Details
In addition to verifying developer identities more rigorously, Google Play also requires developers to provide sufficient app details to users. A new "App support" section on the app's store listing page will display the app's support email address, and developers can also include a website and phone number for support.
A new "About the developer" section will also be introduced to give users verified identity information, including the developer's name, address, and contact details, making users better informed about who develops the apps they use.
Key Highlights of the Google Play Policy
- Google Play introduced the policy to keep the platform safe by verifying developers' identities, which will also help reduce the spread of malware apps and help users make confident, informed decisions about the apps they download. The policy expands Google Play's developer verification requirements to strengthen the platform and build user trust. When you create a new Play Console developer account and choose "organisation" as your account type, you will now need to provide a D-U-N-S number.
- Users will get detailed information about the developers’ identities and contact information, building more transparency and encouraging responsible app development practices.
- This policy will enable the users to make informed choices about the apps they download.
- The new “App support” section will enable better communication between users and developers by displaying support email addresses, websites, and support phone numbers, streamlining the support process and improving user satisfaction.
Timeline and Implementation
The D-U-N-S number requirement will start rolling out on 31 August 2023 for all new Play Console developer accounts. The “About the developer” section will be visible to users as soon as a new app is published. From October 2023, existing developers will also be required to update and verify their accounts to comply with the new verification policy.
Conclusion
Google Play’s new policy aims to foster a more transparent app ecosystem by giving users more information about developers. Google Play seeks to establish a platform where users can confidently discover and download apps, making the service more reliable and trustworthy and enhancing the overall user experience.

About Global Commission on Internet Governance
The Global Commission on Internet Governance was established in January 2014 with the goal of formulating and advancing a strategic vision for the future of Internet governance. Over the two-year initiative, it carries out and supports independent research on Internet-related issues of international public policy. The initiative will culminate in an official commission report with specific policy recommendations for the future of Internet governance.
The Global Commission on Internet Governance has two goals. First, it will encourage a broad and inclusive public discussion on how Internet governance should develop globally. Second, through its comprehensive policy-oriented report and the subsequent promotion of that report, it will present its findings to key stakeholders at major Internet governance events.
The Internet: exploring the world wide web and the deep web
The Internet can be thought of as a vast networking infrastructure, or network of networks. By linking millions of computers worldwide, it creates a network that allows any two computers, provided they are both online, to speak with one another.
The Hypertext Transfer Protocol (HTTP), which the Web uses to transfer data, is only one of the languages spoken over the Internet. The Internet also carries email, which relies on the Simple Mail Transfer Protocol, as well as Usenet newsgroups, File Transfer Protocol transfers, and instant messaging, all of which run on the Internet but are not part of the Web. Thus, even though it is a sizable chunk, the Web is only a part of the Internet [1]. In summary, the deep Web is the portion of the Internet that is not visible to the naked eye: content from the World Wide Web that is not available on the surface Web and cannot be reached by standard search engines. This enormous subset of the Internet is estimated to be more than 500 times larger than the visible Web [1-2].
The Global Commission on Internet Governance will concentrate on four principal themes:
• Improving the legitimacy of governance, including standards and methods for regulation;
• Promoting economic innovation and expansion, including the development of infrastructure, competition laws, and vital Internet resources;
• Safeguarding online human rights, including establishing the idea of technological neutrality for rights to privacy, human rights, and freedom of expression;
• Preventing systemic risk, including setting standards for state behaviour, cooperating with law enforcement to combat cybercrime and prevent its spread, fostering confidence, and addressing disarmament-related issues.
Dark Web
The part of the deep Web that has been purposefully concealed and is unreachable with conventional Web browsers is known as the "dark Web." Dark Web sites are a platform for Internet users who value their anonymity, since they shield users from prying eyes and typically use encryption to thwart monitoring. A well-known source of dark Web content is the Tor network, and accessing it requires only a special Web browser known as the Tor browser (Tor 2014). The Onion Routing (Tor) project was first introduced by the US Naval Research Laboratory in 2002 as a technique for anonymous online communication. Much of the functionality offered by Tor is also available on I2P, another network. I2P, however, was designed to function as a network within the Internet, with traffic contained within its boundaries. Tor offers better anonymous access to the open Internet, while I2P provides a more dependable and stable "network within the network" [3].
Cybersecurity in the dark web
Cybercrime is not any different from crime in the real world; it is simply executed in a new medium: "'Virtual criminality' is basically the same as the terrestrial crime with which we are familiar. To be sure, some of the manifestations are new. But a great deal of crime committed with or against computers differs only in terms of the medium. While the technology of implementation, and particularly its efficiency, may be without precedent, the crime is fundamentally familiar. It is less a question of something completely different than a recognizable crime committed in a completely different way [4]."
Dark web monitoring
The dark Web in general, and the Tor network in particular, offer a secure platform for cybercriminals to support a vast amount of illegal activity: from anonymous marketplaces to secure means of communication to an untraceable and difficult-to-shut-down infrastructure for deploying malware and botnets.
As such, it has become increasingly important for security agencies to track and monitor the activities in the dark Web, focusing today on Tor networks, but possibly extending to other technologies in the near future. Due to its intricate webbing and design, monitoring the dark Web will continue to pose significant challenges. Efforts to address it should be focused on the areas discussed below [5].
Hidden service directory of dark web
A domain database used by both Tor and I2P is based on a distributed system called a distributed hash table (DHT). In a DHT, nodes cooperate to store and manage a portion of the database, which takes the shape of a key-value store. Owing to the distributed character of the domain-resolution process for hidden services, nodes inside the DHT can be positioned to track requests for a certain domain [6].
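The partitioning idea behind a DHT can be sketched in a few lines: each node owns a segment of a hash ring, and a key is stored on the node whose segment its hash falls into. This is a deliberately simplified toy, not Tor's or I2P's actual protocol; the class and node names are invented for illustration.

```python
import hashlib
from bisect import bisect_right

def ring_hash(value: str) -> int:
    # Map a string to a point on a fixed-size identifier ring.
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2**32)

class ToyDHT:
    """Minimal sketch of a distributed hash table: each node is
    responsible for the arc of the ring ending at its own identifier,
    so every node stores only a portion of the key-value database."""
    def __init__(self, node_ids):
        self.ring = sorted((ring_hash(n), n) for n in node_ids)
        self.store = {n: {} for n in node_ids}

    def _owner(self, key: str) -> str:
        points = [p for p, _ in self.ring]
        # First node clockwise from the key's hash, wrapping around.
        idx = bisect_right(points, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put("example.onion", "descriptor-123")
print(dht.get("example.onion"))  # descriptor-123
```

Because the node that resolves a lookup is determined by the key's hash, an observer controlling well-placed nodes can see which hidden-service descriptors are being requested, which is exactly the monitoring opportunity the paragraph above describes.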
Conclusion
The deep Web, and especially dark Web networks such as Tor, offer bad actors a practical means of transacting in goods anonymously and unlawfully.
The absence of discernible activity in non-traditional dark Web networks is not evidence of their nonexistence; in keeping with the guiding philosophy of the dark Web, such activities are simply harder to identify and monitor. Critical mass is one of the market's driving forces. Operators on the dark Web are unlikely to invest in a great degree of stealth unless the repercussions of being caught become severe enough. Certain websites may go down, trade for only a short window, and then reappear, all of which makes them harder to investigate.
References
- Ciancaglini, Vincenzo, Marco Balduzzi, Max Goncharov and Robert McArdle. 2013. “Deepweb and Cybercrime: It’s Not All About TOR.” Trend Micro Research Paper. October.
- Coughlin, Con. 2014. “How Social Media Is Helping Islamic State to Spread Its Poison.” The Telegraph, November 5.
- Dahl, Julia. 2014. “Identity Theft Ensnares Millions while the Law Plays Catch Up.” CBS News, July 14.
- Dean, Matt. 2014. “Digital Currencies Fueling Crime on the Dark Side of the Internet.” Fox Business, December 18.
- Falconer, Joel. 2012. “A Journey into the Dark Corners of the Deep Web.” The Next Web, October 8.
- Gehl, Robert W. 2014. “Power/Freedom on the Dark Web: A Digital Ethnography of the Dark Web Social Network.” New Media & Society, October 15. http://nms.sagepub.com/content/early/2014/10/16/1461444814554900.full#ref-38.

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals can rapidly amplify the spread of an infection across a huge population. That a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing, and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation on many topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation are sent by a small percentage of a given user base. The researchers collected 10 months of data, amounting to 2,397,388 tweets on Twitter (now X) flagged as having low credibility, sent by 448,103 users, and reviewed who was sending them. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] In other words, it does not take many influencers to sway the beliefs and opinions of large numbers of people, an effect the researchers attribute to the impact of superspreaders.
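The kind of concentration the study reports (a third of flagged tweets from 10 accounts, ~70% from 1,000) is a simple top-k share computation. The sketch below uses an invented toy distribution, not the study's actual data, to show how such a figure is derived from per-account post counts.

```python
def top_k_share(counts, k):
    """Fraction of all posts produced by the k most active accounts."""
    ranked = sorted(counts, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical toy distribution: a few hyperactive accounts, a long tail.
counts = [500, 300, 200] + [10] * 100
print(top_k_share(counts, 3))  # 0.5: three accounts produce half the posts
```

Applied to real per-account counts of flagged posts, the same calculation quantifies how heavily misinformation output is concentrated in a few superspreader accounts.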
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation about the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also cases where users unknowingly spread misinformation by forwarding content that was not shared by the original source but merely propagated by amplifiers drawing on other sources, websites, or YouTube videos. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users are not always the ones creating or deliberately popularising the misinformation, but because of their broad reach they are the ones who expose more people to it. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance can have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can act as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The problem intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and thereby shape people's financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can latch onto trending topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside closed online groups. There are reported incidents where bots were found to be the main sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it extremely difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
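One of the simplest signals used to spot automated accounts is posting rate: sustained output far beyond what a human plausibly produces. The sketch below is a single crude heuristic with an invented threshold, not a production bot detector; real systems combine rate with content and network signals.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, max_posts_per_hour=60):
    """Flag an account whose sustained posting rate exceeds a chosen
    human-plausible ceiling. One crude heuristic among many; the
    60 posts/hour threshold here is an illustrative assumption."""
    if len(timestamps) < 2:
        return False
    span = max(timestamps) - min(timestamps)
    hours = max(span.total_seconds() / 3600, 1 / 3600)
    return len(timestamps) / hours > max_posts_per_hour

start = datetime(2024, 1, 1)
burst = [start + timedelta(seconds=5 * i) for i in range(200)]  # one post every 5 s
print(looks_automated(burst))  # True: roughly 720 posts per hour
```

A threshold-based check like this is easy to evade by throttling, which is why platforms layer it with coordination analysis across many accounts.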
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it might attempt to provoke strong reactions or to mould public opinion. Netizens should question the credibility of information, verify its sources, and develop the cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious accounts, misinformation superspreader accounts, and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, or fake. Upon detection, platform operators must block malicious or superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to expose users to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content, and must implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts, and the government to combat misinformation warfare on the Internet.
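The "recognise repetitive posting" recommendation above can start with something as simple as fingerprinting lightly normalised post text, so copy-paste reposts of the same claim collapse to one identifier. This is a minimal sketch under our own assumptions (lowercasing and whitespace collapsing only); real platforms use fuzzier near-duplicate techniques such as shingling or embedding similarity.

```python
import hashlib

def content_fingerprint(text: str) -> str:
    """Hash a lightly normalised version of a post so verbatim
    reposts of the same claim collapse to one fingerprint."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

posts = [
    "Solar panels are being installed on ALL railway tracks!",
    "solar panels are   being installed on all railway tracks!",
    "Trains now run entirely on track-mounted panels.",
]
groups = {}
for p in posts:
    groups.setdefault(content_fingerprint(p), []).append(p)
repeats = [g for g in groups.values() if len(g) > 1]
print(len(repeats))  # 1: the first two posts are the same claim
```

Accounts that repeatedly surface in such duplicate groups across flagged content are natural candidates for the superspreader review queues described above.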
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html