Some developers foul open source software

One of the most amazing things about open source isn’t that it produces great software. It’s that so many developers put their egos aside to create great programs with the help of others. Now, however, a handful of programmers are putting their own concerns ahead of everyone else’s, potentially devastating open-source software for all of us.

For example, Brandon Nozaki Miller, known on GitHub as RIAEvangelist and the maintainer of the popular npm package node-ipc, wrote and released an open-source npm package called peacenotwar. All it did was print a message of peace to users’ desktops. So far, so harmless.

Miller then inserted malicious code into the package that overwrote users’ file systems if their computers had an IP address in Russia or Belarus. He then added it as a dependency to his popular node-ipc program, and instant chaos: many servers and PCs crashed while updating to the newest code and had their disks wiped.
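Part of the reason the damage spread so quickly is that many projects declare dependencies with loose semantic-versioning ranges, so a routine npm install silently pulls in whatever the latest matching release happens to be. As a sketch (the version number here is illustrative, not one of the actual affected releases), pinning an exact version in package.json, together with a committed lockfile, keeps a newly published release from being picked up automatically:

```json
{
  "dependencies": {
    "node-ipc": "9.2.1"
  }
}
```

A caret range such as "^9.2.1", by contrast, would accept any later compatible release, including a sabotaged one, on the next install.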

Miller’s defense, “It’s all public, documented, licensed, and open source,” doesn’t hold up.

Liran Tal, the Snyk researcher who discovered the problem, said: “Even though the deliberate and dangerous act [is] perceived by some as a legitimate act of protest, how does this affect the maintainer’s future reputation and interest in the developer community? Would this maintainer be trusted again not to follow through on future acts in such actions or even more aggressive actions for any projects they are involved in?”

Miller is not a random crank. He has produced a lot of good code, such as node-ipc and Node HTTP Server. But can you trust any of his code not to be malicious? While he describes the sabotage as “not malware, [but] protestware that is fully documented,” others disagree, with venom.

As one GitHub programmer wrote, “What’s going to happen with this is that Western corporate security teams that have absolutely nothing to do with Russia or politics are going to start seeing free software and open source as an avenue for supply chain attacks (which it totally is) and just start banning free and open source software – all free and open source software – within their companies.”

As another GitHub developer with the handle nm17 wrote, “The trust factor of open source, which used to be based on the goodwill of developers, is now all but gone, and now more and more people are realizing that one day their library/application may possibly be exploited to do/say whatever a random developer on the internet thought ‘was the right thing to do.’”

Both make valid points. When you can’t use the source code unless you agree with the political position of its creator, how can you use it with confidence?

Miller’s heart may be in the right place (Slava Ukraini!), but is open-source software infected with a malicious payload the right way to protest Russia’s invasion of Ukraine? No, it is not.

The open-source method only works because we trust each other. When that trust is broken, for whatever cause, the fundamental framework of open source is broken. As Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, said when University of Minnesota students deliberately tried to insert bad code into the Linux kernel for an experiment in 2021, “What they are doing is intentionally malicious behavior and is not acceptable and totally unethical.”

People have long argued that open source should also include ethical provisions. For example, the Exception General Public License (eGPL) of 2009, a revision of GPLv2, attempted to prohibit “exceptions,” such as military users and vendors, from using its code. It failed. Other licenses such as the JSON license with its kindly naive “software should be used for good, not evil” clause still exist, but no one enforces it.

More recently, activist and software developer Coraline Ada Ehmke introduced an open-source license that compels its users to act morally. Specifically, her Hippocratic License added to the MIT Open Source License a clause stating:

“The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of disadvantaged individuals or groups in violation of the United Nations Universal Declaration of Human Rights.”

Sounds good, but it’s not open source. You see, open source itself is an ethical position. Its ethos is contained in the four essential freedoms of the Free Software Foundation (FSF). This is the basis of all open-source licenses and their basic philosophy. As open-source legal expert and Columbia law professor Eben Moglen said at the time, ethical licenses cannot be free software or open-source licenses:

“Freedom zero, the right to run the program for any purpose, comes first in the four freedoms because if users don’t have that right with respect to the computer programs they run, they ultimately have no rights in those programs. Efforts to give permission only for good uses, or to prohibit bad ones in the eyes of the licensor, violate the requirement to protect freedom zero.”

In other words, if anyone can’t use your code for any purpose they choose, your code isn’t truly open source.

Another, more pragmatic, argument against banning a group from using open-source software is that blocking by something like an IP address is a very broad brush. Florian Roth, head of research at the security company Nextron Systems, considered “disabl[ing] my free tools on systems with certain language and timezone settings,” but finally decided not to. Why? Because, by doing this, “we would also deactivate the tools on the systems of critics and freethinkers who condemn the actions of their governments.”

Unfortunately, it’s not just people trying to use open source for what they see as a higher ethical goal who cause problems for open-source software.

Earlier this year, JavaScript developer Marak Squires deliberately sabotaged his obscure but vitally important open-source JavaScript libraries colors.js and faker.js. The result? Tens of thousands of JavaScript programs blew up.

Why? It’s not entirely clear yet, but in a since-deleted GitHub post, Squires wrote, “Respectfully, I am no longer going to support Fortune 500s (and other smaller sized companies) with my free work. There isn’t much else to say. Take this as an opportunity to send me a six-figure yearly contract or fork the project and have someone else work on it.” As you can imagine, this blackmail attempt to get a paycheck didn’t work out well for him.

And then there are people who deliberately put malware in their open-source code for fun and profit. For example, the DevOps security firm JFrog discovered 17 new malicious JavaScript packages in the npm repository that deliberately attack and steal a user’s Discord tokens. These tokens can then be used to take over accounts on the communication platform Discord.

In addition to creating new malicious open-source programs that appear innocent and useful, other attackers take over old, abandoned software and rewrite it to include crypto-theft backdoors. One such program was event-stream. Malicious code was inserted into it to steal Bitcoin wallets and transfer their balances to a server in Kuala Lumpur. There have been several similar episodes over the years.

With each of these moves, trust in open-source software erodes. And since open source is absolutely vital for the modern world, this is a dangerous trend.

What can we do about it? Well, for one thing, we should consider very carefully when, if ever, we should block the use of open-source code.

More concretely, we need to start adopting the Linux Foundation’s Software Package Data Exchange (SPDX) and software bills of materials (SBOMs). Together, they tell us exactly what code we’re using in our programs and where it comes from. Then we’ll be much better able to make informed decisions.
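An SBOM doesn’t have to be elaborate to be useful. A minimal SPDX tag-value document recording a single npm dependency might look something like the sketch below (all names, versions, dates, and URLs here are placeholders for illustration):

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdx/example-app-1.0
Creator: Tool: example-sbom-generator
Created: 2022-03-25T00:00:00Z

PackageName: node-ipc
SPDXID: SPDXRef-Package-node-ipc
PackageVersion: 9.2.1
PackageDownloadLocation: https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.1.tgz
PackageLicenseConcluded: MIT
```

With a record like this for every dependency, a security team can answer “do we ship this package, and which version?” in seconds instead of auditing builds by hand.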

Today, all too often, people use open-source code without knowing exactly what they’re running and without checking it for problems. They assume everything is fine with it. That was never a smart assumption. Today, it’s downright foolish.

Even with all these recent incidents, open source is still better and safer than the proprietary, black-box software alternatives. But we need to check and verify the code instead of blindly trusting it. It’s the only smart way forward.
