This reminds me of the time I unlocked the premium version of a trial fractal making program by bugging one of the shared libraries it came with.
The shared library in question was libcrypto, which is open source software. It wasn't hard to download the source for the correct version and bug one of the RSA functions to always return true, causing the program to think any correctly formatted license key is valid.
If it weren't for the vulnerable dependency I don't know if I could've made a key generator; they used public/private key infrastructure, so to make a real keygen I'd have needed their private key.
Increasingly I wonder if we should go back to static linking for our software systems. Dependency management is obviously great, but it has been conflated with the practice of repeatedly downloading dependencies over a network, which is increasingly obviously insane.
A sane alternative to me (leaving aside the vagaries of git) would be using a dependency manager, but checking in the resolved libraries and delegating the auditing of said libraries to a specific person on the team. No remote resolution of libraries during deploys, when a new dev starts working on the team, etc., but you still get the benefits of version coordination across libraries.
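A rough sketch of that workflow, assuming npm (composer, bundler, etc. look much the same; the repo URL and commit message are just placeholders):

    # done once, by whoever audits dependencies
    npm install                          # resolve package.json into node_modules/
    git add package.json node_modules/
    git commit -m "vendor audited dependencies"

    # everyone else, including CI and deploys: no network resolution at all,
    # node_modules/ simply arrives with the checkout
    git clone <repo-url> && cd <repo> && npm test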
Yes, I know you can run your own nexus server, git is horrible w/ binaries, etc. I'm speaking conceptually here.
"static linking", as in compiling libraries into the final executable, wouldn't help here. If anything it makes the problem worse since it's harder to find all instances of a library when applying updates.
But maybe you mean "bundling the dependencies", or at least referring to exact versions by cryptographically secure hashes, including all transitive dependencies. I've seen lots of "package.json" files referring to ">=2.2", which is horrible from a reproducibility standpoint. I like the way composer writes out "composer.lock" files which specify exact versions. Thus, even if you don't want to check the "vendor/" subdirectory into your VCS, you should still be able to reproduce your dependency tree exactly. Of course, checking in your "vendor/" dir is even better, since your build no longer depends on an internet connection and an available remote server.
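To make the contrast concrete (package name and versions are hypothetical):

    composer.json  -  what you ask for (a range):
        "require": { "vendor/somelib": ">=2.2" }

    composer.lock  -  what was actually resolved (an exact, reproducible version):
        { "name": "vendor/somelib", "version": "2.4.1", ... }

composer install reads the lock file when it's present, so everyone gets 2.4.1 until someone deliberately runs composer update.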
Or maybe all those scripting languages and their associated package managers (npm, gem, composer, etc.) should take a page from the dpkg/rpm/C-style "lib*.so" world. Settle for stable APIs and stick with a release for the lifetime of an OS edition.
Re: your issues with package.json, npm has long had the capability to produce 'shrinkwrap' files analogous to composer.lock/Gemfile.lock etc. with the 'npm shrinkwrap' command. That doesn't excuse the use of '>=' version specifications (I frankly don't see any use case for it), but all these package managers are explicitly designed to allow flexibility in specifying dependency versions - in theory, if your dependencies follow semver, you should always be safe with a patch-level version bump. (Of course, in actual practice...)
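Roughly:

    npm install          # resolve the (possibly loose) ranges in package.json
    npm shrinkwrap       # write npm-shrinkwrap.json pinning the exact installed tree
    # subsequent `npm install` runs, on any machine, honour npm-shrinkwrap.json
    # instead of re-resolving the ranges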
The use-case for '>=' etc. is when writing an open-source library which depends on other libraries. Even with npm's ability to have multiple versions of each library at once, you don't want to have to ship a new version every time one of your dependencies does. Even if you don't mind the hassle, you'd still be making it harder for your users to promptly install security fixes.
Using it for an open-source application can make sense for similar reasons, but for an internal application where cutting a release and deploying upstream security fixes are the same operation it's pretty pointless.
> Even with npm's ability to have multiple versions of each library at once, you don't want to have to ship a new version every time one of your dependencies does.
Nothing about using == forces you to ship a new version every time one of your dependencies does. When your dependencies ship a version, this changes nothing for you if you're using ==. In fact, >= is much more likely to force you to ship a new version when one of your dependencies does, because a breaking change in your dependencies can break your code. And when a breaking change breaks your code, it does so without warning. >= is a cause of your problem, not a solution.
> Even if you don't mind the hassle, you'd still be making it harder for your users to promptly install security fixes.
If you're worried about security, then you shouldn't be shipping arbitrary dependency updates to your users, you should be evaluating the security of each release before pushing them out. And honestly, if you're writing security-critical code, you shouldn't be using a language that dumps everything into a global namespace. "JavaScript security" is an oxymoron.
This may be reasonable or not, depending on what you are developing. But there's no one solution for everything.
Is it an app just for you? Exact versions are great.
Is it an app for someone else? You should probably allow security fix updates, so at least x.y.*.
Is it a library? Anything close to a specific version is a terrible idea. Just define a minimum supported version and maybe an upper limit of (x+1).0.0.
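In npm/composer-style range syntax the three cases look roughly like this (package name is hypothetical):

    app just for you:       "somelib": "1.2.3"             (exact pin)
    app for someone else:   "somelib": "~1.2.3"            (patch-level updates only, i.e. x.y.*)
    library:                "somelib": ">=1.2.3 <2.0.0"    (minimum version, capped below the next major; npm spells this ^1.2.3)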
I think the basic problem is that the ease of adding new dependencies means that dependencies are more frequently added without consideration. Delegating auditing of dependencies to a specific individual on the team can solve that problem if that person is very conservative about what they allow, but that seems to no longer be the norm.
It's not just security that this hurts: I've worked on a few projects now where one or two of the dependencies introduced early on later became the source of the majority of the work on the project. Often dependencies added to solve one problem are unable to solve other problems and are designed in such a way that they are tightly coupled to everything that uses them. The result: either you put in the time to rip out the dependency, or you pile workarounds on top of workarounds to the point that it would have been easier to write the dependency yourself.
There is, of course, a balance to be had here: a solid library can save you an immense amount of work. But only a handful of the libraries I have used are actually solid, and I'm increasingly unwilling to pull in dependencies I haven't used before.
As a rule of thumb: I never want to use a library that's not versioned 1.0 or higher. A 1.0 version doesn't mean that it's ready for use in production systems, but a 0.x version number has always meant it's not ready, in my experience.
Static linking is pretty common in the JVM world, where you want a single war/jar/ear to deploy, so that you can figure all those things out at compile time rather than deploy time.
I think languages with a compile step naturally tend towards static linking for SaaS systems, since single compiled blobs are much easier to manage and deploy.
But Docker is also pretty much static linking of the entire runtime environment, and that seems to be getting some traction where people have messier deployments.
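A minimal sketch of that, with hypothetical image tag and paths; the point is that the whole environment gets pinned into one deployable artifact:

    FROM debian:8.2              # pin a specific base image tag (a sha256 digest is stricter still)
    COPY vendor/ /app/vendor/    # ship the audited, checked-in dependencies inside the image
    COPY run.sh /app/run.sh
    CMD ["/app/run.sh"]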
I see dependency management as orthogonal to static vs dynamic linking, personally. Linking is about producing the final artifact, dependency management is about how you develop the software.
Static linking can _also_ be dangerous: with dynamic linking, update your openssl, all your stuff is fixed. With static, you have to recompile every single thing that uses openssl...
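Concretely, assuming gcc and a hypothetical app.c that calls into libcrypto:

    # dynamic: the binary loads whatever libcrypto.so is installed at run time,
    # so upgrading the openssl package fixes this program too
    gcc app.c -o app -lcrypto

    # static: libcrypto's code is copied into the binary at build time;
    # upgrading the openssl package later does nothing for this binary
    # (may also need -ldl/-lpthread depending on how libcrypto was built)
    gcc app.c -o app -Wl,-Bstatic -lcrypto -Wl,-Bdynamic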