> without relying on any third-party servers being available and without pulling in dependencies that might have changed.
There are two different issues here:
1. Not pulling in changed dependencies. This is what "lock files" are for: to limit builds to known versions of every dependency. npm was terrible about this for a long time. Most other language package managers are better.
2. Not relying on third party servers to be available. Personally, I've mostly worked with Ruby's bundler and Rust's cargo, and in over 10 years, I've lost maybe two days of work because of package server outages. That's less than I've lost due to S3 outages, less than I've lost due to broken backup systems, and less than I've lost due to complex RAID failures. For the clients and employers in question, this was acceptable downtime.
In the rare cases where one day of noticeable build downtime every 5 years is unacceptable, it's usually possible to just "vendor" the dependencies, often by running a one-line command.
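For the two package managers mentioned above, the one-line vendoring commands look roughly like this (a sketch; check your tool's docs for version-specific flags):

```shell
# Ruby/bundler: copy every gem pinned in Gemfile.lock into ./vendor/cache,
# so later installs can run without reaching rubygems.org
bundle cache

# Rust/cargo: copy every crate from Cargo.lock into ./vendor, and print
# the .cargo/config.toml stanza needed to make builds read from it
cargo vendor
```

After that, the vendored directory can be checked into version control or archived alongside a release, and builds no longer depend on the package servers being up.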
For many small to midsize businesses (and many growing startups), a small risk of brief outages is an acceptable engineering trade-off.
That's putting it lightly. To me, "terrible" would mean they just didn't support lock files at all; "batshit insane" means they supported them but ignored them every time you ran npm install.
"Why would you expect something called package lock to lock the packages? Package lock is analogous to how, when you put any key into a door lock, the lock reshapes itself to match whatever key was put in, then opens the door. Now if you'll excuse me, I'm late for a tea party with a rabbit."
At work we also cache all 3rd party dependencies locally. Not because the 3rd party servers might be down, but to be sure that one/two/... years from now we can still recreate the same software we delivered at that time to a customer. Our build machines typically don't even have internet access. If a dependency was not available offline (due to developer error or whatever) we would know quickly.
In certain industries this is even mandatory if you want to be seriously considered as a supplier. If a build tool does not support reproducible builds in this sense (both fixing dependency versions and getting them from a cache somehow), or makes it difficult, then it is considered a hobby toy that has no place in the workplace.
Even for small businesses I would advocate taking this seriously from the beginning. It's not that hard, and it will save you headaches later on when reproducible builds suddenly become important.
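As a concrete sketch of the "cache locally" setup: most package managers can be pointed at an internal mirror or a vendored directory with a one-file config. The hostname below is made up; the cargo stanza is the one `cargo vendor` prints for you:

```shell
# npm: route all registry traffic through an internal mirror
# (npm-mirror.internal.example is a hypothetical hostname)
npm config set registry https://npm-mirror.internal.example/

# cargo: make builds read crates from the vendored ./vendor directory
# instead of crates.io
cat >> .cargo/config.toml <<'EOF'
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
EOF
```

With a setup like this, a build machine with no internet access either succeeds entirely from the cache or fails fast, which is exactly the "we would know quickly" property described above.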
> At work we also cache all 3rd party dependencies locally [...]
Can you say a bit about your platform and tooling? Are you working in a single language or a polyglot world? Is the caching at the network level, or are your build tools aware of your mirrors?
I work in a space (biotech/pharma/...) that shares these concerns. I've solved the Perl-specific version with Pinto (https://metacpan.org/pod/Pinto) and the more generic version with Spack (https://spack.io) [which is neat also because it supports installing multiple versions of applications, doesn't require root, <other things>].
>I've lost maybe two days of work because of package server outages [...] For the clients and employers in question, this was acceptable downtime.
Even if the downtime is fine, I don't find it goes over too well if framed as "we rely on a 3rd party service, that we have no contract with nor any guarantees of reliability or product longevity."
npm has only gotten worse about lockfiles over time, not better. I sometimes wonder out loud if they know what the word 'lock' means.
If yarn fixed a particular bug that blocks our workflow, I'd be burning political capital at work to get us off of npm as fast as humanly possible.
As to proxies, we have something misconfigured with ours, such that occasionally it gets latest of half of the React or Babel ecosystem and latest-1 of the other half, resulting in dependencies that can't be resolved for a few hours when they increment a minor version somewhere.