Well, traditionally there was no Python/pip or JS/npm in Linux development, and for C/C++ development the distro package manager approach worked surprisingly well for a long time.
However, there were version problems: some Linux distributions shipped only stable packages and therefore lacked the latest updates, and some had problems with multiple versions of the same library. This gave rise to the language-specific package managers. They solved one problem but created a ton of new ones.
Sometimes I wish we could just go back to system package managers, because at times, language-specific package managers do not even solve the version problem, which is their raison d'être.
Nix devShells work quite well for Python development (I don't know about JS).
Nixpkgs is also quite up to date.
I haven't looked back since adopting Nix for my dev environments.
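For anyone curious what that looks like in practice, here is a minimal sketch of a Python devShell (just an illustration; the package names requests and pytest are placeholders for whatever your project actually needs):

    # shell.nix -- minimal Python dev environment sketch
    { pkgs ? import <nixpkgs> { } }:

    pkgs.mkShell {
      packages = [
        (pkgs.python3.withPackages (ps: with ps; [
          requests   # example runtime dependency
          pytest     # example dev tool
        ]))
      ];
    }

Running nix-shell in that directory (or nix develop with the flake equivalent) drops you into a shell with exactly that interpreter and those libraries on the path.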
> Also: isn't the Arch wiki the new Gentoo wiki? Because that was the wiki early 2000s and, again, I've never used Gentoo!
Exactly my thought! 20 years ago, I used Gentoo, and their wiki was the best. Then the Arch wiki appeared and kept getting better. At some point I got tired of compiling for hours and switched one machine at a time to Arch, and today the Arch wiki is the number one.
Interestingly enough, the ArchWiki itself seems to be slowly getting augmented by the NixOS wiki. Due to the way NixOS works, new packages constantly hit weird edge cases, which then require deep dives into the package to write a workaround, and that info ends up either in the wiki or in the .nix package comments.
Arch and its wiki were already pretty good at that point, but the real turning point was when the Gentoo wiki got hacked. After that, it never really recovered, and the Arch wiki must have absorbed a lot of that expertise, because that's when it really took off.
As I recall anyway. Can't believe it's been so long.
What bothers me most about YT is that it constantly plays videos with an AI-generated voice.
In general, I like AI features and use AI daily to build prototypes, but this feature looks so stupid to me and feels so wrong. I have no problem with it being an option, but by default I just want to watch the videos with their original soundtracks. But instead YT decides that I should watch the videos with some mediocre AI translation...
Maybe I could disable it using an account, but I still prefer not to have one when it's not necessary.
I don't like this particular phrase, as it suggests that LLMs are just replicating code. As far as I understand, LLMs can also create new code and algorithms (some better than others). When we think of them as copy-paste machines, we judge them unfairly and underestimate their capabilities.
Especially in the age of AI tools, I have also thought about this a few times. The current idea I have is something like a parking meter: every expensive transaction (like calling a model) would subtract from a money pool, and every visitor could see how much is still left in the pool. In addition, a list of the top 5 donors with their amounts might improve the group dynamic (as on pay-what-you-want pages like humblebundle.com).
It would be more about covering the cost than about making someone rich, but I think that is what most of the people who build stuff care about. Sadly, I don't know a service yet that offers this model.
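To make the idea a bit more concrete, here is a rough sketch of the metering logic (purely hypothetical, all names made up for illustration):

    from collections import Counter

    class ParkingMeter:
        def __init__(self, balance_cents=0):
            self.balance_cents = balance_cents   # money left in the shared pool
            self.donations = Counter()           # total donated per visitor

        def donate(self, donor, amount_cents):
            self.donations[donor] += amount_cents
            self.balance_cents += amount_cents

        def charge(self, cost_cents):
            # An expensive action (e.g. a model call) only runs while the pool covers it.
            if self.balance_cents < cost_cents:
                return False
            self.balance_cents -= cost_cents
            return True

        def top_donors(self, n=5):
            # The public leaderboard shown next to the meter.
            return self.donations.most_common(n)

The site would show the remaining balance and the top-donor list to every visitor, and simply refuse the expensive feature once charge() returns False.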
Due to human psychology, this won't work once the meter is at zero. New visitors will say: "no one subsidized my experience (indeed, I don't even know what $thing does), but <creator> wants me to subsidize $thing for others".
The whole "subsidize for other visitors" concept is weaker than "pay <creator>".
One of my favorite recent KDE features: press Meta+T to design a custom window layout, then hold Shift while dragging a window to place it in a slot of that layout.
I mean, if you let the LLM build a Tetris bot, that bot would be 1000x better than what the LLMs themselves are doing. So yes, it is fun to win against an AI, but to be fair, against such processing power you should not be able to win. It is only possible because LLMs are not built for such tasks.
While Qwen2.5 was pre-trained on 18 trillion tokens, Qwen3 uses nearly twice that amount, with approximately 36 trillion tokens covering 119 languages and dialects.
Thanks for the info, but I don't think it answers the question. I mean, you could train a 20-node network on 36 trillion tokens. It wouldn't make much sense, but you could. So I was asking more about the number of nodes/parameters or the file size in GB.
This is a Max-series model with unreleased weights, so it's probably larger than the largest released one. Also, when referring to models, use huggingface or modelscope (wherever the model is published); ollama is a really poor source of model info. They have some bad naming (like confusing people about the deepseek R1 models), renaming, and other issues with model names, and they default to q4 quants, which is a good sweet spot but really degrades quality compared to the raw weights.