I see it as worse because you could have put in just as much effort - less, even - and gotten a better result by just sticking it in a machine translator and pasting that.
There's clearly a gap in how or for what LLM-enthusiasts and I would use LLMs. When I've tried it, I've found it just as frustrating as you describe, and it takes away the elements of programming that make it tolerable for me to do. I don't even think I have especially high standards - I can be pretty lazy for anything outside of work.
I don't view LLMs as a substitute for thinking; I view them as an aid to research and study, and as a translator from pseudocode to syntax. That is, instead of trawling through all the documentation myself and double-checking everything manually, an LLM can pop up a solution of some quality, and if that agrees with how my mental model assumes it should work, I'll accept it or improve on it. And if I know what I want to do but don't know the exact syntax - as has happened recently with Swift while I explore macOS development - an LLM can translate my implementation ideas into something that compiles.
More to the point of the article, though, LLM-enthusiasts do seem to view it as a substitute for thinking. They're not augmenting their application of knowledge with shortcuts and fast-paths; they're entirely trusting the LLM to engineer things on its own. LLMs are great at creating the impression that they are suitable for this; after all, they are trained on tons of perfectly reasonable engineering data, and exhibit all the same signals a naïve user would use to judge engineering quality... just without the quality.
This is the way I would consider using them; I just haven't really been able to figure out what I would need to get a reasonably fast and useful local setup without spending a ton of money.
And because of that, we check in the generated code, not the high-level abstraction. So to understand your program, you have to read the output, not the input.
Totally possible, and we can already do it! Simply set the temperature to 0 and reuse the same seed. But it's just not what people really want, and providers are reluctant because deterministic outputs can cost up to 5x more to generate.
It's also not 100% deterministic in practice, because cloud providers don't always run your request on the same hardware, under the same conditions required to reproduce the same output. So in practice, not so good, but in theory, if you need it and can afford it, you can.
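For what it's worth, here's a minimal sketch of what that looks like with the OpenAI Python client - model name, prompt, and seed value are placeholders, and the seed parameter is only best-effort:

    # Sketch: requesting (mostly) reproducible output from a hosted LLM.
    # Assumes the official `openai` Python client; model and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # greedy decoding: no sampling randomness
            seed=42,        # best-effort reproducibility across identical requests
        )
        # system_fingerprint identifies the backend configuration; if it changes
        # between calls, outputs may differ even with temperature=0 and a fixed seed.
        print(response.system_fingerprint)
        return response.choices[0].message.content

    print(ask("Summarise the trade-offs of deterministic decoding in one sentence."))

Even then, you only get identical output if the provider happens to serve both requests from the same backend configuration, which is exactly the hardware/conditions caveat above.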
I definitely appreciate the mindset, but I think keeping content-apps off or restricted, and being very aggressive against notifications, gets most of the way there. I'd be kind of lost without the ability to take a quick note or todo and have it auto-synced across my other devices.
I agree that too many sites now will narrow the text area and pad too much. The issue here is a fixed pixel size that will look quite different depending on the specific monitor setup you have.
And honestly, if this type of thing bothers you as much as it does me, unfortunately it means adding a bunch of custom user stylesheets (e.g. via Stylus) everywhere...
>I can't maintain a large and complex project supported by lots of maintainers on my own, as a fork
Do you need to "maintain" a complex project? Why can't you just add the patches you want on your fork and update it as far as it suits you? Just as the upstream doesn't have to review or accept PRs, neither do you. Users can still see the network of forks, and in my experience there are few that are actively updated.
How can I tell why every fork was created? How can I tell it'll fix my issue?
The idea of maintaining a fork for the sake of a patch affecting only one version of the original software is silly. Not only that: others mentioned the "network" of forks, but how do users tell what I patched - diff every fork one by one? Perhaps there's a feature I don't know about, since PRs just work for me.