Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.
The user of the LLM then provides a new input, which may or may not closely match those smudged-together training inputs, and the model produces an output that follows the same general pattern as the outputs you'd expect to find in the training dataset.
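Roughly, a toy sketch of that mental model (purely illustrative, nothing like a real transformer; every name here is made up):

    # Toy illustration of the "lossy compression + pattern bins" view:
    # bucket training pairs coarsely, then answer a new query from the
    # nearest bucket. Not how LLMs actually work internally.
    from collections import defaultdict

    def bucket(text: str, size: int = 4) -> frozenset:
        # Lossy "compression": reduce an input to a coarse bag of word prefixes.
        return frozenset(w[:size].lower() for w in text.split())

    training_pairs = [
        ("how do I reverse a list in python", "use list.reverse() or reversed()"),
        ("how do I sort a list in python", "use list.sort() or sorted()"),
    ]

    memory = defaultdict(list)
    for prompt, answer in training_pairs:
        memory[bucket(prompt)].append(answer)  # similar inputs get smudged together

    def respond(query: str) -> str:
        # A new input lands in (or near) an existing bin and pulls out an
        # output in the same general pattern as the training outputs.
        q = bucket(query)
        best = max(memory, key=lambda k: len(k & q))
        return memory[best][0]

    print(respond("how would I reverse my python list?"))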
Ignoring your last line, which is poorly defined, this view contradicts observable reality. It can't explain an LLM's ability to diagnose bugs in code it hasn't seen before, exhibit a functional understanding of that code, explain what it's seeing and doing to a human user, etc.
Functionally, on many suitably scoped tasks in areas like coding and mathematics, LLMs are already superintelligent relative to most humans - which may be part of why you’re having difficulty recognizing that.
I get your sentiment, but a lot of people on this forum forget that many of us are just working for the paycheck - I don't owe my company anything.
Do I know the code base like the back of my hand? Nope. Can I confidently speak to how certain functions work? Not a chance.
Can I deploy what the business wants? Yep. Can I throw error logs into LLMs and work out the cause of issues? Mostly.
I get that some of you may want to go above and beyond for your company and truly create something beautiful, but guess what - that codebase is theirs. They aren't your family. Get paid and move on.
A lot of things are "so much faster" than doing the right thing. "Vibe traffic safety laws" are much faster than laws that actually improve traffic safety: http://propublica.org/article/trump-artificial-intelligence-... . You, your team, and your colleagues are producing shiny trash at unbelievable velocity. Is that valuable?
I mean, people who use LLMs to crank out code are cranking it out by the millions of lines. Even if you have never seen it used toward a net positive result, you have to admit there is a LOT of it.
Got anything to back up this wild statement?