Humans are the beneficiaries of millions of years of evolution, born with innate pattern-matching abilities that need no "training"; essentially our pre-training. Of course, it is superior to that of the current generation of LLMs, but is it fundamentally different? Honestly, I don't know one way or the other, but judging by how impressive LLMs are despite all their limitations and their lack of any evolutionary history, I wouldn't bet against them.
The other problem with LLMs today is that they don't persist anything they learn from everyday inference and interaction with users; at least not in real time. That makes them harder to instruct in a useful way.
But it seems inevitable that both their pre-training and their ability to seamlessly continue learning afterward will improve over the coming years.