
I don’t see that as a contradiction, though I do appreciate how some might.

You can train almost anything to be really good at a specialised field. But that doesn’t mean it’s a good generalist.

For example:

You can train any child to memorise the 10 times table. But that doesn’t mean they can perform long division.

Being an Olympic-class cyclist doesn’t mean you’re any good at F1 driving, swimming, or fencing.

Being highly specialised usually means you’re not as good at general things. And that’s as true for humans as it is for computers.



Though in your examples, cyclists can learn to drive, since humans have broadly similar general abilities.

I'll grant that current GPT stuff has its limitations - it can't come fix your plumbing, say, and pre-trained transformers aren't good at learning new things after their pre-training - but I'm not convinced they're so far from human capabilities that those limitations can't be fixed.


You cannot use an LLM to solve mathematical equations.

That’s not a training issue; that’s a limitation of a technology that is, at its core, a text prediction engine.
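As an aside, here is a toy sketch (mine, not from this thread) of what "text prediction engine" means in the most literal sense: a bigram model that picks the most frequent continuation seen in training, without doing any arithmetic. Real LLMs are vastly more sophisticated, but the basic objective is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "text prediction engine" (illustrative only, not how any real
# LLM is built): predict the next word purely from co-occurrence counts.
corpus = ("two plus two equals four . "
          "two plus two equals four . "
          "one plus three equals four .").split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# The model says "four" after "equals" because that continuation was
# frequent in the training text, not because it computed anything.
print(predict_next("equals"))  # four
```

The point of contention in this thread is whether that statistical objective inherently caps mathematical ability, or whether enough scale and training makes the distinction moot.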


Yet if you look at DeepMind getting gold at the IMO, it seems to handle equations just fine.

Questions and answers: https://storage.googleapis.com/deepmind-media/gemini/IMO_202...



