
But isn't this a great thing? I mean, this piece has been missing (no, I am not kidding). Computers have always struggled with situations that weren't 100% predefined.

Now, we have technology capable of handling cases that were not predefined. Yes, it makes mistakes, as do humans, but the range of problems we can solve with technology has been tremendously broadened.

The problem is how we apply AI. Currently, we throw LLMs at everything they might conceivably handle, without first checking whether they actually have the capabilities for the task. That is not the LLM's fault but a human one. Consequently, we see poor results, and then we blame the AI for failing at a problem it was never designed to solve.

Sounds stupid, doesn't it?
