
That is easily fixed: ask it to summarize its learnings, store them somewhere, and make them searchable through vector indexes. An LLM is part of a bigger system that needs not just a model but also context and long-term memory, just as a human needs to write things down.

LLMs are actually pretty good at creating knowledge: given a trial-and-error feedback loop, they can figure things out, then summarize the learnings and store them in long-term memory (markdown files, RAG, etc.).
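A toy sketch of what that long-term memory could look like: summaries go into a store and are retrieved by similarity search. Everything here is hypothetical and simplified; a real system would use learned embeddings and a vector database rather than the bag-of-words cosine similarity used below.

```python
# Toy sketch of LLM "long-term memory": store text summaries and
# retrieve them by bag-of-words cosine similarity. Illustrative only;
# a real setup would use learned embeddings and a vector database.
import math
from collections import Counter

class MemoryStore:
    def __init__(self):
        self.entries = []  # list of (summary text, bag-of-words Counter)

    def add(self, summary: str) -> None:
        self.entries.append((summary, Counter(summary.lower().split())))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = Counter(query.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# The LLM's summarized "learnings" get added as it works...
memory = MemoryStore()
memory.add("Build failed until we pinned numpy to 1.26; newer wheels break ABI.")
memory.add("The staging database requires SSL; set sslmode=require.")
# ...and retrieved later to be injected back into the context window.
print(memory.search("why did the numpy build fail?", k=1)[0])
```

The point isn't the retrieval math; it's the loop: the model writes things down, and the system feeds the relevant notes back in later.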



You’re making the assumption that there is one, and only one, objective summarization. That is entirely different from “writing things down.”


Why do you assume I assume that?


My bad if I misunderstood. I assumed that based on your use of “it” and of approximation methods.


This runs into the limitation that nobody has RL'd the models to do this really well.



