
Yeah, I'd imagine the problem is not verifying the output against the retrieved documents. If the model simply hallucinates, it ignores the given context, and that's something another LLM can absolutely verify.
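
A minimal sketch of what that second-pass check could look like, assuming a hypothetical call_llm(prompt) helper that wraps whatever model API you're using; the prompt wording and the YES/NO protocol are illustrative, not any particular library's API:

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around whatever LLM API is in use."""
        raise NotImplementedError

    def is_grounded(answer: str, retrieved_docs: list[str]) -> bool:
        """Ask a second LLM whether the answer is supported by the
        retrieved context; an answer that ignored the context (a
        likely hallucination) should come back unsupported."""
        context = "\n\n".join(retrieved_docs)
        verdict = call_llm(
            "Context:\n" + context + "\n\n"
            "Answer:\n" + answer + "\n\n"
            "Is every claim in the answer supported by the context? "
            "Reply with exactly YES or NO."
        )
        return verdict.strip().upper().startswith("YES")

    # Usage: flag or regenerate the answer when the check fails.
    # if not is_grounded(answer, docs):
    #     ...  # retry, or surface a warning to the user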

