
Are there any successful examples of LLM text adventures? The last time this came up, someone here said it's hard to build robust puzzles and interactions because it's hard to control or predict what the LLM will do in a dialogue setting. E.g. the player can submit a reasonable but unintended solution to a puzzle, which breaks the game.
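
Not an answer to whether any shipped game does this well, but the mitigation I've usually seen discussed is to keep the puzzle logic deterministic and use the LLM only to map free-form input onto a fixed action vocabulary, so an unintended-but-plausible solution falls through to "that doesn't work" instead of being accepted. A minimal sketch of that idea (all names here are hypothetical; llm_complete stands in for whatever completion call you'd use):

    # Sketch: the LLM never decides whether a puzzle is solved; it only
    # classifies the player's text into one of a fixed set of actions,
    # and deterministic game code owns the puzzle state.

    ALLOWED_ACTIONS = {"take lantern", "unlock door with brass key", "go north", "look"}

    def classify_action(player_text: str, llm_complete) -> str:
        """Ask the model to pick the closest allowed action, or 'none'."""
        prompt = (
            "Map the player's input to exactly one of these actions, "
            "or reply 'none' if nothing matches:\n"
            + "\n".join(sorted(ALLOWED_ACTIONS))
            + f"\n\nPlayer: {player_text}\nAction:"
        )
        answer = llm_complete(prompt).strip().lower()
        return answer if answer in ALLOWED_ACTIONS else "none"

    class Game:
        def __init__(self):
            self.state = {"door_locked": True, "has_key": False}

        def apply(self, action: str) -> str:
            # Deterministic rules: a reasonable-but-unintended solution just
            # falls through to a refusal instead of breaking the game.
            if action == "unlock door with brass key" and self.state["has_key"]:
                self.state["door_locked"] = False
                return "The lock clicks open."
            if action == "none":
                return "Nothing happens."
            return "You can't do that here."

The trade-off is that this throws away a lot of what makes an LLM appealing in the first place (improvised solutions), so you're back to something close to a classic parser game with a fancier front end.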


