
The shape analogy doesn't really hold for modern language models. Each word gets its own context-dependent, high-dimensional point, so there is no single fixed embedding space that a simple transformation like a rotation could align across languages. A more accurate picture is that any concept expressible in language has its own high-dimensional representation, which can then be decoded into any other language.
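To make the "context-dependent point" idea concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library and `bert-base-uncased` as a stand-in contextual model (the choice of model and the `embed_word` helper are illustrative, not from the comment). The same surface word "bank" lands at different points in the space depending on its sentence, which is exactly why no single rotation of the whole space can work:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# bert-base-uncased is an assumption: any contextual encoder would do.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word`'s first occurrence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # (seq_len, hidden_dim) activations for this one sentence
        hidden = model(**inputs).last_hidden_state[0]
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0, 0]
    return hidden[position]

river = embed_word("She sat on the bank of the river.", "bank")
money = embed_word("She deposited cash at the bank.", "bank")
shore = embed_word("He fished from the bank of the stream.", "bank")

cos = torch.nn.functional.cosine_similarity
print(cos(river, money, dim=0))  # lower: different senses, distant points
print(cos(river, shore, dim=0))  # higher: similar contexts, nearby points
```

Static embeddings like word2vec, by contrast, assign one fixed point per word type, which is what made the classic rotation-alignment trick between languages possible in the first place.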

