
> Even with prompting to act like a college professor critiquing a grad student, eventually it devolves back to "helpful / sycophantic".

Not in my experience. My global prompt asks it to provide objective and neutral responses rather than agreeing, to use zero flattery, to communicate like an academic, and to include zero emotional content.

Works great. Doesn't "devolve" to anything else even after 20 exchanges. Continues to point out wherever it thinks I'm wrong, sloppy, or inconsistent. I use ChatGPT mainly, but also Gemini.
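For anyone who wants to try the same approach programmatically rather than through ChatGPT's custom-instructions setting, here is a minimal sketch using the OpenAI Python client. The system-prompt wording and model name are assumptions reconstructed from the description above, not the commenter's actual prompt.

```python
# Minimal sketch: a fixed "global" system prompt sent with every request.
# The wording and model name below are assumptions, not the original prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Provide objective, neutral responses rather than agreeing by default. "
    "Use zero flattery and zero emotional content. Communicate like an "
    "academic: point out where the user is wrong, sloppy, or inconsistent."
)

def ask(question: str) -> str:
    # Sending the same system message on every call is roughly what a
    # ChatGPT custom-instructions setting does in the web UI.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Critique this argument for me."))
```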



Would you be willing to share that prompt? Sure sounds useful!



