Hacker News | vufas's comments

Link discussing the more general statistical phenomenon in greater detail: https://en.wikipedia.org/wiki/Regression_toward_the_mean
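The effect can be illustrated with a small simulation (a toy setup I'm assuming here, not taken from the linked article): each observed score is a fixed "skill" plus random noise, so the top scorers on a first measurement are partly lucky, and their average falls back toward the population mean on a second measurement.

```python
import random

random.seed(0)

def score(skill):
    # An observed score is underlying skill plus random luck.
    return skill + random.gauss(0, 10)

# Population of underlying skills, mean 100.
skills = [random.gauss(100, 10) for _ in range(10000)]
first = [score(s) for s in skills]   # first measurement
second = [score(s) for s in skills]  # independent second measurement

# Select the top 10% on the first measurement.
cutoff = sorted(first)[int(0.9 * len(first))]
top = [i for i, f in enumerate(first) if f >= cutoff]

avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

# The selected group's second average regresses toward the mean,
# but stays above it: they really are more skilled on average.
print(avg_first > avg_second > 100)
```

The key point the simulation shows: nothing causal happens between the two measurements; the drop is purely a selection effect on the noise.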


I believe the OP is referring to the 'paperclip maximizer' argument. More info at http://www.nickbostrom.com/ethics/ai.html & http://wiki.lesswrong.com/wiki/Paperclip_maximizer.

In short, the argument isn't that the AI will become more anti-human as it evolves. Rather, the AI's utility function might not be aligned with human values from the outset, which could have negative consequences. It's hard to make an AI do what we actually want it to do.


Asimov: http://en.wikipedia.org/wiki/The_Martian_Way

Not exactly what you asked for, but pretty close. Deals with bringing water to Mars.

Edit: Added very brief description

