In short, the argument isn't that the AI will become more anti-human as it evolves. Rather, the AI's utility function may be misaligned with human values from the outset, which could have serious negative consequences: it is hard to make an AI do what we actually want it to do.