Robin Hanson is weird. He paints a picture of a grim future in which all the nice human values are eroded away, replaced by endless frontier replicators optimized, and optimizing, only for more replication. And then he simply accepts it, as if that were fine.

Will MacAskill seems to think AI risk is real; he just thinks alignment is easy. He has a specific proposal, which he seems keen on, involving building anthropomorphic AI and raising it like a human child.