I don’t spend much time thinking about different specific value alignment targets because I think we should first focus on how to achieve any of them. I couldn’t see exactly what the World Values Survey was from that link at a quick glance, but I’m not sure the details matter. It would probably produce a vastly better future than a value target like “solve hard problems” or “make me a lot of money” would; there are probably better future-proofed targets that would be even better; but steering away from the worst and toward the better is my primary goal right now, because I don’t think we have that in hand at all.