The super-alignment effort will fail.
Technological progress will continue to outpace philosophical progress, making it hard or impossible for humans to have the wisdom to handle new technologies correctly. I see AI development itself as an instance of this, for example, the e/acc crowd trying to advance AI without regard to safety because they think it will automatically align with their values (something about “free energy”). What if, e.g., value lock-in becomes possible in the future and many decide to lock in their current values (based on their religions and/or ideologies) to signal their faith/loyalty?
AIs will be optimized for persuasion, and humans won’t know how to defend against bad but persuasive philosophical arguments aimed at manipulating them.
You wrote: “but I’m not sure why you’d find this problem particularly pressing compared to other problems of evaluation, e.g. generating economic policies that look good to us but are actually bad”
Bad economic policies can probably be recovered from and are therefore not (high) x-risks.
My answers to many of your other questions are “I’m pretty uncertain, and that uncertainty leaves a lot of room for risk.” See also Some Thoughts on Metaphilosophy if you haven’t already read it, as it may help you better understand my perspective. It’s also possible that in the alternate, sane universe, a lot of philosophy professors have worked with AI researchers on the questions you raised here, adequately resolved the uncertainties in the direction of “no risk”, and AI development has continued based on that understanding; but I’m not seeing that happening here either.
Let me know if you want me to go into more detail on any of the questions.