I don’t have the same reaction to power/control/monitoring being per se very bad. It doesn’t seem comparable to me to pre-emptively nuking your enemy before even trying diplomacy.
Edit: To elaborate on why, part of it might be that I think the default of open competition is incredibly bad and ugly. (Themes being: Dawkins’ “Nature red in tooth and claw” passage about there being no purpose in nature and so much suffering, Moloch, bargaining failures getting worse and worse if you don’t somehow rein things in or dial down the maximizing.)
I also think there’s maybe a bit of a third option? Instead of having one central entity that controls everything, you could have a coalition of agents under the umbrella of peacefulness/cooperation and “not maximizing too hard,” and they together enforce some kind of monitoring and control, but it still has a value-pluralistic and somewhat democratic feel to it?
Something close to this is also my view. The big reason we’ve avoided that outcome so far is that we are in a regime where wealth grows faster than population; but we have good reasons to expect that, in the absence of coordination, we will fall back to subsistence living, because population will grow as fast as or faster than wealth.
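To make the Malthusian arithmetic here concrete, a minimal sketch (my own illustration; the specific growth rates and function name are assumptions, not anything from the thread): if total wealth grows at rate g and population at rate n, per-capita wealth grows at roughly g − n, so whenever n ≥ g it stagnates or decays back toward subsistence.

```python
# Minimal Malthusian sketch (illustrative, made-up growth rates):
# per-capita wealth w = W / P evolves as w <- w * (1 + g) / (1 + n),
# rising when wealth growth g exceeds population growth n and
# falling back toward subsistence when n >= g.

def per_capita_wealth(w0: float, g: float, n: float, years: int) -> float:
    """Project per-capita wealth w0 forward under constant growth rates."""
    w = w0
    for _ in range(years):
        w *= (1 + g) / (1 + n)
    return w

# "Dreamtime" regime: wealth outpaces population (e.g. g=2%, n=1%).
print(per_capita_wealth(1.0, 0.02, 0.01, 200))  # grows ~7x over 200 years

# Coordination-failure regime: population matches or outpaces wealth.
print(per_capita_wealth(1.0, 0.02, 0.03, 200))  # decays toward ~0.14
```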
More generally, one of my divergences with much of the “we will muddle through with AI for an indefinitely long period through our current system” view is that I think the 18th–21st century conditions are by and large dreamtime creations, which will collapse in the absence of coordination post-AI takeover (assuming takeover does happen).
On @Lukas_Gloor’s democracy point: I think the big divergence here is that I don’t expect enough people to buy into a regime of peacefulness/cooperation absent dictators, because identity issues will become much more salient relative to material issues. Democracy and other non-dictatorial systems rely on people being willing to preserve the system that exists, and the reasons those systems are preserved are almost certainly a combination of (a) instrumental usefulness, which will drastically decline with AI tech, and (b) identity issues being less salient than material issues, which has held up only imperfectly through the 20th century.
Identity issues are very, very easy to make existential, and groups of people who believe their group is existentially threatened by democracy will turn to anti-democratic means to save their group (which is already happening). And one of the most consistent trends is that as people get wealthier, identity and status matter much more than material/economic issues.