What do you think about the vulnerable world hypothesis? Bostrom defines the vulnerable world hypothesis as:
If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.
(There’s a good collection of links about the VWH on the EA forum). And he defines “semi-anarchic default condition” as having 3 features:
1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population.
2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved.
3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves.
To me, the idea that we’re in a vulnerable world is the strongest challenge to the value of technological progress. If we are in a vulnerable world, the time we have left before civilizational devastation is partly determined by our rate of “progress.”
Bostrom doesn’t give us his probability estimate that the hypothesis is true. But to me it seems quite likely that at some point we’ll invent the technology that will screw us over (if we haven’t already). AI and engineered pandemics are the scariest potential examples for me.
Do you disagree with me about the probability of us being in a vulnerable world? Do you think we can somehow avoid discovering the civilization-destroying tech while only finding the beneficial stuff?
Or do you think we are in a vulnerable world, but that we can exit the “semi-anarchic default condition?” Bostrom’s suggestions (like having complete surveillance combined with a police state) for exiting the semi-anarchic default condition seem quite terrifying.
If you’ve written or spoken about this somewhere else, feel free to just point me there.
Sorry for coming in late, but I think this isn’t as strong an argument as I once thought, since there are issues with the proposed solutions to the VWH, which I’ll describe here:
My usual answer to VWH concerns is that the proposed solutions unreasonably assume we can align a global state and make sure it stays aligned. The default outcome is that a narrow interest group captures it. Also, states have more reason to pursue black-ball technologies like nukes, engineered pandemics and so on than gray- or white-ball technologies, so a global surveillance state would in time risk killing billions in service of narrow interest groups.
What is the “this” you’re referring to? As far as I can tell I haven’t presented an argument.
Specifically, I was referring to the argument against technological progress based on the Vulnerable World Hypothesis.