Why would a good AI policy be one which takes as a model a universe where world-destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains are not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
The point is that I would have expected things to be worse, and I imagine that a lot of others would have as well.
This is assuming that people understand what makes an AI so dangerous—calling an AI a global catastrophic risk isn’t going to motivate anyone who thinks you can just unplug the thing (and it’s even worse if it does motivate them, since then you have someone running around thinking the AI problem is trivial).
I think that people will understand what makes AI dangerous. The arguments aren’t difficult to understand.
The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don’t see how this is evidence for rationality as such (and if we are redefining rationality to include dictators and crony politicians, I don’t know what to say).
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean “rational with respect to being able to run a country,” which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
Why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about?
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I’m taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, we can just keep chugging along because nice things can be “expected to increase over time”, and this will somehow result in the kind of society we need. [...]
I agree that AI safety requires a substantial shift in perspective — what I’m claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
Also, I really don’t know where you got that last idea—I can’t imagine that most people would find AI safety more glamorous than, you know, actually building a robot.
You don’t need “most people” to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn’t the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they’ll give research grants and prestige to people who work on AI safety.
Things were a lot worse than anyone knew: Russia almost invaded Yugoslavia in the 1950s, which, according to newly declassified NSA journals, would have triggered a war. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed probably better than a large fraction of military and civilian observers would have guessed.
World War Three seems certain to significantly decrease the human population. From my point of view, I can’t rule out anthropic reasoning as an explanation for why there wasn’t such a war before I was born.
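To make the anthropic point concrete, here is a minimal Monte Carlo sketch (the per-year war probabilities are illustrative assumptions, not estimates): in every simulated world where the observer survives, the observer looks back on a war-free history, so the bare observation “no war happened before I was born” is weak evidence about how probable a war really was.

```python
import random

# Hypothetical sketch of the anthropic selection effect (the per-year war
# probabilities below are made-up illustrations, not estimates). If a
# full-scale nuclear war would have prevented the observer from existing,
# then every observer who is around to look back sees a war-free history,
# no matter how risky the world actually was.

def surviving_worlds(p_war_per_year, years=40, trials=100_000):
    """Fraction of simulated worlds in which no war occurs within `years` years."""
    survivors = sum(
        1 for _ in range(trials)
        if not any(random.random() < p_war_per_year for _ in range(years))
    )
    return survivors / trials

for p in (0.001, 0.01, 0.05):
    # Even at 5% per year, roughly 13% of worlds survive 40 years, and
    # observers in those worlds all "observe" a spotless war-free record.
    print(f"p_war={p}: {surviving_worlds(p):.3f} of worlds survive")
```

Even at a 5% annual risk, about 0.95^40 ≈ 13% of worlds reach year 40 without a war, and the observers in those worlds can’t distinguish their situation from a genuinely safe world just by noting that no war occurred.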
We still get people occasionally who argue the point while reading through the Sequences, and that’s a heavily filtered audience to begin with.
There’s a difference between “sufficiently difficult that a few readers of one person’s exposition can’t follow it” and “sufficiently difficult that, after being in the public domain for 30 years, the arguments won’t have been distilled so as to be accessible to policy makers.”
I don’t think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I’d concede that this is not immediately obvious.
Thanks for engaging.