I ask because you’re one of the most prolific participants here but don’t fall into one of the existing “camps” on AI risk for which I already have good models.
Seems right; I think my opinions fall closest to Paul’s, though it’s also hard for me to tell what Paul’s opinions are. I think this older thread is a relatively good summary of the considerations I tend to think about, though I’d place different emphases now. (Sadly I don’t have the time to write a proper post about what I think about AI strategy—it’s a pretty big topic.)
The current situation seems to be that we have two good (relatively clear) terms “technical accidental AI risk” and “AI-caused x-risk” and the dispute is over what plain “AI risk” should be shorthand for. Does that seem fair?
Yes, though I would frame it as “the ~5 people reading these comments have two clear terms, while everyone else uses a confusing mishmash of terms”. The hard part is in getting everyone else to use the terms. I am generally skeptical of deciding on definitions and getting everyone else to use them, and usually try to use terms the way other people use terms.
In other words I don’t think this is strong evidence that all 4 people would endorse defining “AI risk” as “technical accidental AI risk”. It also seems notable that I’ve been using “AI risk” in a broad sense for a while and no one has objected to that usage until now.
Agreed with this, but see above about trying to conform with the way terms are used, rather than defining terms and trying to drag everyone else along.
This seems odd given your objection to “soft/slow” takeoff usage and your advocacy of “continuous takeoff” ;)
I don’t think “soft/slow takeoff” has a canonical meaning—some people (e.g. Paul) interpret it as not having discontinuities, while others interpret it as capabilities increasing slowly past human intelligence over (say) centuries (e.g. Superintelligence). If I say “slow takeoff” I don’t know which one the listener is going to hear it as. (And if I had to guess, I’d expect they think about the centuries-long version, which is usually not the one I mean.)
In contrast, I think “AI risk” has a much more canonical meaning, in that if I say “AI risk” I expect most listeners to interpret it as accidental risk caused by the AI system optimizing for goals that are not our own.
(Perhaps an important point is that I’m trying to communicate to a much wider audience than the people who read all the Alignment Forum posts and comments. I’d feel more okay about “slow takeoff” if I was just speaking to people who have read many of the posts already arguing about takeoff speeds.)