I was struck by how much I broadly agreed with almost everything Robin said. ETA: The key points of disagreement are a) I think principal-agent problems with a very smart agent can get very bad (see my comment above), and b) on my inside view, timelines could be short (though I agree that from the outside, timelines look long).
To answer the questions:
Setting aside everything you know except what this looks like from the outside, would you predict AGI happening soon?
No.
Should reasoning around AI risk arguments be compelling to outsiders outside of AI?
Depends on which arguments you're talking about, but if you rely just on the arguments and reasoning themselves (as opposed to, e.g., trusting the views of people worried about AI risk), I don't think they justify devoting lots of resources to AI risk.
What percentage of people who agree with you that AI risk is big, agree for the same reasons that you do?
Depending on the definition of “big”, I may or may not think that long-term AI risk is big. I do think AI risk deserves more attention than most other future scenarios, though something like 100 people thinking about it seems quite reasonable to me.
I think most people who agree do so for a similar broad reason, which is that agency problems can get very bad when the agent is much more capable than you. However, the details of the specific scenarios they are worried about tend to be different.