Thanks for writing this! Your point about chimps vs. humans was new to me and I’ll chew on it.
Because I think this post is important enough, I’m gonna really go above and beyond here: I’m gonna set a timer and think for 5 minutes, with paper, before writing the rest of my comment.
Okay, that was fun. What I got out of reading your post twice (once before writing this and once during the 5-minute timer) was basically the following. Let me know if this is an accurate summary of your position:
AI researchers will try to optimize AI to be as useful as possible. There are many tradeoffs that need to be navigated in order to do this, e.g. between universality and speed, and the Pareto boundary describing all of these tradeoffs is likely to grow slowly / continuously. So we probably won’t see fast / discontinuous Pareto improvements in available AIs.
I find this reasonably persuasive, but I’m not convinced the positions you’ve described as slow takeoff vs. fast takeoff are what other people mean when they talk about slow vs. fast takeoff.
I’d be interested to see you elaborate on this:
There is another reason I’m skeptical about hard takeoff from universality secret sauce: I think we already could make universal AIs if we tried (that would, given enough time, learn on their own and converge to arbitrarily high capability levels), and the reason we don’t is because it’s just not important to performance and the resulting systems would be really slow. This inside view argument is too complicated to make here and I don’t think my case rests on it, but it is relevant to understanding my view.
(Also, meta: I notice that setting the 5-minute timer had the effect of shifting me out of “look for something to disagree with” and into “try to understand what is even being claimed in the first place.” Food for thought!)
I agree that some people talking about slow takeoff mean something stronger (e.g. “no singularity ever”), but I think that’s an unusual position inside our crowd (and even an unusual position amongst thoughtful ML researchers), and it’s not e.g. Robin’s view (who I take as a central example of a slow takeoff proponent).
Cool. It’s an update to my models to think explicitly in terms of the behavior of the Pareto boundary, as opposed to in terms of the behavior of some more nebulous “the current best AI.” So thanks for that.
“It’s not e.g. Robin’s view (who I take as a central example of a slow takeoff proponent).”
Would you predict Robin to have any major disagreements with the view expressed in your write-up?
I found myself more convinced by this presentation of the “slow” view than I usually have been by Robin’s side of the FOOM debate, but nothing is jumping out to me as obviously different. (So I’m not sure if this is the same view, just presented in a different style and/or with different arguments, or whether it’s a different view.)
Robin makes a lot of more detailed claims (e.g. about things being messy and having lots of parts) that are irrelevant to this particular conclusion. I disagree with many of the more detailed claims, and think they distract from the strongest part of the argument in this case.
That’s an accurate summary of my position.
Where is that second quote from? I can’t find it here.
It’s from the linked post under the section “Universality thresholds”.