I’m confused about why you only updated mildly away from slow takeoff. It seems that you’ve got a pretty good argument against slow takeoff here:
Are there simple changes to chimps (or other animals) that would make them much better at accumulating culture?
Will humans continually pursue all simple yet powerful changes to our AIs?
Seems like if the answer to the first question is No, then there really is some relatively sharp transition to much more powerful culture-accumulating capabilities, one that humans crossed when they evolved from chimp-like creatures. Thus, our default assumption should be that as we train bigger and bigger neural nets on more and more data, there will also be some relatively sharp transition. In other words, Yudkowsky’s argument is correct.
Seems like if the answer to the second question is No, then Paul’s disanalogy between evolution and AI researchers is also wrong; both evolution and AI researchers are shoddy optimizers that sometimes miss simple yet powerful changes. So Yudkowsky’s argument is correct.
Now, you put 50% on the first answer being No and 70% on the second answer being No. So shouldn’t you have something like 85% credence that Paul is wrong and Yudkowsky’s argument is correct? And isn’t that a fairly big update against slow takeoff?
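Spelling out that arithmetic, under the assumption that the two answers are independent (an assumption the 85% figure needs):

\[
P(\text{at least one No}) = 1 - (1 - 0.5)(1 - 0.7) = 1 - 0.5 \times 0.3 = 0.85.
\]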
Maybe the idea is that you are meta-uncertain, unsure you are reasoning about this correctly, etc.? Or maybe the idea is that Yudkowsky’s argument could easily be wrong for other reasons than the ones Paul gave? Fair enough.
So my reasoning is something like:
There’s the high-level argument that AIs will recursively self-improve very fast.
There’s support for this argument from the example of humans.
There’s a rebuttal to that support from the concept of changing selection pressures.
There’s a counterrebuttal to changing selection pressures from my post.
By the time we reach the fourth level down, there’s not that much scope for updates on the original claim, because at each level we lose confidence that we’re arguing about the right thing, and also we’ve zoomed in enough that we’re ignoring most of the relevant considerations.
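As a toy model of that discounting (the symbols \(r\) and \(k\) and the numbers here are illustrative, not anyone’s stated credences): if each level of the debate bears on the level above it with probability \(r < 1\), then a consideration \(k\) levels down should move credence in the top-level claim by at most roughly \(r^{k-1}\) of its face-value force:

\[
\Delta_{\text{top}} \lesssim r^{\,k-1} \cdot \Delta_{\text{face}}, \qquad \text{e.g. } r = 0.7,\; k = 4 \implies 0.7^{3} \approx 0.34.
\]

On those illustrative numbers, even a consideration that looks decisive at the fourth level would move the top-level credence by only about a third of its apparent strength.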
I’m confused. I thought the intended argument is “Yes, there are simple changes to chimps that make them much better at accumulating culture; similarly we should expect there to be simple changes to neural nets that much improve their capabilities, and so just as humans had a ‘fast takeoff’ so too will neural nets”.
This implies that a “Yes” to Q1 supports fast takeoff. And I tend to agree with this—if there are only complicated changes that lead to discontinuities, then why expect that we will find them?
(Like, there is some program we can write that would be way, way more intelligent than us. You could think of that as a complicated change. But surely the existence of a superintelligence doesn’t tell us much about takeoff speeds.)
I also interpreted Richard as arguing that a “Yes” to Q1 would support fast takeoff, though I found it hard to follow the reasoning on how Q1 and Q2 relate to takeoff speeds (will write a top-level comment about this after this one).
Very good point; now I am confused. Tentatively, I think Richard was too quick to make “Are there simple changes to chimps (or other animals) that would make them much better at accumulating culture?” the crux on which “human progress would have been much less abrupt if evolution had been optimising for cultural ability all along” depends.