I totally understand that if this frame of reference is correct, then FOOM is what you have to accept
I’d say something a bit weaker: if you expect large acceleration in this scenario (>15x), then a software intelligence explosion looks likely. And this is one of the biggest cruxes in practice.
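(To gesture at why a threshold like >15x matters, here is a toy compounding-progress sketch. This is my own illustration, not a model from the post; the returns exponent `r` and the step size are made-up placeholders, not estimates.)

```python
# Toy sketch: software progress compounds when the rate of progress scales
# with the current capability level raised to an exponent r.
# r > 1: each gain makes further gains easier (explosion-like dynamics);
# r < 1: diminishing returns keep growth comparatively tame.
# All parameters below are illustrative placeholders.

def simulate(acceleration: float, r: float, steps: int = 30) -> float:
    level = 1.0  # capability level in arbitrary units
    for _ in range(steps):
        # progress per step scales with the AI-driven acceleration and
        # with the current level raised to r
        level += 0.01 * acceleration * level ** r
    return level

print(simulate(acceleration=15, r=1.2))  # superlinear returns: growth keeps accelerating
print(simulate(acceleration=15, r=0.7))  # same acceleration, but growth stays much tamer
```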
I would overall be more convinced of this view if it engaged with some philosophy of science, or with work from metascience such as Michael Nielsen’s or similar.
Is there any reason why this set of objections applies more to AIs than to humans? It sounds like you are rejecting the premise of having AIs which are as good as the best human researchers/engineers. I agree that these factors slow human AI R&D progress at AI companies, but given the condition of having AIs which are this capable, I don’t see why you’d expect them to bite harder for the AIs (maybe because the AI-run organization is somewhat bigger?). If anything, I’d guess that (if you accept the premise) AIs will be better at overcoming these obstacles due to better reproducibility, willingness to run methodology/metascience experiments on themselves, and better coordination.
All that said, I agree that a potentially key factor is that the AI capability profile might be weaker than humans’ in important ways at the time of first having full automation (with these difficulties overcome by AI advantages like narrow superhumanness, speed, vast knowledge, coordination, etc.). (Note that the scenario in the post is not necessarily talking about the exact moment you first have full automation.) I think this could result in full automation with AIs which are less generally smart, less good at noticing patterns, and less cognitively flexible. So these AIs might be differentially hit by these issues. Nonetheless, there is still the question of how hard it would be for further research to make these AI weaknesses relative to humans go away.
So if I’m looking for black swans or hidden corners in your model, this is where I would look, if that makes sense?
Is there any reason why this set of objections applies more to AIs than to humans? It sounds like you are rejecting the premise of having AIs which are as good as the best human researchers/engineers.
I believe so. If you look at Michael Levin’s work, he has this well-put concept of a very efficient memorization algorithm that compresses all past data into a very small bandwidth, which is then mapped onto a large future cognitive lightcone. Algorithmically, the main benefit that biological systems have is very efficient re-sampling algorithms; basically, the only way for a human to do this is to be able to frame-shift, and so there is large optimisation pressure for frame-shifting.
The way that training algorithms currently work seems to point in a direction where this capacity is much more weakly optimised for.
If we look at the psychology literature on creative thinking, it often divides things up into convergent and divergent thinking. We also have the division between selection and generation, and I think the specific capacity of divergent selective thinking depends on frame-shifting; I think this is the difficult skill of “rejecting or accepting the frame”.
I think the philosophy of science agrees with this, so I can agree with you that we will see a large speed-up. But, Amdahl’s law and all that: if selective divergent thinking is already the bottleneck, will AI systems really speed things up that much?
(I believe that the really hard problems are within the divergent selective camp as they’re often related to larger conceptual questions.)
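To put the Amdahl’s-law point in rough numbers, here is a minimal sketch; the split between frame-shifting work and the rest, and the 30x figure, are made-up placeholders rather than estimates:

```python
# Amdahl's law: if only a fraction f of research work is accelerated by a
# factor s, and the remaining (1 - f) is bottlenecked (e.g. on divergent
# selective thinking / frame-shifting), the overall speedup is capped.
# The numbers below are illustrative placeholders, not estimates.

def overall_speedup(f: float, s: float) -> float:
    """Total speedup when a fraction f of the work is sped up by s."""
    return 1.0 / ((1.0 - f) + f / s)

# 70% of AI R&D accelerated 30x, 30% bottlenecked -> only ~3x overall
print(overall_speedup(f=0.70, s=30))  # ≈ 3.1
# Even at 95% accelerated, the cap is ~12x, still under a >15x threshold
print(overall_speedup(f=0.95, s=30))  # ≈ 12.2
```

On this framing, the crux is how large the unaccelerated fraction really is, and whether AI systems can shrink it.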