So if I’m looking for black swans in your model, or hidden corners of it, this is where I would look. Does that make sense?
Is there any reason why this set of objections applies more to AIs than to humans? It sounds like you are rejecting the premise of having AIs that are as good as the best human researchers/engineers.
I believe so. If you look at Michael Levin’s work, he has a well-put concept of a very efficient memorisation algorithm that compresses all past data into a very small bandwidth, which is then mapped onto a large future cognitive lightcone. Algorithmically, the main advantage biological systems have is very efficient re-sampling algorithms. For a human, basically the only way to do this is to frame-shift, so there has been large optimisation pressure for frame-shifting.
The way training algorithms currently work seems to be pointing towards a regime in which this capacity is much more weakly optimised for.
If we look at the psychology literature on creative thinking, it often divides things up into convergent and divergent thinking. There is also the division between generation and selection, and I think the specific capacity of divergent selective thinking depends on frame-shifting: it is the difficult skill of “rejecting or accepting the frame” itself.
I think the philosophy of science agrees with this. So I can agree with you that we will see a large speed-up, but Amdahl’s law and all that: if selective divergent thinking is already the bottleneck, will AI systems really speed things up that much?
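To make the Amdahl’s law point concrete, here is a minimal sketch; the 70%/30% split and the 100x factor are made-up numbers for illustration, not claims about actual research workflows:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when `accelerated_fraction` of the work is sped up
    by `factor` and the remaining fraction runs at the old pace (Amdahl's law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Hypothetical numbers: say 70% of research time is generative work that AI
# accelerates 100x, while the other 30% is divergent selective thinking
# that stays human-paced.
print(round(amdahl_speedup(0.7, 100), 2))  # 3.26
print(round(amdahl_speedup(0.7, 1e9), 2))  # 3.33 -- capped at 1 / (1 - 0.7)
```

The point being that even an arbitrarily large speed-up on the accelerated part saturates at 1 / (non-accelerated fraction), so the bottleneck dominates.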
(I believe that the really hard problems fall within the divergent selective camp, as they’re often related to larger conceptual questions.)