I think I mostly agree with everything you say in this last comment, but I don’t see how my previous comment disagreed with any of that either?
The thing I care about here is not “what happens as a mind grows”, in some abstract sense.
The thing I care about is, “what is the best way for a powerful system to accomplish a very difficult goal quickly/reliably?” (which is what we want the AI for)
My lists were intended to be about that. We could rewrite the first list in my previous comment to:
- more advanced minds have more and better and more efficient technologies
- more advanced minds have an easier time getting any particular thing done, see more/better ways to do any particular thing, can consider more/better plans for any particular thing, have more and better methods for any particular context, have more ideas, ask better questions, would learn any given thing faster
- and so on
and the second list to:
- more advanced minds eventually (and maybe quite soon) get close to never getting stuck
- more advanced minds eventually (and maybe quite soon) get close to being unexploitable
- and so on
I think I probably should have included “I don’t actually know what to do with any of this, because I’m not sure what’s confusing about ‘Intelligence in the limit.’” in the part of your shortform I quoted in my first comment, since that’s the thing I’m trying to respond to. The point I’m making is:
There’s a difference between stuff like (a) “you become less exploitable by [other minds of some fixed capability level]” and stuff like (b) “you get close to being unexploitable”/“you approach a limit of unexploitability”.
I could easily see someone objecting to claims of kind (b) while accepting claims of kind (a), not least because I think those are probably the correct positions.
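To gesture at the structural difference with made-up notation (E, m, and c are placeholders I’m introducing here for “exploitability”, the mind’s capability, and an adversary’s capability; this is a rough sketch, not a precise model):

\[
\text{(a)}\quad \text{for each fixed adversary capability } c:\ E(m, c) \text{ is decreasing in } m
\]
\[
\text{(b)}\quad \sup_{c} E(m, c) \to 0 \ \text{ as } m \to \infty
\]

Claim (a) only says a more capable mind does better against any fixed opponent; claim (b) says exploitability actually approaches a limit of zero, even against arbitrarily capable opponents. (a) can hold indefinitely without (b) ever becoming true, for instance if E(m, c) keeps decreasing in m but levels off well above zero.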
Yeah, it doesn’t necessarily disagree with it. But the way you framed the question seemed to imply that those things were only in some sense false/confused because they are asking the wrong question.
I think “more advanced” still isn’t really the right way to frame the question, because “advanced” is very underspecified.
If we replaced “more advanced minds” with “minds that are better at doing very difficult stuff” or other reasonable alternatives, I would still make the (a) vs (b) distinction, and still say type (b) claims are suspicious.
The structural thing is less about the definition of “what sort of mind” and more about, instead of saying “gets more X”, asking “if process Z is causing X to increase, what happens?” (call this a type (c) claim).
But I’m also not sure what feels suspicious about type (b) claims to you, when X is at least pinned down a bit more.
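As a very rough sketch of the three shapes of claim, with X standing in for whatever quantity we care about and Z for the process driving it (placeholder notation only, not a real model):

\[
\text{(a)}\quad X(m) \text{ is larger for more capable minds } m
\]
\[
\text{(b)}\quad X(m) \to X_{\max} \ \text{ as } m \to \infty
\]
\[
\text{(c)}\quad \dot{X} = Z(X, \text{environment}), \ \text{and we ask what this process actually does over time}
\]

The type (c) question is about the trajectory the process produces, not just about whether X goes up or whether some limit is approached.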