Also, your point about human plans not looking like randomly sampled plans is a point against your intuition that multi-level search processes will tend to generate such plans.
I think Mr. Bensinger’s argument is about plans being random with respect to human plans, while I read your answer as interpreting “randomness” as an inherent property of the plans themselves.
Humans do not look random to other humans. But that is not an argument for anything other than not looking random to humans.
It’s true that if humans were reliably very ambitious, consequentialist, and power-seeking, then this would be stronger evidence that superintelligent AI tends to be ambitious and power-seeking. So the absence of that evidence has to be evidence against “superintelligent AI tends to be ambitious and power-seeking”, even if it’s not a big weight in the scales.
Current ML work is on track to produce things that are, in the ways that matter, more like “randomly sampled plans” than like “the sorts of plans a civilization of human von Neumanns would produce”. (Before we’re anywhere near being able to produce the latter sorts of things.)[9]
We’re building “AI” in the sense of building powerful general search processes (and search processes for search processes), not building “AI” in the sense of building friendly ~humans but in silicon.
Mainly from the second paragraph, I got the impression that “randomly sampled plans” referred to, or at least included, what is the goal, not just how much you optimize it. Anyway, I think I’m losing the thread of the discussion, so whatever.