One of Hanson’s points is:

> As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

which seems to be exactly what Eliezer is saying, so the crux of the disagreement is about this. Hanson calls it a “postulate” while Eliezer claims to derive it from rather general principles.

I have no firm view on the topic, and the expert opinions seem to differ quite a bit.