(This is in response to a comment of brynema’s elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking “already discussed exhaustively” questions to the welcome thread. Or if not here, to some beginner-friendly area for discussing or debating background material.)
brynema wrote:
So the idea is that a unique, complex thing may not necessarily have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.
Kind of. The idea is that:
Both human minds, and whatever AIs can be built, are mechanistic systems. We’re complex, but we still do what we do for mechanistic reasons, and not because the platonic spirit of “right thing to do”ness seeps into our intelligence.
Goals, and the “optimization power / intelligence” with which to figure out how to reach those goals, are separable to a considerable extent. You can build many different systems, each of which is powerfully smart at figuring out how to hit its goals, but each of which has a very different goal from the others. (The toy sketch just below this list gestures at what “same optimizer, different goal” looks like.)
Humans, for example, have some very specific goals. We value, say, blueberry tea (such a beautiful molecule...), or particular shapes and kinds of meaty creatures to mate with, or particular kinds of neurologically/psychologically complex experiences that we call “enjoyment”, “love”, or “humor”. Each of these valued items has tons of arbitrary-looking details; just as you wouldn’t expect to find space aliens who speak English as their native language, you also shouldn’t expect an arbitrary intelligence to have human (as opposed to parrot, octopus, or such-and-such variety of space aliens) aesthetics or values.
If you’re dealing with a sufficiently powerful optimizing system, the question isn’t whether it would assign some value to you. The question is whether you are the thing that it would value most of all, compared to all the other possible things it could do with your atoms/energy/etc. Humans re-arranged the world far more than most species, because we were smart enough to see possibilities that weren’t in front of us, and to figure out ways of re-arranging the materials around us to better suit our goals. A more powerful optimizing system can be expected to change things around considerably more than we did.
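Here is a deliberately simple sketch of the separability point, in Python. This is my own illustration, not anything from Eliezer's posts, and the names (`hill_climb`, `goal_a`, `goal_b`) are made up for the example: the same generic search procedure, with no code changed, can be pointed at two completely unrelated objectives and will competently pursue whichever one it is handed.

```python
# Toy illustration (assumed, not from the original discussion): the optimizer
# knows nothing about *what* it is optimizing, only *how* to climb whatever
# objective function it is handed. "Smarts" and "goal" are separate pieces.
import random

def hill_climb(objective, start=0.0, steps=10_000, step_size=0.1):
    """Generic optimizer: repeatedly tries nearby points and keeps improvements."""
    x = start
    best = objective(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        score = objective(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

# Two arbitrary, unrelated "goals". The optimizer itself is unchanged.
goal_a = lambda x: -(x - 3.0) ** 2   # "wants" x to be near 3
goal_b = lambda x: -abs(x + 7.5)     # "wants" x to be near -7.5

print(hill_climb(goal_a))  # converges near x = 3
print(hill_climb(goal_b))  # converges near x = -7.5
```

Nothing about the search procedure singles out one goal as the “right” one; you get a different, equally competent optimizer just by swapping the objective.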
That was terribly condensed, and may well not make total sense at this point. Eliezer’s OB posts fill in some of this in considerably better detail; also feel free, here in the welcome thread, to ask questions or to share counter-evidence.