The question isn’t so much how to make programs that exceed human performance at any particular cognitive task; it is how to make programs that can reach or exceed human performance at the entire range of cognitive tasks we can deal with, and that we can expect to do at least as well as we would when dealing with challenges that we haven’t encountered yet.
In fewer words, mastering the trick of cross-domain optimization.
I don’t think that’s a good question at all.
My question is how to create more value, and trying to be better than humans in all things likely yields a suboptimal result for creating value for me: it force-feeds problems computers aren’t good at, while starving problems computers are good at.
The context of the OP, the hypothetical intelligence explosion, pretty much assumes this interpretation.
At the very least, it assumes that an AGI will be G enough to take a look at its own “code” (whatever symbolic substrate it uses for encoding the computations that define it, which may not necessarily look like the “source code” we are familiar with, though it may well start off being a human invention) and figure out how to change that code so as to become an even more effective optimizer.
“Create more value” doesn’t in and of itself lead to an intelligence explosion. It’s something that would be nice to have, but not a game-changer.
That cross-domain thing, which is where we still have the lead, is a game-changer. (Dribbling a ping-pong ball is cute, but I want to know what the thing will do with an egg. Dribbling the egg is right out. Figuring out that the egg is food is the kind of thing you want an AGI to be capable of.)