I just finished reading Eliezer's April Fools' Day post, where he illustrated how good society could be. A future society filled with rational people, structured the way Eliezer describes, that keeps making linear (rather than explosive) technological progress would be pretty amazing. What is it that the intelligence explosion would provide of value that this society wouldn't?
Put differently, diff(intelligenceExplosion, dath ilan).
Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect that with an intelligence explosion, death becomes very rare (or horrifically common, as in extinction).
Only a few people die. Once they figure out how to cure death, they'll stop dying, and the vast majority of people who will ever live in dath ilan will exist after that point.
There are two main differences I can see.
First, superintelligence can create a better utopia. After astronomical amounts of time, dath ilan may have the necessary technology, but I suspect they would lack the understanding of their own utility function that an FAI would have. They are also not immune to politics, and will act suboptimally because of that.
Second, there’s a not insignificant chance of dath ilan being wiped out by some kind of existential risk before they’re advanced enough to prevent it.
I agree with the idea that the AI will help with existential risk.
What I'm asking is: what would this utopia have, in particular, that dath ilan wouldn't? The next question then becomes how much better a society with those things would be than a dath ilan-like society. I'm having trouble imagining an answer to the first question, so I can't even begin on the second.
Dath ilan would refrain from optimizing humanity (making people happier, having them use fewer resources, etc.) for fear of optimizing away their humanity. An FAI would know exactly what a person is, and would be able to optimize them much better.
How?
The only answer I could really imagine starts to get into the territory of wireheading. But if that’s the end that we seek, then we’re pretty much there now. Soon enough we’ll have the resources to let everyone wirehead as much as they want. If that’s true, then why even bother with FAI (and risk things going wrong with it)? (Note: I suspect that FAI is worth it. But this is the argument I make when I argue against myself, and I don’t really know how to respond.)
Exactly. If dath ilan tried to do it, they’d get well into the territory of wireheading. Only an FAI could start to get there, and then stop at exactly the right place.
Even if you’re totally in favor of wireheading, whatever it is you’re wireheading has to be sentient. Dath ilan would have to use an entire human brain just to be sure. An FAI could make an optimally sentient orgasmium.
That's just happiness, though. An FAI could create new emotions from scratch. Nobody values complexity in itself; if we did, we could just set fire to everything, since burning things increases entropy. The key is figuring out exactly what it is we value, so we can tell whether a complicated system is valuable. An FAI could give us a very interesting set of emotions.
dath ilan seems to have a specific kind of political correctness when it comes to not talking about certain issues that differ from our own, and I don't think an intelligence explosion is simply going to change this.