I don’t see why you disagree. The OP shows what happens if you take ideas too seriously. You can justify any atrocity with the right variables in your probability and utility calculations. If you had to kill most humans now in order to gain 1000 years to come up with a solution to FAI, and by that act secure 10^100 years of paradise, then I think some people here would follow through on it.
I disagreed with the idea of disrupting randomly chosen research efforts, because I don’t see how it improves the chances that the first AI will be friendly, and it would only push back the date of the singularity by a few years. I somewhat facetiously exempted the idea of causing a collapse of civilization from my disapproval, because I see the Friendliness problem as mathematical/philosophical, whereas the AGI problem probably also has a technological component. So I imagine that a collapse would set back AGI research farther than Friendliness research.
I agree with your horror at the potential atrocities that can be justified once people start worrying about optimizing undiscounted distant-future utilities. A failure to discount is, so far as I can tell, the source of almost all of the insanity in this community’s take on the Singularity. Nonetheless, as I say in my last paragraph, working to make sure that the first superhuman AGI is friendly is a pretty damned good idea.
A failure to discount is, so far as I can tell, the source of almost all of the insanity in this community’s take on the Singularity.
Probably the main idea at issue here is the proposal that, within a few decades, not long after we get powerful machine intelligence, something really, really bad might happen, and that we can influence whether or not it does.
I might differ considerably on the p()s and details—but I think that proposition is a reasonable one.
Since the event in question will probably fall within the lifetimes of many here, I think it is close enough that many people’s temporal discounting leaves their concern for it at least partially intact.

Agree / disagree?
Don’t really understand the question. Our expectations about what happens are not affected by whether we discount or not. The probability I assign to the event “FOOMing AI within 40 years” is the same regardless of what discount rate I use. Same goes for the probability I assign to the proposition that “The first superhuman AI will tile the universe with paperclips.” Or the proposition that “The first superhuman AI will tile the universe with happy humans.”
What discounting or not discounting changes is how I feel about those possibilities. If I use a low discount rate, the future means a lot to me, and I should most rationally sell all I own and contribute it to the SIAI, pretty much regardless of how little I think of SIAI’s effectiveness. If I use a higher discount rate, then it is possible that I care more about what happens in the next 40 years than I do about anything that happens after 2050. I don’t see uFAI that far out as such a horrible disaster. And I don’t see FAI as incredibly awesome either, if it doesn’t appear quickly. I would be much more impressed to see a cure for malaria next year.
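To make the arithmetic behind this concrete, here is a minimal sketch (in Python, with made-up utility figures rather than anyone’s actual estimates) of how an exponential discount rate caps the present value of an arbitrarily long stream of future utility, while a zero rate lets it grow without bound:

```python
# Illustrative only: present value of a constant utility stream under
# exponential discounting. All utility figures are invented for the example.

def discounted_total(utility_per_year, years, annual_rate):
    """Sum of utility_per_year received each year for `years` years, discounted."""
    if annual_rate == 0:
        return utility_per_year * years              # no discounting: grows without bound
    d = 1.0 / (1.0 + annual_rate)                    # per-year discount factor
    # Geometric series: sum_{t=1..years} d**t = d * (1 - d**years) / (1 - d)
    return utility_per_year * d * (1.0 - d ** years) / (1.0 - d)

u = 1.0                                              # utility of one year of "paradise"
print(discounted_total(u, 10 ** 100, 0.0))           # ~1e100: astronomical, dominates everything
print(discounted_total(u, 10 ** 100, 0.03))          # ~33.3: bounded, however long paradise lasts
print(discounted_total(u, 40, 0.03))                 # ~23.1: the next few decades carry most of it
```

Under any positive discount rate, the 10^100 years of paradise in the thought experiment above contributes only a bounded amount to the calculation, so it can no longer be made to outweigh present-day atrocities.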
Our expectations about what happens are not affected by whether we discount or not.
Of course. What I was trying to get at was whether a few decades is too far away for you, or whether those ideas are not what you mean and you are instead talking about some other “insanity” to do with events further out in the future.
A few decades is not that far out—for many people.
You are still missing my point. The insanity has to do with utilities farther out in the future, not events farther out in the future. ‘Insane’ people and AGIs care about those utilities, and care a lot. Me, and most other people, not so much.
Most people will worry about the happiness of their grandchildren, but few really care about their great^n-grandchildren when n rises to double digits. And even if they do care about future generations on a par with the next one, they probably normalize for population size, so that they don’t think that future generations collectively are more important than current ones.
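As a rough sketch of what that normalization does (the population figures are invented for illustration, not estimates):

```python
# Illustrative only: how per-generation population normalization changes the
# weight a far-future generation gets relative to the current one.

def collective_weight(population, per_capita_weight, normalize):
    """Weight an entire generation receives in the aggregate calculation."""
    if normalize:
        return per_capita_weight             # one generation counts once, whatever its size
    return per_capita_weight * population    # total view: weight scales with head count

current_pop, future_pop = 1e10, 1e20         # invented head counts

# Total-utility view: the future generation swamps the present one.
print(collective_weight(future_pop, 1.0, normalize=False) /
      collective_weight(current_pop, 1.0, normalize=False))   # 1e10

# Population-normalized view: the two generations count equally.
print(collective_weight(future_pop, 1.0, normalize=True) /
      collective_weight(current_pop, 1.0, normalize=True))    # 1.0
```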
You are still missing my point. The insanity has to do with utilities farther out in the future, not events farther out in the future. ‘Insane’ people and AGIs care about those utilities, and care a lot. Me, and most other people, not so much.
The utilities you would calculate are not big enough for “normal” people to worry about? This is the end of the human race in a few decades we are talking about—right?
Even if you ignore the possibility of “insane” astronomical waste, that would still be a matter of some concern to many, no?