Self-optimization is what makes friendliness a serious problem.
Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I’ve often wondered whether the Friendliness problem would be better recognized and accepted if it were presented without reference to the Singularity.)
Edit: See also Vladimir Nesov’s summary, which is quite good, but not quite as short as you’re looking for here.
Friendliness would certainly be worth pursuing—it applies to a lot of human issues in addition to what we want from computer programs.
Still, concerns about FOOM are the source of the urgency here.
Concerns about FOOM are also what makes SIAI look like (and some posters talk like) a loony doom cult.
Skip the “instant godlike superintelligence with nanotech arms” shenanigans, and AI ethics still remains an interesting and important problem, as you observed.
But it’s much easier to get people to look at an interesting problem so you can then persuade them that it’s serious, than it is to convince them that they are about to die in order to make them look at your problem. Especially since modern society has so inured people to apocalyptic warnings that the wiser half of the population takes them with a few kilograms of salt to begin with.
Statements like this make posters look like they confuse rationality with the rejection of non-intuitive ideas.
Just because a rational person would believe something, doesn’t mean a rational person would say that thing.
If telling people the fate of the world depends on you is going to make them less likely to listen, you probably shouldn’t tell them that. Especially if it’s true (because that just makes it more important that they listen).
FOOM is central to the argument that we need to solve Friendliness up front, rather than build it incrementally as patches to a slowly growing AGI. If you leave it out to get past weirdness censors, you can no longer support the same conclusions.
NihilCredo said: “But it’s much easier to get people to look at an interesting problem so you can then persuade them that it’s serious, than it is to convince them that they are about to die in order to make them look at your problem.”
Notice that Nihil didn’t propose never mentioning the urgency you believe exists, just not using it as your rallying cry.
I got fascinated by Friendliness theory despite never believing in the Singularity (and in fact, not knowing much about the idea other than that it was being argued on the basis of extrapolating Moore’s law, which explains why I didn’t buy it).
Other people could be drawn in by the interesting philosophical and practical challenges of Friendliness theory without the FOOM threat.
It is more important to convince the AGI researchers who see themselves as practical people trying to achieve good results in the real world than to convince people who merely like an interesting theoretical problem.
Because people who like theoretical problems are less effective than people trying for good results? I don’t buy it.
No. Because it is better for the people who would otherwise be working on dangerous AGI to realize they should not do that, than to have people who would never have worked on AI at all commenting that the dangerous AGI researchers shouldn’t do it.
The Hidden Complexity of Wishes
I do not understand your point. Would you care to explain?
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
Oh, I misunderstood your link. I agree, that’s a good summary of the idea behind the “complexity of value” hypothesis.