NihilCredo said:

But it’s much easier to get people to look at an interesting problem so you can then persuade them that it’s serious, than it is to convince them that they are about to die in order to make them look at your problem.

Notice that Nihil didn’t propose never mentioning the urgency you believe exists, just not using it as your rallying cry.

I got fascinated by Friendliness theory despite never believing in the Singularity (in fact, I knew little about the idea beyond that it was being argued by extrapolating Moore’s Law, which explains why I didn’t buy it).

Other people could be drawn in by the interesting philosophical and practical challenges of Friendliness theory without the FOOM threat.

It is more important to convince the AGI researchers who see themselves as practical people trying to achieve good results in the real world than people who like an interesting theoretical problem.

Because people who like theoretical problems are less effective than people trying for good results? I don’t buy it.

No. Because it is better for the people who would otherwise be working on dangerous AGI to realize they should not do that, than to have people who would never have worked on AI at all remark that the dangerous AGI researchers shouldn’t do it.