Are they so deluded as to seriously believe they will be the first to build an AGI? I hope not.
What I’ve inferred from statements of key SI folk (most especially Luke) is that they don’t think this likely, but they think the possible futures in which it happens are vastly superior to the ones in which it doesn’t, so they’re working towards it anyway.
the best they can do is to disseminate the results of their research in a way that will maximize the number of AI researchers who will notice it and take it seriously
Yeah, this seems pretty plausible to me as well. (Though also pretty unlikely.)
FWIW, my understanding of SI’s original chosen strategy for making AI researchers take LW’s ideas about Friendliness seriously was to publicize the Sequences, which would improve the general rationality of people everywhere (aka “raise the sanity waterline”), which would improve the rationality of AI researchers (and those who fund them, etc), which would increase the chances of AI researchers embracing the importance of Friendliness, which would increase the chances of FAI being developed before UFAI, which would save the world.
From what I can tell, SI has since then moved on to other strategies for saving the world, like publishing the Sequences in book form, publishing popular fiction, holding minicamps, etc., but all built on the premise that “raising the sanity waterline” among the most easily reached people is a more viable approach than attempting to reach specific audiences like professional researchers.
That seems to be an inefficient approach.
Even if you accept the premise that you can “teach” rationality to AI researchers capable of building an AGI (who probably would not be idiots, but who might well be affected by biases), doing so is still an extremely unfocused way to accomplish the task of advancing the state of the art in machine ethics.
If you want to advance the state of the art in machine ethics, then the most efficient way to do it is to do actual research on machine ethics. If AI researchers don’t take machine ethics as seriously as you think they should, then the most efficient way to convince them is to put forward your arguments in forms and media that are accessible and salient to them.
Once you go for peer review, you may of course receive negative feedback. That can mean one of two things: either your core claims are wrong, in which case you should recognize that, stop wasting your effort, and move on to something else; or your arguments are unconvincing or unclear, in which case you should improve them, since it is your responsibility to make yourself understood.
My admittedly incomplete understanding is that “raising the sanity waterline” activities have now been spun off to the Center for Applied Rationality, which is either planning to incorporate as a non-profit or already incorporated. This would then leave SIAI as focusing on the strictly AGI- and Friendliness-related stuff.
Ah. I’m aware of the SI/CFAR split, but haven’t paid much attention to what activities are owned by which entity, or how separate their staffs and resources actually are. E.g., I haven’t a clue which entity sponsors LW, if either, or even whether it’s possible to distinguish one condition from the other.
From the information available on their websites, it seems that LW is still operated by SI.
I suggest splitting it off and operating it as a charity separate from both SI and CFAR.