This is very worrying, especially in light of the lack of a public research agenda. SI’s inability to describe its research agenda suggests a troubling possibility: that they cannot describe it because they do not know what they are doing, because FAI is such a ridiculously hard problem that they have no idea where to begin. I’m hoping that SI will soon be able to make it clear that this is not the case.
Yeah, this is the point of Eliezer’s forthcoming ‘Open Problems in Friendly AI’ sequence, which I personally wish he had written in 2009 after his original set of sequences.