Eliezer will have lots of specific things to say in his forthcoming “Open Problems in Friendly AI” sequence (I know; I’ve seen the outline). In any case, wouldn’t it be a lot more unsettling if, at this early stage, we pretended we knew enough to commit entirely to one very particular approach?
It’s unsettling that this is still an early stage. SI has been around for over a decade. I’m looking forward to the open problems sequence; perhaps I should shut up about the lack of explanation of SI’s research for now, considering that the sequence seems like a credible promise to remedy this.