I think you interacted mostly with a rather unrepresentative sample of characters: most of the long-term SIAI folk have timelines about 15-20 years longer than good ol’ me and Justin. That said, it’s true that everyone is still pretty worried about AI-soon, whatever probability they put on it.
Well, 15-20 years doesn’t strike me as that much of a time difference, actually. But in any case I was really talking about my surprise at the amount of emphasis on “preventing UFAI” as opposed to “creating FAI”. Do you suppose that’s also reflective of a biased sample?
Well, 15-20 years doesn’t strike me as that much of a time difference, actually.
Really? I mean, relative to your estimate it might not be big, but absolutely speaking, doom in 15 years versus doom in 35 years seems to make a huge difference in expected utility.
Do you suppose that’s also reflective of a biased sample?
Probably insofar as Eliezer and Marcello weren’t around: FAI and the Visiting Fellows intersect at decision theory only. But the more direct (and potentially dangerous) AGI stuff isn’t openly discussed for obvious reasons.
relative to your estimate it might not be big, but absolutely speaking, doom in 15 years versus doom in 35 years seems to make a huge difference in expected utility.
A good point. By the way, I should mention that I updated my estimate after it was pointed out to me that other folks’ estimates were taking Outside View considerations into account, and after I learned I had been overestimating the information-theoretic complexity of existing minds. FOOM before 2100 looks significantly more likely to me now than it did before.
Probably insofar as Eliezer and Marcello weren’t around: FAI and the Visiting Fellows intersect at decision theory only.
Well, I didn’t expect that AGI technicalities would be discussed openly, of course. What I’m thinking of is Eliezer’s attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem, versus the apparent fear among some other people that AGI might be cobbled together more or less haphazardly, even in the near term.
Eliezer’s attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem
Huh. I didn’t get that from the Sequences; perhaps I should check again. It always seemed to me that he saw AGI as really frickin’ hard but not impossibly so, whereas Friendliness is the Impossible Problem, made up of smaller but also impossible problems.