Yes—they’d show SIAI is doing something, but not that it’s doing the right thing. And a 99% competent SIAI could well be worse than a 0% competent one, if it creates a fooming UFAI a few years earlier.
It seems hard to think of anything that would verify that the nine are doing the right thing without risking AGI knowledge leaking out—I’d much sooner take my chances with a bunch of dudes in a basement who at least know there’s a problem than an IBM team who just want moar awesum.
If Friendliness turns out to be largely independent of the AGI bit, I suppose it could be usefully published—for feedback, to raise awareness, and so LW etc. could critique it.
The realistic outcomes for humanity are UFAI foom, FAI foom, or extinction by some other means. How soon doesn’t matter all that much; the only significant question is the probability of an eventual Friendly foom. Those “few years earlier” only matter if someone else would have run a Friendly AGI in those few intervening years.
EDITED TO ADD: None of this changes the substance of your article, but just to pick a few nits:
“Foom” refers to a scenario in which we reach superintelligence rapidly enough to take humanity by surprise. That isn’t certain—it’s imaginable that we could have, say, several years of moderately superhuman intelligence.
Also, while these may be the long-term realistic outcomes, in the short term another possible outcome is a global catastrophe short of extinction, which would slow things down some.
I don’t think any of that changes the substance of my argument.
Sorry, should have been clearer that I was just nitpicking. Will edit.