I think the reason governments are not researching existential risk and artificial intelligence is that (a) the actors involved in government are shortsighted and (b) the public doesn’t demand that governments research these things. It seems quite possible to me that governments will put large amounts of funding into these areas in the future.
Maybe, but more likely rich individuals will see the benefits long before the public does, and then the “establishment” will organize a secret AGI project. Though this doesn’t seem even remotely close to happening: the whole thing pattern-matches to some kind of craziness/scam.
•I agree that there’s a gap between when rich individuals see the benefits of existential risk research and when the general public does.
•The gap may nevertheless be inconsequential relative to the time that it will take to build a general AI.
•I presently believe that it’s not desirable for general AI research to be done in secret. Secret research proceeds slower than open research, and we may be “on the clock” because of existential risks unrelated to general AI. In my mind this factor outweighs the arguments that Eliezer has advanced for general AI research being done in secret.
I presently believe that it’s not desirable for general AI research to be done in secret.
There are shades between complete secrecy and blurting it out on the radio. Right now, human-universal cognitive biases keep it effectively secret, but in the future we may find that the military closes in on it like knowledge of how to build nuclear weapons.
That, and secrets are damn hard to keep. In all of history, there has only been one military secret that has never been exposed, and that’s the composition of Greek fire. Someone is going to leak.