The one EY quote that included the phrase “intergalactic civilization” was a cringe moment for me: it sounds too much like sci-fi to register with even the brightest and most rational of NPR’s demographic.
If you really wanted to get AI researchers and other academics to take you seriously, then making the term Singularity part of the name of your charity was a bad idea in the first place. Getting the mainstream to support you might work the other way around, though: almost nobody will care about some academic treatment of friendly AI, but a lot of people will read on when someone starts talking about an intergalactic civilisation being destroyed by superhuman AI.
I expect that the AAAI have cold feet—since to them, the SIAI probably looks like a bunch of amateur upstarts who are spreading FUD about everyone else’s efforts being dangerous.
Funding advanced machine intelligence research a decade or so before it has much of a chance to pay off is not easy, and—from the point of view of many others in the field—the SIAI can easily appear to be hindering as much as helping:
I’ve seen a number of researchers complaining about this—most recently Eray Ozkural:
But now, your people are making AGI code look like a nuclear warhead. Or worse, because it could go off on its own! Fear! People!! Fear!!!!! Are you trying to prevent us from getting any funding for code’s sake?
It does look as though that is part of the plan to me.
Exactly. My understanding is that AGI researchers and SIAI are inevitably going to be at odds, because they have almost opposite goals: SIAI is mostly concerned with preventing catastrophe, while AGI researchers want to achieve big things as quickly as possible (to attract grants, private funding, etc.).
I am not sure they are so very different. SIAI is one of many organisations that want to be in at the birth of the future superintelligence. Each player realises the significance of getting there first. Presumably, as we get closer, the FUD marketing—and the teams jabbing at each other—will ramp up.
This comment raised an interesting question: is it more important to get noticed and supported by other AI researchers, or by the general public?