I expect that the AAAI have cold feet—since to them, the SIAI probably looks like a bunch of amateur upstarts who are spreading FUD about everyone else’s efforts being dangerous.
Funding advanced machine intelligence research a decade or so before it has much of a chance to pay off is not easy, and—from the point of view of many others in the field—the SIAI can easily appear to be hindering as much as helping:
I’ve seen a number of researchers complaining about this—most recently Eray Ozkural:
But now, your people are making AGI code look like a nuclear warhead. Or worse, because it could go off on its own! Fear! People!! Fear!!!!! Are you trying to prevent us from getting any funding for code’s sake?
It does look as though that is part of the plan to me.
Exactly. My understanding is that AGI researchers and SIAI are inevitably going to be at odds, because they have almost opposite goals: SIAI is mostly concerned with preventing catastrophe, while the AGI researchers want to achieve big things as quickly as possible (to attract grants, private funding, etc.).
I am not sure they are so very different. SIAI is one of many organisations that want to be in at the birth of the future superintelligence. Each player realises the significance of getting there first. Presumably, as we get closer, the FUD marketing—and the teams jabbing at each other—will ramp up.