I found HK’s analysis largely sound (based on what I could follow, anyway), but it didn’t have much of an effect on my donation practices. The following outlines my reasoning for doing what I do.
I have no feasible way to evaluate SIAI’s work firsthand. I couldn’t do that even if their findings were publicly available, and it’s my default policy to reject the idea of donating to anyone whose claims I can’t understand. If donating were a purely technical question, and if it came down to nothing but my estimate of SIAI’s chances of actually producing groundbreaking research, I wouldn’t bet on them being the first to build an AGI, never mind an FAI. (Also, on a more cynical note, if SIAI were simply an elaborate con job instead of a genuine research effort, I honestly wouldn’t expect to see much of a difference.)
However, I accept the core arguments for fast AI and uFAI to such a degree that I think the issue needs addressing, whatever the answer turns out to be. I view the AI risk PR work SIAI does as their most important contribution to date. Even if they never publish anything again, starting today, and even if they never have a line of code to show for anything, I estimate their net effect to be positive simply for raising awareness about what looks to me like a legitimate concern. Someone should be asking these questions, and so far I haven’t seen anyone else do that. To that end, I still estimate donating to SIAI to be worthwhile. At least for the time being.