Curated, for several reasons.
I think it’s really hard to figure out how to help with beneficial AI. Different career and research paths vary in how likely they are to help, to harm, or to fit together. I think many prominent thinkers in the AI landscape have developed nuanced takes on the evolving landscape, but often haven’t written those thoughts up.
I like this post both for laying out a lot of object-level thoughts about that, and also for demonstrating a possible framework for organizing those object-level thoughts, and for doing it very comprehensively.
I haven’t finished processing all of the object-level points and am not sure which ones I endorse at this point. But I’m looking forward to debate on the various points here, and I’d welcome other thinkers in the AI Existential Safety space writing up similarly comprehensive posts about how they think about all of this.