If we can’t build such a system (because of energy or anything else) then the problem doesn’t arise, and we don’t need to worry (yet) about the solution. But without knowing whether that’s the case, prudence and self-preservation mean we should be prepared for the eventuality in which a viable plan (or several) is necessary.
Now, how do we know we can even get there (develop AI to that level) under energy decline?
I’ve got a post here on LessWrong where I address this in more detail. I’d really appreciate feedback.