Another hope would be that if we can get all the other problems in Friendly AI right, we’ll be able to trust the AI to solve consciousness for us.
That’s not a hope. It’s appealing to a magic genie to solve our problems. We really have to get out of the habit of doing that, or we’ll never get anything done.
“I hope a magic genie will solve our problems” is a kind of hope, though as I say in the OP, not one I’d want to bet on. For the record, the “maybe an FAI will solve our problems” isn’t so much my thought as something I anticipated that some members of the LW community might say in response to this post.
It’s only legit if you can exhibit a computation which you are highly confident will solve the problem of consciousness, without being able to solve the problem of consciousness yourself.