What do you see as the actually difficult problems?
The actually difficult problem that’s specific to the question of free will is “how is the state space generated” (i.e., where do all the graph nodes that our algorithm is searching through come from in the first place?).
The other actually difficult problem, which is not specific to the question of free will but applies also (and first) to Eliezer’s “dissolving” of problems like “How An Algorithm Feels From Inside”, is “Why exactly should this algorithm feel like anything from the inside? Why, indeed, should anything feel like anything from the inside?” Without an answer to this question (which Eliezer never gives and, as far as I can recall, never even seriously acknowledges), all of these supposed “solutions”… aren’t.
I’m inclined to give Yudkowsky credit for solving the “in scope” problems, and to defer the difficult problems you identify as “out of scope”.
For free will, the question Yudkowsky is trying to address is, “What could it possibly mean to make decisions in a deterministic universe?”
I think the relevant philosophical question being posed here is addressed by contemplating a chess engine as a toy model. The program searches the game tree in order to output the best move. It can’t know which move is best in advance of performing the search, and the search algorithm treats all legal moves as “possible”, even though the program is deterministic and will only end up outputting one of them.
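To make the toy model concrete, here is a minimal sketch of that kind of deterministic game-tree search. It uses tic-tac-toe rather than chess, purely so the example is self-contained and runnable; nothing about the specific game matters, only the structure: the search treats every legal move as “possible”, and yet exactly one move deterministically comes out.

```python
# A deterministic game-tree search (minimax over tic-tac-toe). During the
# search every legal move is treated as "possible" -- the program has to
# evaluate them all, because it can't know which is best before the
# computation finishes -- and yet it always outputs exactly one move.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    # The "state space" is handed to us by the rules of the game:
    # any empty square is a legal (i.e., "possible") move.
    return [i for i, square in enumerate(board) if square is None]

def minimax(board, player):
    """Return (value, move) for `player`, with value scored from X's perspective."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = legal_moves(board)
    if not moves:
        return 0, None  # draw
    best_value, best_move = None, None
    for move in moves:  # every legal move is a live "possibility" at this point
        child = list(board)
        child[move] = player
        value, _ = minimax(child, "O" if player == "X" else "X")
        if best_move is None or (value > best_value if player == "X" else value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

value, move = minimax([None] * 9, "X")
print(f"chosen move: {move} (game value for X: {value})")  # same output every run
```

Even the tie-breaking is deterministic (the first move among equally good ones is kept), so running this a thousand times yields the same output every time; the “possibility” of the other moves lives entirely inside the computation, not in any indeterminism of the physics running it.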
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts.

I feel like I “could” lift my arm, because if my brain computed the intent to lift my arm, it could output the appropriate nerve signals to make it happen. But I can’t know whether I will lift my arm in advance of computing the decision to do so, and the decision treats both the lift and not-lift outcomes as “possible”, even though the universe is deterministic and I’m only going to end up doing one of them.
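Schematically, the arm-lifting decision has the same shape. This is only a cartoon, not a claim about how brains implement anything; the option list and the scores are made up for illustration. Note that the option set is simply stubbed in by hand, which is the part your “how is the state space generated” question points at; on my view, filling that in is the enormously difficult empirical part rather than the philosophical part.

```python
# A purely schematic "decision": deterministic, yet both outcomes are treated
# as "possible" while the computation runs. The option list and scores are
# illustrative stand-ins, not a model of any actual cognition.

def generate_options():
    # In the game-playing case the rules hand us this set. Where it comes from
    # in the human case is exactly the open question ("how is the state space
    # generated?") -- here it is simply stubbed in by hand.
    return ["lift arm", "don't lift arm"]

def evaluate(option, preferences):
    # Placeholder valuation of the option's predicted consequences.
    return preferences.get(option, 0.0)

def decide(preferences):
    options = generate_options()
    # Before this computation runs, the agent can't know which option it will
    # pick; while it runs, every option is a live "possibility"; afterwards,
    # exactly one action is output -- all without any indeterminism anywhere.
    return max(options, key=lambda option: evaluate(option, preferences))

print(decide({"lift arm": 0.7, "don't lift arm": 0.3}))  # -> lift arm
```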
The “how the algorithm feels” methodology is doing work (identifying the role could-ness plays in the “map” of choosing a chess move or lifting my arm, without presupposing fundamental could-ness in the “territory”), even if it doesn’t itself solve the hard problem of why algorithms have feelings.
I don’t dispute that both the “search algorithm” idea and the “algorithm that implements this cognitive functionality” idea are valuable, and cut through some parts of the confusions related to free will and consciousness, respectively. But the things I mention are hardly “out of scope” if, without them, the puzzles remain (as indeed they do, IMO).
In any case, claiming that the questions of either free will or consciousness have been “solved” by these explanations is simply false, and that’s what I was objecting to.
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts.

This is the sort of claim that it’s premature to make prior to having even a rough functional sketch of the solution. Something might look like ‘“merely” an enormously difficult empirical cognitive science problem’, until you try to solve it, and realize that you’re still confused.