I’m not sure I agree with this distinction as any more than one of degree. Both tasks and problems are differences between the perceived state of the world and a desired state of the world.
As you describe it, “tasks” tend to be plans of action which you expect to have acceptable cost for their probability of success in moving the world state to the desired one. “Problems” are just situations where the costs of the actions you’re considering are too high for their probability of success.
I believe that both the cost of planned actions and the probability of success in moving to a desired world state are effectively continuous. There’s no reasonable threshold between tasks and problems other than “unwilling to proceed”, and even that is an action plan: “do without the desired change”.
For BOTH tasks and problems, there may be alternate paths with higher probability of success or lower cost. There may not be. There may be alternate acceptable (or preferable!) destination world states for which there are better or easier-to-find plans of action. There may not be.
“Problems” are just situations where the costs of the actions you’re considering are too high for their probability of success.
I think it may be a bit more than that. An example of what Alicorn is calling a “problem” might be where you can’t even figure out what actions you should be taking in the first place. Or you lack resources or knowledge to actually carry out those actions.
Hmm, I may need to find better words to express this idea. Each possible action you take has some probability of being part of a path to your desired future world-state. You may not assign a high probability to any action you’ve considered, but that just rolls into the decision of what to do.
For EVERYTHING you’ve labeled “problem”, there are actions you might take and/or goal changes you might make. Same for “tasks”. Many times, that action is “research”, which has sub-actions like “find an interweb terminal” or “ask someone”, or “complain on Less Wrong”, which has sub-actions, which have sub-actions, etc. You might categorize some of these as “tasks” or “problems”, but that categorization is arbitrary.
Lacking knowledge vs lacking a sandwich is NOT a binary distinction. It’s a distinction in costs, duration, and probability of success of various actions you might take.
Lacking resources is even more obviously not distinct: task: get resources. subtask: find someone to pay you. subtask: learn a valuable skill. etc… So: not a problem, right? It takes time and is not guaranteed to work, but both of those are true for “acquire bread to make sandwich” too.
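To make the recursion concrete, here’s a minimal sketch (the task names and the tree are hypothetical, taken from the decomposition above): a “problem” like lacking resources expands into ordinary subtasks exactly the way any task does.

```python
# Hypothetical subtask tree following the decomposition in the comment:
# each task maps to the subtasks it expands into.
subtasks = {
    "get resources": ["find someone to pay you"],
    "find someone to pay you": ["learn a valuable skill"],
    "learn a valuable skill": [],
}

def expand(task, depth=0):
    """Recursively list a task and its subtasks, indented by depth.
    Every 'problem' bottoms out in concrete steps, just like a 'task'."""
    lines = ["  " * depth + task]
    for sub in subtasks.get(task, []):
        lines.extend(expand(sub, depth + 1))
    return lines

tree = expand("get resources")
```

Nothing in the recursion distinguishes a “problem” node from a “task” node; the labels would have to be painted on afterwards.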
The continuum of cost of actions and probability of success has no obvious inflection point to objectively call “problem” vs “task”.
“Problems” are just situations where the costs of the actions you’re considering are too high for their probability of success.
I don’t think that’s it. The distinction between tasks and problems is well-expressed in the idiom of Eliezer’s post on Possibility and Could-ness: achieving the GOAL state is a problem until the could-ness algorithm has managed to label it “reachable from START”, at which point it becomes a task. (This makes the problem/task status of any particular GOAL a property of the current state of the could-ness algorithm, which is as it should be.)
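A rough sketch of that could-ness labeling, assuming states form a discrete graph (the toy sandwich states are my own invention): the goal is a “problem” while the search is still running, and becomes a “task” the moment a path from START is found.

```python
from collections import deque

def could(start, goal, successors):
    """Breadth-first search: returns a plan (a list of states) if GOAL
    is reachable from START, else None. Until this returns a plan the
    goal is a 'problem'; once it does, executing the plan is a 'task'."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state graph: edges are the world-states an action can reach.
graph = {
    "hungry": ["have bread", "have money"],
    "have money": ["have bread"],
    "have bread": ["have sandwich"],
    "have sandwich": [],
}
plan = could("hungry", "have sandwich", lambda s: graph.get(s, []))
```

The problem/task status of the goal really is just the state of this search: before `could` returns, no plan is labeled; after, the plan is a to-do list.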
I think Alicorn intends to offer observations which might improve the execution of our could-ness algorithms. The current post points out that due to the similarity of language we use to describe tasks and problems, it’s common for people who have problems to fail to recognize that fact and not even start their could-ness algorithms.
I think the reason for the common linguistic similarity in treating problems and tasks is an ACTUAL similarity. Could-ness, or reachability of a theoretical (alternate past or unknown future) world-state, is not binary. It’s a probability function relating the likelihood of theoretical actions to the likelihood of various results of those actions.
If your could-ness function returns one bit of information, it’s too simple to be very useful. And any theory of decision-making based on it is equally oversimplified.
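As a sketch of why one bit is too little (the 0.5 threshold and all the numbers are arbitrary illustrations): a thresholded could-ness function throws away exactly the information the decision needs, while an expected-value calculation keeps it.

```python
def one_bit_couldness(p_success, threshold=0.5):
    # Collapses a continuum of reachability into "task" vs "problem".
    return p_success > threshold

def expected_value(p_success, value_of_goal, cost_of_plan):
    # Keeps the continuum: weigh probability of success against cost.
    return p_success * value_of_goal - cost_of_plan

# A "problem" by the one-bit test, yet clearly worth attempting:
low_odds = expected_value(0.4, value_of_goal=100.0, cost_of_plan=10.0)   # 30.0
# A "task" by the one-bit test, yet not worth doing:
sure_thing = expected_value(0.9, value_of_goal=100.0, cost_of_plan=95.0)  # -5.0
```

The one-bit function labels the first case a problem and the second a task, yet the expected values point the other way in both cases.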
I do think there’s value in exploring this as a (false, but perhaps novel) quantization. Choosing between physical movement vs searching for alternate plans vs abandoning/altering goals (all of which are action, but feel somewhat different) is a real part of any decision theory.
I don’t think the quantization is real. The choice of what to do next (perform some physical action, think about alternate strategies, or rethink goals (or change focus to a different goal)) is valid and necessary for things labeled tasks as well as those labeled problems.
If you want to consider probabilities other than epsilon and 1 - epsilon then the distinction becomes: setting up and approximating the solution to the right Bellman equation is the problem stage; carrying out the indicated actions is the task stage.
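In that framing, here is a minimal sketch of the “problem stage” (the function names and the toy two-state world are my own, assuming a small discrete state space): approximate the Bellman fixed point V(s) = max_a Σ_s′ P(s′|s,a)·[R(s,a,s′) + γ·V(s′)] by value iteration, then read the “task stage” actions off the result.

```python
def value_iteration(states, actions, transitions, gamma=0.9, tol=1e-9):
    """Problem stage: iterate the Bellman backup until the value
    function V converges. transitions(s, a) yields (s2, prob, reward)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):
                continue  # terminal state keeps V = 0
            best = max(
                sum(p * (r + gamma * V[s2]) for s2, p, r in transitions(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy two-state world: from "no sandwich", the action "make" succeeds
# with probability 0.8 (reward 10) and fails with probability 0.2
# (reward -1, try again).
states = ["no sandwich", "sandwich"]
actions = lambda s: ["make"] if s == "no sandwich" else []
def transitions(s, a):
    return [("sandwich", 0.8, 10.0), ("no sandwich", 0.2, -1.0)]

V = value_iteration(states, actions, transitions)
# Task stage: with V in hand, "make" is simply the indicated action.
```

The closed-form fixed point here is V = 7.8/0.82 ≈ 9.51, and the split is visible in the code: the `while` loop is the problem stage, and acting on the converged V is the task stage.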