I generally think of FDT as taking a causal model of the world and augmenting it with “logical nodes” (that have to be placed in a common-sense, non-systematic way, which is an issue with FDT). Whether or not some FDT agent regards “bet on 1 while PA proves I pick 2” as an option depends on how you’ve set up the logical nodes in your augmented model.
If the agent evaluates actions by pretending to control a logical node that’s upstream of both its own action and PA proofs about its action (which is pretty reasonable), then “bet on 1 while PA proves I pick 2” is not a counterfactual it ever considers, and FDT picks 2.
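Here’s a rough sketch of what I mean (the node names and payoff numbers are made up for illustration, not from the paper): because the intervention is on the logical node, the action and the PA proof always move together in every counterfactual the agent evaluates.

```python
# Rough sketch: one logical node settles both the agent's action and what PA
# proves about that action. Payoffs are made-up illustrative numbers.

def action(logical_output):
    # The physical action just executes whatever the algorithm outputs.
    return logical_output

def pa_proof(logical_output):
    # Under an intervention on the logical node, PA's proof tracks the output:
    # PA proves the agent does whatever it is (counterfactually) set to do.
    return logical_output

def payoff(act, proof):
    # Hypothetical payoffs: betting against PA's proof would pay best, but
    # among the options the agent can actually bring about, 2 beats 1.
    if act != proof:
        return 10
    return 2 if act == 2 else 1

# One counterfactual per setting of the logical node.
counterfactuals = {o: (action(o), pa_proof(o)) for o in (1, 2)}
# -> {1: (1, 1), 2: (2, 2)}. The pair (1, 2), i.e. "bet on 1 while PA proves
#    I pick 2", never appears, because the intervention moves both nodes together.

best = max(counterfactuals, key=lambda o: payoff(*counterfactuals[o]))
print(best)  # 2
```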
Right, but it’s fairly clear to me that this is not what the authors have in mind. For example, they cite Bjerring (2014), who proposes very specific and precise extensions of the Lewis-Stalnaker semantics.
It’s fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.
From the paper:
While we do not yet have a satisfying account of how to perform counterpossible reasoning in practice, the human brain shows that reasonable heuristics exist.
Unfortunately, it’s not clear how to define a true operator.
In fact, any agent-independent rule for construction of counterpossibles is doomed, because different questions can cause the same mathematical change to produce different imagined results. What mathematical propositions get chosen to be “upstream” or “downstream” has to depend on what you’re thinking of as “doing the changing” or “doing the reacting” for the question at hand.
This is important both normatively (e.g. if you were somehow designing an AI that used FDT), and also to understand how humans reason about thought experiments—by constructing the counterfactuals in response to the proposed thought experiment.
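A toy example of the upstream/downstream point (entirely my own, not from the paper): the single counterpossible premise “f(3) = 5” gets imagined differently depending on whether you treat the definition of f, or just that one value, as the thing doing the changing.

```python
# Toy example (mine, not from the paper): the same counterpossible premise,
# "f(3) = 5", yields different imagined results depending on what is upstream.

def f(x):
    return x + 1  # actually, f(3) = 4

# Reading 1: the definition is upstream and does the changing.
# To get f(3) = 5, imagine f had been x + 2; then f(10) "would have been" 12.
def f_if_definition_changed(x):
    return x + 2

# Reading 2: only the particular value f(3) does the changing; the rest of
# the definition is held fixed, so f(10) "would still be" 11.
def f_if_value_patched(x):
    return 5 if x == 3 else f(x)

print(f_if_definition_changed(10), f_if_value_patched(10))  # 12 11
```

Which reading is right isn’t a fact about arithmetic; it depends on what question you were asking.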
It’s fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.
Of course they don’t have a specific proposal in the paper. I’m just saying that it seems like they would want to be more precise, or that a full specification requires more work on counterpossibles (which you seem to be arguing against). From the abstract:
While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.
...
What mathematical propositions get chosen to be “upstream” or “downstream” has to depend on what you’re thinking of as “doing the changing” or “doing the reacting” for the question at hand.
If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.
If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.
Well, just because something is vague and relies on common sense doesn’t mean you can get whatever answer you want from it.
And there’s still plenty of progress to be made in formalizing FDT—it’s just that a formalization of an FDT agent isn’t going to reference some agent-independent way of computing counterpossibles. Instead it’s going to have to contain standards for how best to compute counterpossibles on the fly in response to the needs of the moment.