Ok, I’ve now read the linked post.

As far as I can tell, the account of decision-dependent beliefs described in that post is entirely compatible with what I say here.
(The account of “belief-dependent beliefs”, if you will, is a different matter; but I make no claims about that, in this post. Also, I think that the notion of “world reacts to agent’s beliefs”, as described there and elsewhere, is confused in an important way, but that’s a discussion for another time.)
On the whole, I must admit that I’m slightly confused about what you were getting at, with that link.
Hmm, I did read you to be making claims about that, which is why I linked it. In particular:
As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”
There seems to me to be a legitimate difference in processes which implement decision theories that choose actions by choosing belief-dependent beliefs, in that which belief is true happens computationally after the decision of which belief to hold. In some logics this is equivalent, but some of your emphasized statements in this post don’t seem as obviously justified to me as you assert them to be. But if that doesn’t justify the connection to you, then I don’t think I’m interpreting my intuitions into precise claims correctly, and so we won’t be able to do further evaluation of correctness of the intuitions I get from a difference I see between your post and that one, at least in this thread or until I can be more specific.
Meta note—it’s possible I was irrationally reacting to your use of emphasis and asserting-before-justifying, in, e.g., the sentence
Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making
“of course” and “highly questionable” don’t hold in their syntax a statement of first person perspective, which my brain automatically translates as a peer-pressure-backed request to update before processing the rest of the claims. I generally prefer to avoid such claims, and try to parameterize first-person-ness wherever I can. This is not material to your point, but it seems to have affected some features of how I responded in ways that were avoidable by either of us, and is an example of the sort of thing I think is an unnecessary overhead in iterative-disagreement-based communication.
But if that doesn’t justify the connection to you, then I don’t think I’m interpreting my intuitions into precise claims correctly, and so we won’t be able to do further evaluation of correctness of the intuitions I get from a difference I see between your post and that one, at least in this thread or until I can be more specific.
Alright. If you have further thoughts on the matter in the future, I’ll certainly be interested in reading them.
“of course” and “highly questionable” don’t hold in their syntax a statement of first person perspective
Correct, it is not a first person perspective, but rather a reference to an ongoing debate in philosophy (of course, you can guess which side of it I come down on…). Since this post was originally a comment in a thread that mostly wasn’t about this specific point, I didn’t add any links pointing a reader to references for this; I agree that this is a lacuna, and I’ll see about adding a clarifying link or two.
However, please note that the rest of the post does not depend on this point! Take a closer look at the paragraph beginning with “Let us consider again the belief”; as you see, the logic that I outline in the last part of the post is agnostic about whether beliefs are prior to decisions, or vice versa.
(You’re at least the second reader to find this unclear, which suggests that it could use a bit of an edit…)
(EDIT: Edited.)
a … request to update before processing the rest of the claims
Please see this old comment for my stance on this.