Certainly not true in all instances.
“You should give lots of money to charity”, for instance.
It is still true in that instance. If a person^ told you, “you should give lots of money to charity,” and you followed the suggestion, and later regretted it, then you would be less inclined to listen to that person’s advice in the future.
^: Where personhood can be generalized.
Suppose I post a statement of shouldness anonymously on an internet forum. Does that statement have no meaning?
Anonymity cannot erase identity; it can only obscure it. Readers of the statement hold an implicit probability distribution over the possible identities of the poster, and the readers who follow the posted suggestion will update their trust metric over that probability distribution in response to the suggestion’s outcome. This is part of what I meant by generalized personhood.
What if two people have identical information on all facts about the world and the likely consequences of actions? In your model, can they disagree about shouldness?
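The trust-update described above can be sketched as follows. This is a minimal illustration, not anything from the thread: the candidate posters, the prior over identities, and the learning rate are all invented for the example.

```python
# Sketch: updating trust over a probability distribution of possible posters.
# All names and numbers below are illustrative assumptions.

# Prior probability that each candidate wrote the anonymous post.
identity_prior = {"alice": 0.5, "bob": 0.3, "unknown": 0.2}

# Current trust in each candidate's advice.
trust = {"alice": 0.8, "bob": 0.4, "unknown": 0.5}

def update_trust(identity_prior, trust, outcome_good, lr=0.1):
    """After following the advice, nudge each candidate's trust score in
    proportion to the probability that they were the poster."""
    direction = 1.0 if outcome_good else -1.0
    return {
        person: min(1.0, max(0.0, t + direction * lr * identity_prior[person]))
        for person, t in trust.items()
    }

# The suggestion was followed and regretted: every candidate loses trust,
# weighted by how likely they were to be the poster.
new_trust = update_trust(identity_prior, trust, outcome_good=False)
```

Since no candidate has probability zero of being the poster, anonymity spreads the penalty rather than eliminating it, which is the sense in which it only obscures identity.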
The concept of “shouldness” does not exist in my model. My model is behavioristic.
Would you expect two people who had identical information on all facts about the world and the likely consequences of actions to get into an argument about “should”, as people in more normal situations are wont to do? Let’s say they get into an argument about what a third person would do. Is this possible? How would you explain it?
Then you need to expand your model. How do you decide what to do?
The decision theory of your choice.
EDIT: The difference between my viewpoint and your viewpoint is that I view language as a construct purely for communication between different beings rather than for internal planning.
a) No, how do you decide what to do?
b) So when I think thoughts in my head by myself, I’m just rehearsing things I might say to people at a future date?
c) Does that mean you have to throw away Bayesian reasoning? Or, if not, how do you incorporate a defense of Bayesian reasoning into that framework?
I’m not sure I understand your criticisms. My definition of “should” applies to any agents which are capable of communicating messages for the purpose of coordinating coalitions; outside of that, it does not require that the interpreter of the language have any specific cognitive structure. According to my definition, even creatures as simple as ants could potentially have a signal for ‘should.’
You seem to be attempting to generalize the specific phenomenon of human conscious decision-making to a broader class of cognitive agents. It may well be that the human trait of adapting external language for the purpose of internal decision-making actually turns out to be very effective in practice for all high-level agents. However, it is also quite possible that in the space of possible minds, there are many effective designs which do not use internal language.
I do not see how Bayesian reasoning requires the use of internal language.
Because you’d have a data structure of world-states and their probabilities, which would look very much like a bunch of statements of the form “This world-state has this probability”.
It doesn’t need to be written in a human-like way to have meaning, and if it has a meaning then my argument applies.
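Such a data structure might look like the following. This is a toy sketch; the world-states are made up, and the point is only that each entry reads like a statement of the form “this world-state has this probability” without needing human-like syntax.

```python
# A probability distribution over world-states, stored as a plain mapping.
# The keys are illustrative; they could just as well be opaque identifiers
# with no human-readable meaning, and the structure would work the same way.
beliefs = {
    "it_rains_tomorrow": 0.3,
    "it_is_dry_tomorrow": 0.7,
}

# The distribution must be normalized to be a coherent belief state.
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```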
So “should” = the table of expected utilities that goes along with the table of probabilities.
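On that reading, “should” falls out of pairing the probability table with a utility table and picking the action that maximizes expected utility. A minimal decision-theoretic sketch, with invented states, actions, and numbers:

```python
# Probabilities over world-states, and utilities for each (action, state)
# pair. All values here are illustrative assumptions.
probs = {"rain": 0.3, "dry": 0.7}
utility = {
    "take_umbrella": {"rain": 5, "dry": -1},
    "leave_umbrella": {"rain": -10, "dry": 2},
}

def expected_utility(action):
    """Expected utility of an action under the probability table."""
    return sum(probs[state] * utility[action][state] for state in probs)

# "Should" as the expected-utility-maximizing action.
should = max(utility, key=expected_utility)
```

Nothing in this computation requires internal language: both tables could be stored in any representation, and the maximization runs the same way.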
Then I am basically in agreement with you.