I’m not sure I understand your criticisms. My definition of “should” applies to any agent capable of communicating messages for the purpose of coordinating coalitions; beyond that, it does not require that the interpreter of the language have any specific cognitive structure. According to my definition, even creatures as simple as ants could potentially have a signal for “should.”
You seem to be attempting to generalize the specific phenomenon of human conscious decision-making to a broader class of cognitive agents. It may well be that the human trait of adapting external language for the purpose of internal decision-making actually turns out to be very effective in practice for all high-level agents. However, it is also quite possible that in the space of possible minds, there are many effective designs which do not use internal language.
I do not see how Bayesian reasoning requires the use of internal language.
Because you’d have a data structure of world-states and their probabilities, which would look very much like a bunch of statements of the form “This world-state has this probability”.
It doesn’t need to be written in a human-like way to have meaning, and if it has a meaning then my argument applies.
So “should” = the table of expected utilities that goes along with the table of probabilities.
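The structure described here — a table of world-state probabilities with a companion table of expected utilities playing the role of “should” — can be sketched concretely. This is a minimal illustration with hypothetical states, actions, and utility values, not anything specified in the discussion itself:

```python
# A table of world-states and their probabilities: the "bunch of
# statements of the form 'this world-state has this probability'".
probabilities = {
    "rain": 0.3,
    "sun": 0.7,
}

# Utilities of each (action, world-state) pair -- hypothetical values.
utilities = {
    ("take_umbrella", "rain"): 5,
    ("take_umbrella", "sun"): -1,
    ("leave_umbrella", "rain"): -10,
    ("leave_umbrella", "sun"): 2,
}

actions = {a for (a, _) in utilities}

# The companion table: expected utility of each action,
# E[U(a)] = sum over world-states s of P(s) * U(a, s).
expected_utility = {
    a: sum(p * utilities[(a, s)] for s, p in probabilities.items())
    for a in actions
}

# "Should" then picks out the action maximizing expected utility.
best = max(expected_utility, key=expected_utility.get)
print(best, expected_utility)
```

Nothing in the table needs to be written in human-readable form for the argument to apply; the dictionaries here are just one convenient encoding.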
Then I am basically in agreement with you.