When making your own decisions, only a full description of your own utility function and decision theory will tell you what to do in every situation. And “work out what you would do if you were maximally smart, then do that” is a useless rule in practice. When deciding your own actions, you don’t need to use rules at all.
If you are in any kind of organization that has rules, you still have to use your own decision theory to work out which decision is best. Doing this involves weighing up the pros and cons of rule-breaking, with one of the cons being any punishment the rule enforcers might apply.
Suppose you are in charge, you get to write the rules and no one else can do anything about rules they don’t like.
You are still optimizing for more than just being correct. You want rules that are reasonably enforceable; the decision of whether or not to punish can depend only on things the enforcers know. You also want the rules to be short and simple enough for the rule-followers to comprehend.
The best your rules can hope to do when faced with a sufficiently weird situation is not apply any restrictions at all.
When deciding your own actions, you don’t need to use rules at all.
Even a rudimentary level of knowledge of how people behave is enough to know that this is entirely false. Act consequentialism doesn’t work for human psychology.
Suppose you are in charge, you get to write the rules and no one else can do anything about rules they don’t like.
This, too, bears no resemblance to reality. People can do all sorts of things about rules they don’t like.
Act consequentialism doesn’t work for human psychology.
In what sense does it “not work”? I feel like I use act consequentialism all the time, for example when deciding what restaurant to go to for dinner (which can’t be made into a rule since it depends on so many variables like where I am located on a particular day, what the weather is like, and what foods I’ve eaten recently), or to decide whether to say X to person Y or hold my tongue (which similarly depends on many variables). I may not be doing expected utility computations in a conscious or explicit way (at least not in most cases), but I’m guessing the neural networks implementing my intuitions have been trained to do something like that. (ETA: Because the choices I make usually respond to changing circumstances in a way that seems consistent with doing something like EU maximization.) Do you have some reason to think otherwise?
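(For concreteness, here is a minimal sketch of the kind of “expected utility computation” being gestured at above. The options, outcomes, probabilities, and utilities are all invented purely for illustration; nothing is claimed here about how brains actually implement anything like this.)

```python
# Hypothetical sketch of expected-utility choice among dinner options.
# Each option maps to a list of (probability, utility) pairs over
# possible outcomes; all numbers are made up for illustration.
options = {
    "thai":  [(0.7, 8.0), (0.3, 3.0)],
    "pizza": [(0.9, 5.0), (0.1, 2.0)],
    "sushi": [(0.5, 9.0), (0.5, 3.0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one option."""
    return sum(p * u for p, u in outcomes)

# Choose the option with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
```

The point of contention in the thread is whether anything resembling this computation (even implicitly, over learned estimates rather than explicit numbers) is what underlies everyday choices.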
I feel like I use act consequentialism all the time, for example when deciding what restaurant to go to for dinner …
Really? When deciding what restaurant to go to for dinner, you examine all possible consequences of all the choices at your disposal (and the probability distributions across them), evaluate or rank them all, and select one? You don’t use any rules at all?
… which can’t be made into a rule since it depends on so many variables like where I am located on a particular day, what the weather is like, and what foods I’ve eaten recently
Why do you think that there needs to be a rule, instead of, say, multiple rules (some combination of which may bear on any given situation)? And why can’t rules depend on variables? (Or contain heuristics, etc.?)
… I’m guessing the neural networks implementing my intuitions have been trained to do something like [expected utility computations]
This seems stupendously unlikely. My reason for thinking otherwise is that this just isn’t consistent with anything we know about how people make decisions.
Because the choices I make usually respond to changing circumstances in a way that seems consistent with doing something like EU maximization.
Are you just saying that the preferences revealed in your choices conform to the VNM axioms (or some other formalism—if so, which?)? (If you are, then you know that this implies nothing at all about whether your brain is actually doing any expected utility computations.) Or are you making some stronger claim? If so, what is it?
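(To make the weaker reading concrete: one minimal, mechanically checkable sense of “revealed preferences conform to the axioms” is that a log of observed pairwise choices contains no cycles, i.e., no transitivity violations. The choice data below is invented for illustration, and the full VNM axioms also require independence and continuity over lotteries, which this sketch does not touch.)

```python
# Hypothetical sketch: check a log of observed pairwise choices for
# cycles in the "chosen over" relation. The data is made up.
observed_choices = [
    ("thai", "pizza"),   # thai was chosen over pizza
    ("pizza", "sushi"),
    ("thai", "sushi"),
]

def has_preference_cycle(choices):
    """Return True if the 'chosen over' relation contains a cycle."""
    graph = {}
    for winner, loser in choices:
        graph.setdefault(winner, set()).add(loser)

    def reachable(start, target, seen):
        # Depth-first search: can we get from `start` back to `target`?
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen
                                 and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    # A cycle exists iff some loser is "preferred to" its own winner
    # via a chain of observed choices.
    return any(reachable(loser, winner, {loser})
               for winner, loser in choices)

consistent = not has_preference_cycle(observed_choices)
```

Passing such a check says nothing about whether any expected-utility computation is actually performed, which is exactly the parenthetical point above.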
Really? When deciding what restaurant to go to for dinner, you examine all possible consequences of all the choices at your disposal (and the probability distributions across them), evaluate or rank them all, and select one? You don’t use any rules at all?
No, I guess I examine some subset of consequences that seem relevant to each decision (i.e., might differ across my choices in a predictable way, and the differences make a difference for my values). I can’t confidently say that I don’t use any rules at all (maybe I’m using some rules in some subconscious way, or I’m doing something that counts as “using a rule”) but neither can I say what those rules are.
Why do you think that there needs to be a rule, instead of, say, multiple rules (some combination of which may bear on any given situation)? And why can’t rules depend on variables? (Or contain heuristics, etc.?)
I wasn’t intending to make a point about single vs. multiple rules (though since you ask, having multiple rules seems to require some meta-rule to tell you which rules to use in which circumstances and how to adjudicate conflicts between them, so that meta-rule would be “the rule”). My point was more that I don’t see what rule(s) I could be using that would take into account so many variables in such a fluid and dynamic way, and that could handle new, unforeseen circumstances/variables without me having to think “how should I change my rules to handle this?”
This seems stupendously unlikely. My reason for thinking otherwise is that this just isn’t consistent with anything we know about how people make decisions.
Can you list some such inconsistencies, so I can have a better idea of what you mean?
Are you just saying that the preferences revealed in your choices conform to the VNM axioms (or some other formalism—if so, which?)?
No, I mean things like: when one of my choices would predictably cause some bad consequences (and doesn’t cause enough good consequences to compensate), I seem to fairly reliably avoid making that choice, even when there’s enough novelty involved that it seems unlikely I would have created a rule to cover the situation ahead of time, and without having to think “how should I change my rules to handle this?”
You may need to taboo “rule” to get much further on this. I can’t speak for Wei, but I use plenty of heuristics, cached ideas, and non-legible estimates of effect in choosing a dinner location. None of these are “rules” in the sense I get from this post, and I don’t abandon or reformulate them when I choose a different food than previously.
To clarify, “rule” as used in the grandparent and “rule” as used in the OP are different concepts. (Namely, in the grandparent I was referring—following what I took to be Wei Dai’s usage—to rule consequentialism.)
Can you delineate a bit of the difference between these uses of “rule”, and how rule consequentialism avoids any of the problems that the post (and comments/objections to it) talks about?
… how rule consequentialism avoids any of the problems that the post (and comments/objections to it) talks about?
It… doesn’t? Who said that it does? I’m not even sure what that would mean; it seems like an almost entirely orthogonal issue…
(As for the uses of “rule”, that’s a fine question but I hesitate to write any lengthy commentary on it, because it seems like we have some sort of weird misunderstanding, and it may not even be relevant…)