I enjoy having posts which show how to apply rational thought processes to everyday situations, so thank you.
However, there is a failure mode of the 2x2 matrix method that I think should be mentioned: it ignores the probabilities of the various possibilities and focuses solely on their payoffs (example given below). When making the 2x2 matrix, there should be an explicit step where you assign probabilities to the beliefs in question, and keep those probabilities in mind when making your decision.
I think this is obvious to most long-time LWers, but I worry about someone new coming across this decision method and using it without thinking it through.
Here is an example of how this can backfire, otherwise:
Your new babysitter seems perfect in every way: clean background check, and her organizational skills help offset your absent-mindedness. One day, you notice your priceless family heirloom diamond earrings aren't where you normally keep them. The probability is much higher that you accidentally misplaced them (you have a habit of doing that), but you have a small suspicion that the babysitter might have taken them.
You BELIEVE she took them, in REALITY she took them: You fire the babysitter and have to find another.

You BELIEVE she took them, in REALITY you misplaced them: You fire a babysitter who was innocent after all.

You BELIEVE you misplaced them, in REALITY she took them: Your babysitter isn't as good or honest as you think she is! Not only might she continue stealing from you, but, more importantly, you continue to leave your child in the care of a dishonest person. BAD THINGS MIGHT HAPPEN TO YOUR BABY!

You BELIEVE you misplaced them, in REALITY you misplaced them: You keep your nice babysitter. Perhaps you come across your earrings later.
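To make the "keep probabilities in mind" step concrete, here is a minimal sketch of weighting the matrix by probabilities. All the numbers (the prior that she took them, and the payoffs for each cell) are purely illustrative assumptions, chosen to show how a small probability can outweigh a large negative payoff:

```python
# Illustrative prior: you misplace things often, so theft is unlikely.
p_took = 0.02
p_misplaced = 1 - p_took

# Illustrative payoffs (utilities) for each (action, reality) cell
# of the 2x2 matrix. Payoff-only reasoning fixates on the -1000 cell;
# expected value weights it by its probability.
payoffs = {
    ("fire", "took"): -10,       # lose the sitter, but dodge a thief
    ("fire", "misplaced"): -50,  # fire an innocent, helpful babysitter
    ("keep", "took"): -1000,     # child left with a dishonest caretaker
    ("keep", "misplaced"): 0,    # keep the nice babysitter
}

def expected_payoff(action):
    """Probability-weighted average payoff of an action."""
    return (p_took * payoffs[(action, "took")]
            + p_misplaced * payoffs[(action, "misplaced")])

for action in ("fire", "keep"):
    print(action, expected_payoff(action))
```

With these made-up numbers, "keep" comes out ahead (-20 vs. -49.2) even though its worst cell is by far the scariest, which is exactly the correction the explicit probability step provides: the decision tracks both how bad each outcome is and how likely it is.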