You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me. The question is how to predict that sort of thing rather than rationalizing or checking it only retrospectively.
Ah, but (to be facetious and semi-trolling for a moment), the narrative fallacy means you can’t trust those retrospective endorsements either. Isn’t every thought we ever take just self-signalling? Are we not mere microbes in the bowels of Moloch, utterly incapable of real thought or action? Blobs of sentience randomly thrust above the mire of dead matter like a slime mould in its aggregation phase, imagining for a moment that it is a real thing, before collapsing once more into the unthinking ooze!
Ah, there’s that good old-fashioned Overcoming-Biasian “rationality”, insulting the human mind while making no checkable predictions whatsoever!
You wrote this facetiously, but I regularly find myself updating towards it being quite true.
The basilisk lives, and goes forth to destroy the world! My work here is done!
More seriously, I find it easy to build that point of view from the materials of LessWrong, Overcoming Bias, and blogs on rationality, neuroscience, neoreaction, and PUA. If I were inclined to the task I could do it at book length, but it would be the intellectual equivalent of setting a car bomb. So I won’t. But it is possible. It is also possible to build completely different stories from the same collection of concepts, as easily as it is to build them from words.
The question that interests me is why people (including myself) are convinced by this story or that. Are they updating rationally in the face of evidence? I provided none, only cherry-picked references to other ideas woven together with hyperbolic metaphors. Do they go along with stories that tell them what they would already like to believe? And yet “microbes in the bowels of Moloch, utterly incapable of real thought or action” is not something anyone would want to be. Perhaps this story appeals because its message, “nothing is true, all is a lie”, like its new-age opposite, “reality is whatever you want it to be”, removes the burden of living in a world where achieving anything worthwhile is both possible and a struggle.
the things I retrospectively consciously endorse were actually good for me.
After how long?
Let us assume that I make a large loan to someone—call him Jim. Jim promises to pay me back in exactly a year, and I have no reason to doubt him. Two months after taking my money, Jim vanishes and cannot be found. The one-year mark passes, and I see no sign of my loan being returned.
At this point, I am likely to regret extending the original loan; I do not retrospectively endorse the action.
One month later, Jim reappears; by way of apology for the late repayment, he repays twice the originally agreed amount.
At this point, I do retrospectively endorse the action of extending the loan.
So whether or not I retrospectively endorse an action can depend on how much time has passed since it occurred, and can change as its consequences are observed. How do you tell when to stop waiting and consciously endorse the action?
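A throwaway sketch (the loan size and repayment figures are invented to match the story) makes the flip explicit: the same action earns “regret” at month twelve and “endorse” at month thirteen, purely because of when you look.

```python
# Invented figures for the Jim scenario: a loan of 100, repaid double at month 13.
LOAN = 100

def net_position(month: int) -> int:
    """Lender's net gain or loss from the loan, as judged at a given month."""
    if month < 13:
        return -LOAN           # money handed over, nothing repaid yet
    return 2 * LOAN - LOAN     # Jim repays twice the amount at month 13

for month in (6, 12, 13, 24):
    verdict = "endorse" if net_position(month) > 0 else "regret"
    print(f"month {month:>2}: net {net_position(month):+d} -> {verdict}")
```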
That implies that “endorse” means “I conclude that this action left me better off than without it”. I don’t think this is what most people mean by endorsement. In particular, it fails to consider that some actions can leave you better off or worse off by luck.
If you drive drunk, and you get home safely, does that imply you would endorse having driven drunk that particular time?
No, it does not; a high-risk, no-reward action does not become endorsable simply because the harm fails to materialize once. You make a good point.
Nonetheless, I have noted that whether I retrospectively endorse an action or not can change as more information is discovered. Hence, the time horizon chosen is important.
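To put rough numbers on that (all of them invented for illustration), here is a minimal Python sketch of why a single safe arrival tells you almost nothing about whether the gamble was worth taking:

```python
import random

# Invented numbers, purely for illustration.
P_CRASH = 0.03        # chance a drunk drive ends in a crash
CRASH_COST = -1000.0  # utility of crashing
SAFE_BENEFIT = 1.0    # utility of simply getting home

def expected_value() -> float:
    """Process-level evaluation: judge the gamble itself, not one draw from it."""
    return P_CRASH * CRASH_COST + (1 - P_CRASH) * SAFE_BENEFIT

def one_drive() -> float:
    """Outcome-level evaluation: the realized utility of a single drive."""
    return CRASH_COST if random.random() < P_CRASH else SAFE_BENEFIT

random.seed(0)
print(f"expected value per drive: {expected_value():+.2f}")  # -29.03: a bad gamble
safe = sum(one_drive() > 0 for _ in range(100))
print(f"drives that 'worked out': {safe}/100")               # typically around 97
```

Judging by outcome would endorse the decision on the overwhelming majority of runs; judging by expected value condemns it on every run.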
I tend to avoid retrospectively endorsing actions based on their outcomes, as that opens up the danger of falling prey to outcome bias. Instead, I prefer to evaluate the process by which I made the decision and took the action, and then to try to improve that process. After all, I can’t control the outcome, only the process and my own actions, and I believe it is important to evaluate and endorse only what I can control.
You do make a good point. Still, the advantage of retrospectively endorsing based on outcomes is that it highlights very clearly where your decision-making processes are faulty, and provides an incentive to fix those faults before a negative outcome happens again.
But if you’re happy validating your decision processes without that, then it’s not necessary.
You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me.
I think you’re confusing regret or lack of it with “actually good for me”. Certainly, the future-you can evaluate the consequences of some action better than the past-you, but he’s still only future-you, not an arbiter of what is “actually good” and what is not.
I think there is another issue at play here: whether it is worthwhile to evaluate the consequences of decisions and actions, or the process of making the decision and taking the action. I believe that improving the process is what matters, not the outcome, since focusing on the outcome often leads to outcome bias. We can only control the process, after all, not the outcome, and it’s important to focus on what lies within our locus of control.
There’s no confusion here if we use a naturalistic definition of “actually good”. If we use a non-naturalistic definition, then of course the question becomes bloody nonsense. I would hope you’d have the charity not to automatically interpret my question nonsensically!
I have no idea what a naturalistic definition of “actually good” would be.