It’s not always possible to resolve the uncertainty. When this is the case, recognizing that there is uncertainty may not be helpful. Unfortunately, uncertainty can be demotivating even when completing the task is expected value maximizing.
This looks contradictory. How can you know that it is expected value maximizing when it is uncertain?
Or do you mean it in the sense of the later examples: that if (the "if" is the uncertainty) you knew that some precondition on your side were fulfilled, then the task would be value maximizing?
I think that to resolve the uncertainty, i.e. the inability of the subconscious to provide clear feelings, it is necessary to give your intuition, your subconscious, enough information to work on. How do you do that?
My approach is to push as many partial reasoning results into the subconscious as I can. My assumption is that the subconscious cannot do logical inference, i.e. complex symbol processing, but that it can match up and weigh lots of fixed structures (cached thoughts) against each other. To feed this I use structured pondering (as distinguished from worry or rumination, which are mostly maladaptive): positively thinking about the same problem from different sides, trying to find out what works, what could be improved, and what to get rid of (but not ruminating on everything that is bad).
I think this has helped me stay balanced and avoid inconsistency between the conscious and the subconscious, a problem I think purely logical thinkers have to watch out for.
Disclaimer: Maybe it’s just that way for me (brains have variety).
How can you know that it is expected value maximizing when it is uncertain?
Expected value is, by definition, the value evaluated in the face of (quantified) uncertainty. It is actual value that you do not know whether you are maximising. Actual value is what you care about, but expected value is all you know.
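The distinction can be made concrete with a small sketch. The two tasks and their payoff distributions below are hypothetical, made up purely to illustrate the point: you can always compute which option maximizes *expected* value, even though you never know in advance which one will maximize *actual* value.

```python
def expected_value(outcomes):
    """Expected value over quantified uncertainty: sum of probability * value."""
    return sum(p * v for p, v in outcomes)

# Each task is a list of (probability, value) pairs; probabilities sum to 1.
task_a = [(0.5, 10.0), (0.5, 0.0)]  # risky task
task_b = [(1.0, 4.0)]               # safe task

# Task A maximizes expected value (5.0 > 4.0), yet its actual value on any
# given attempt may turn out to be 0.0 -- worse than task B's guaranteed 4.0.
print(expected_value(task_a), expected_value(task_b))  # 5.0 4.0
```

Expected value is all we can compute here; whether choosing task A was actually better is only knowable after the fact.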
I read “when completing the task is expected value maximizing” to mean: among the available tasks, the one that maximises expected value, conditional on being able to establish such an expected value at all.
Thus I felt the uncertainty mentioned was of a kind not captured by the expected value. OK, that was strictly not correct: in the end you can theoretically always smear your uncertainty over all the tasks. That's what you meant.
But maybe the unease is due to some more difficult-to-grasp kind of uncertainty, like Radical Uncertainty.
Or do you mean it in the sense of the later examples: that if (the "if" is the uncertainty) you knew that some precondition on your side were fulfilled, then the task would be value maximizing?
Thanks for the remarks
Yes, if I understand you correctly.