My personal answer:
I’m smart. They’re not (IQ test, SAT, or a million other pieces of evidence). Even though high intelligence doesn’t at all cause rationality, in my experience judging others it’s so correlated as to nearly be a prerequisite.
I care a lot (but not too much) about consistency under the best / most rational reflection I’m capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don’t make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can’t seem smart, aren’t.
I’m winning more than they are.
That value doesn’t directly lead to a belief system whose individual beliefs can be used to make accurate predictions. For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi. Viterbi optimizes for the consistency of the whole path, while the forward–backward algorithm looks at local states.
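For concreteness, here’s a toy sketch of the two decoders being contrasted (my own made-up numbers, not anything from this discussion): Viterbi returns the single most probable whole state path, while forward-backward returns a posterior distribution over states at each step, which you can argmax independently per step.

```python
def viterbi(pi, A, B, obs):
    """Most probable whole path (the globally consistent explanation)."""
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]
    back = []
    for o in obs[1:]:
        ptr, new = [], []
        for j in range(n):
            # Best predecessor for state j (B[j][o] is constant over i).
            best = max(range(n), key=lambda i: delta[i] * A[i][j])
            ptr.append(best)
            new.append(delta[best] * A[best][j] * B[j][o])
        delta = new
        back.append(ptr)
    path = [max(range(n), key=lambda i: delta[i])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

def posterior_marginals(pi, A, B, obs):
    """Forward-backward: P(state at time t | all observations), per step."""
    n, T = len(pi), len(obs)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([sum(prev[i] * A[i][j] for i in range(n)) * B[j][o]
                      for j in range(n)])
    beta = [[1.0] * n]
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [sum(A[i][j] * B[j][o] * nxt[j] for j in range(n))
                        for i in range(n)])
    z = sum(alpha[-1])
    return [[alpha[t][i] * beta[t][i] / z for i in range(n)] for t in range(T)]

# Two hidden states, two observation symbols; numbers chosen arbitrarily.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.9, 0.1], [0.1, 0.9]]
obs = [0, 1, 0]

print(viterbi(pi, A, B, obs))  # → [0, 0, 0]
gammas = posterior_marginals(pi, A, B, obs)
print([max(range(2), key=lambda i: g[i]) for g in gammas])  # → [0, 0, 0]
```

On this easy example both decoders agree; they come apart when the per-step marginals favor states that no single probable path visits together.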
If there is uncertainty in the data you reason about, the world view with the most consistency is likely flawed.
One example is heat development in some forms of meditation. The fact that our bodies can generate heat through thermogenin without any shivering is a relatively recent biochemical discovery. There were plenty of self-professed rationalists who didn’t believe in any heat development during meditation because meditators don’t shiver. In cases like that, the search for consistency leads to denying important empirical evidence.
It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.
People who want to signal socially that they know it all don’t have the epistemic humility that allows for the insight that there are important things they just don’t understand.
To quote Nassim Taleb: “It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own.”
For the record, I’m not a member of any religion.
I’m pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there’s no time to explain why, for example).
Interesting analogy between “best path / MAP (Viterbi)” :: “integral over all paths / expectation” as “consistent” :: “some other, non-consistent type of thinking”? I don’t see what “integral over many possibilities” has to do with consistency, except that it’s sometimes the correct (but more expensive) thing to do.
I’m not so much talking about humility that you communicate to other people but about actually thinking that the other person might be right.
There are cases where the forward-backward algorithm gives you a path that can’t actually occur. I would call those paths inconsistent.
That’s one of the lessons I learned in bioinformatics. Having an algorithm that’s robust to error is often much better than just picking the single explanation that’s most likely to explain the data.
A map of the world that allows for some inconsistency is more robust than one where one error leads to a lot of bad updates to make the map consistent with the error.
I understand forward-backward (in general) pretty well and am not sure what application you’re thinking of or what you mean by “a path that’s impossible to happen”. Anyway, yes, I agree that you shouldn’t usually put 0 plausibility on views other than your current best guess.
It’s possible that the transition from state A at step 5 to state B at step 6 has probability zero, and yet the path you get by stringing together the forward-backward marginals still goes from 5:A to 6:B.
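A minimal construction of that situation (my own toy numbers, assumed purely for illustration): a two-step chain with no observations, in which the transition A→B has probability zero, yet picking the most probable state at each step independently yields exactly the path A→B.

```python
# Toy example: per-step posterior argmax can form an impossible path.
# States: A, B, C.  With no observations, the posteriors equal the prior
# path distribution, so we can just enumerate all two-step paths.
from itertools import product

init = {"A": 0.4, "B": 0.3, "C": 0.3}
trans = {  # rows sum to 1; note trans["A"]["B"] == 0
    "A": {"A": 0.0, "B": 0.0, "C": 1.0},
    "B": {"A": 0.0, "B": 1.0, "C": 0.0},
    "C": {"A": 0.0, "B": 1.0, "C": 0.0},
}
states = ["A", "B", "C"]

# Probability of every two-step path.
path_prob = {
    (s1, s2): init[s1] * trans[s1][s2] for s1, s2 in product(states, states)
}

# Per-step marginals (what forward-backward computes).
marg1 = {s: sum(p for (s1, _), p in path_prob.items() if s1 == s) for s in states}
marg2 = {s: sum(p for (_, s2), p in path_prob.items() if s2 == s) for s in states}

marginal_path = (max(marg1, key=marg1.get), max(marg2, key=marg2.get))
viterbi_path = max(path_prob, key=path_prob.get)

print("marginal-argmax path:", marginal_path)        # ('A', 'B')
print("probability of that path:", path_prob[marginal_path])  # 0.0
print("Viterbi path:", viterbi_path)                 # ('A', 'C')
```

Step 1’s marginal favors A (0.4 vs 0.3 and 0.3), step 2’s favors B (0.6 via B→B and C→B), but A→B is forbidden, so the stitched-together path has probability zero, while Viterbi returns the consistent path A→C.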