Or this one from Brooklyn Nine-Nine, which I inexplicably can’t find an actual video clip of
noggin-scratcher
The “Phantom of Heilbronn” was a presumed unknown female serial killer, whose DNA was found at numerous crime scenes across Austria, France and Germany from 1993 to 2009. It was eventually realised that the DNA actually belonged to a woman working in a factory, making the cotton swabs that were used to collect the evidence.
I have single-digit multiplications just kinda cached as ingrained associations between any two given digits and their product, but now I’m curious what algorithm you were using for them. Repeated addition maybe?
A thought occurs about sleep: normally, to fall asleep successfully it helps to put yourself in a very predictable and boring state (lying motionless in the dark with your eyes closed), with no major sense input to attend to, threats to deal with, or necessary tasks to complete.
If everything is low certainty, maybe that leaves the brain unable to settle, because it can’t reach high confidence in the hypothesis that it’s safe to fall asleep.
I realized that the documentation I initially wrote correctly explained the problem and its solution, and that the comments in the source code were useful and sufficient.
Feels to me like you may have fallen into the same trap twice, of deeming documentation “sufficient” when it explains things to the satisfaction of the version of yourself that already understands the problem and the solution.
But by the time you achieve understanding, naturally the necessary insights (those required to cross the gap from a starting point of not understanding) feel obvious, unnecessary to mention, positively insulting to the future reader’s intelligence by their simplicity… and yet the you of a few hours earlier would have probably thanked yourself for spelling them out explicitly.
Without knowing all the specifics, it is of course impossible for me to say for sure if this actually applies to your case. But as a rule it seems like something to check for whenever you look back at documentation that has mysteriously come to seem more complete and sufficient without any actual edits.
You flip a fair coin 20 times. If this sequence contains at least one HHHH, I pay you $100. If it contains at least one HHHT, you pay me $100. If it contains neither, nobody wins.
Nits could be picked here: this works more because “occurrences of a given substring matched continuously within a longer string” is a different question from “odds of a given string”, rather than because irregular strings are inherently more probable, or because of any difference between finite and infinite strings.
Specifically the part where, for the HHHT player, if the sequence so far ends in HHH, then either they get a successful match from a T on the next flip, or it ends in HHHH and they can still hope for H[HHHT] a mere one flip later (compared to the HHHH player having to start over from zero whenever a T comes up). The HHHT player benefits greatly there from near-misses still overlapping with the start of their target string, as you slide a 4-wide frame along the larger sequence.
The imbalance thus created would presumably still appear if you were to count matches on a similar sliding basis along an infinite string. Or equally disappear in the finite case if you only look at discrete chunks of 4 flips at a time (and treat that 20-flip sequence as 5 independent non-overlapping trials).
So the claim would have to be that the bias is adaptive because we’re more likely to need to intuitively estimate odds about occurrences in continuous series rather than discrete chunks. Which isn’t implausible, but is less intrinsically obvious than the idea that we’d more often encounter finite cases than infinite ones.
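For anyone who wants to check the numbers, here’s a quick Monte Carlo sketch (my own illustration, not from the original post) estimating both the chance of each pattern appearing at least once in 20 flips, and the mean number of sliding-window matches:

```python
import random

N_FLIPS = 20
TRIALS = 100_000

def simulate(pattern, trials=TRIALS, n_flips=N_FLIPS, seed=0):
    """Return (P(pattern appears at least once), mean number of
    overlapping sliding-window matches) over random flip sequences."""
    rng = random.Random(seed)
    hits = 0
    total_matches = 0
    for _ in range(trials):
        seq = "".join(rng.choice("HT") for _ in range(n_flips))
        # Count matches with a sliding 4-wide frame (overlaps allowed)
        matches = sum(
            seq[i:i + len(pattern)] == pattern
            for i in range(n_flips - len(pattern) + 1)
        )
        hits += matches > 0
        total_matches += matches
    return hits / trials, total_matches / trials

p_hhhh, mean_hhhh = simulate("HHHH")
p_hhht, mean_hhht = simulate("HHHT")
print(f"HHHH: P(at least one) ≈ {p_hhhh:.3f}, mean matches ≈ {mean_hhhh:.3f}")
print(f"HHHT: P(at least one) ≈ {p_hhht:.3f}, mean matches ≈ {mean_hhht:.3f}")
```

The mean number of matches has the same expectation for both patterns (17 windows × 1/16 = 17/16 ≈ 1.06), but HHHH’s matches arrive in overlapping clumps, so its chance of appearing at least once comes out noticeably lower.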
So instead of directly maximising any particular method of aggregating utility, the proposal seems to be that we should maximise how satisfied people are, in aggregate, with the aggregating method being maximised?
But should we maximise total satisfaction with the utility-aggregating method being maximised, or average satisfaction with that aggregating method?
And is it preferable to have a small population who are very satisfied with the utility aggregation method, or a much larger population who think the utility aggregation method is only getting it right slightly more often than chance?
Needs another layer of meta
(on a second look I see that you did indeed suggest voting on any such problems)
Why are you calling this a nitpick?
Because the central idea of the post isn’t really about that specific probability puzzle, and can in theory stand alone to succeed or fail on other merits—regardless of whether that illustrative example in particular is actually a good choice of example.
Possibly there are better examples in the full paper linked, but I couldn’t comment on that either way because I’ve only read this excerpt/summary.
I would expect a less pronounced version of the same effect. Both get to HT together, but if you’re looking for HTH and get HTT then you’re starting over, hoping for your first H on the next flip; whereas if you’re looking for HTT and get HTH, you’ve got a small head start, because that last H can be the first H going forward to HT[HTT].
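The same kind of simulation (again my own sketch, not from the source) can put numbers on the HTH vs HTT comparison over 20 flips:

```python
import random

def p_appears(pattern, n_flips=20, trials=100_000, seed=0):
    """Estimate the probability that `pattern` shows up as a
    contiguous substring of n_flips fair coin flips."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seq = "".join(rng.choice("HT") for _ in range(n_flips))
        hits += pattern in seq
    return hits / trials

p_hth = p_appears("HTH")
p_htt = p_appears("HTT")
print(f"P(contains HTH) ≈ {p_hth:.3f}")
print(f"P(contains HTT) ≈ {p_htt:.3f}")
```

HTT should come out ahead, consistent with the head-start argument, though by a smaller margin than in the HHHH vs HHHT case.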
I would expect an enormous bias in terms of how much attention is paid to “interesting” questions—propositions we could sensibly rate near-1 or near-0 are boring because they’re so obviously true or so obviously absurd that they feel mundane—like they’re hardly a question in the first place.
It’d feel like cheating by way of degenerate examples to enumerate every common object and say “I’m extremely sure that at least one of those exists”, but that is still a lot of propositions (even more propositions available if we dare to venture that two of each thing might exist)
Or for another partial explanation: on a question that is in any actual doubt, it’s difficult to gather sufficient evidence to push a probability estimate very far into the tails. There are only so many independent observations one can make in a day.
One other risk of confounding would be the days not being truly independent of each other. If, for example, caffeine consumed on one day were to affect your sleep quality that night and thus your alertness the next day.
Ironically I read this after the correction and still thought it scanned a little oddly; as if it were suggesting that the world naturally leads us to believe that things ought to fall at different rates specifically or only while in a vacuum.
For my contribution to the bikeshedding, maybe “ought to fall at different rates, even while in a vacuum”.
I would draw a distinction within epistemic rationality between (A) wanting to know more true things, and (B) wanting to avoid believing false things.
Most of the examples given poke at a tension between learning, or potentially learning, or disseminating for others to learn, something new (so, type A epistemics according to the typology I just pulled from my nethers) versus the risks of instrumental harm incurred in the process.
Only example 3 really addresses type B, by asking whether it’s better for Cansa’s friend to believe a falsehood if that will improve their prognosis. But the distinction between believing 51% and 49% is slight enough for that to feel like a weak falsehood anyway.
I don’t know what your poll respondents had in mind, but if you asked me about my preference between epistemic/instrumental rationality, truth vs winning, my first thought would be a rather stronger type B example; I’d be asking myself whether I’d want to believe a fairly large falsehood in exchange for instrumental benefits. Which might be sidestepping part of the intent of the poll question, but also might explain some of the apparent discrepancy.
plus some ancillary assumptions (more-general “theory” beliefs like “skepticism of medical authorities” can cause more-specific “claim” beliefs like “vaccines have harmful additives”, but not vice versa)
This jumped out to me because it seems potentially untrue; I would expect there to exist at least some instances where people’s belief about the specifics comes prior to, and is what causes, their beliefs about the general theory.
P(E|C) ~= 1 (where “|” stands for “given”). If I can say this, I can most certainly say that C causes E
Well… unless P(E|!C) ~= 1 as well, because P(E) ~= 1 regardless and C is irrelevant
I would expect the effectiveness of mockery in making people change their mind/behaviour to vary strongly based on who’s doing the mocking.
If I find myself being mocked by someone I have no particular respect for, no ongoing interaction with, or who I can judge doesn’t know what they’re talking about, that’s much easier to shrug off and deflect than if I’m being mocked by a peer or authority figure, or collectively mocked by a group I want to be part of.
Could be another source of discrepancy, if “Does mockery work?” prompts people to imagine the first type, where they try to mock a random stranger and the stranger doesn’t care; whereas asking “Have you ever changed in response to mockery?” dredges up memories of the actually effective kind of mockery.
Well, if the health gain from the health kit is large enough to outweigh the health loss from needing to run through lava afterwards, then OK, maybe that’s worth doing.
Also, even if it’s not actually enough and you’re going to come out at a small loss overall, sometimes by the magic of time discounting it still feels like a net positive to your present self, because the cost is further in the future than the gain.
I thought it was still quite an apt analogy, because we do essentially the same time-biased thing in all kinds of other contexts
If anyone knows where to find either the “questions to ask if you wanted to end your relationship” post, or more about the harms of professionalism, I’d be interested.
“Ice floats, so if the glacier is free-floating, then it melting doesn’t cause a sea level rise”
A thing I recently learned: this is only true of ice floating on fresh water.
Salt water is denser than fresh water (and the ice itself is still mostly fresh, even if it formed out of sea water), so ice floating on the sea floats a little higher than it would on fresh water. This reduces its displacement, and means that melting it does somewhat increase the water level.
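As a rough sketch of the arithmetic (the density figures are round approximations I’m assuming, not precise values): a floating mass of ice displaces its own weight in seawater, but melts into slightly less dense fresh water, so the meltwater occupies a bit more volume than the hole it leaves.

```python
RHO_FRESH = 1000.0  # density of fresh meltwater, kg/m^3 (assumed round figure)
RHO_SEA = 1025.0    # typical seawater density, kg/m^3 (assumed round figure)

def net_volume_added(ice_mass_kg):
    """Extra volume (m^3) the ocean gains when floating fresh-water ice
    of the given mass melts: meltwater volume minus displaced volume."""
    displaced = ice_mass_kg / RHO_SEA  # seawater volume displaced while afloat
    melted = ice_mass_kg / RHO_FRESH   # freshwater volume after melting
    return melted - displaced

mass = 1.0e6  # a thousand tonnes of floating ice, for illustration
extra = net_volume_added(mass)
fraction = extra / (mass / RHO_FRESH)  # extra volume as a share of the meltwater
print(f"Extra volume: {extra:.1f} m^3 ({fraction:.1%} of the melt volume)")
```

With these figures only about 2.4% of the melt volume ends up as a net addition, so the effect is real but small compared to melting land-based ice.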