What are the practical benefits of having an intuitive understanding of Bayes’ Theorem? If it helps, please name an example of how it impacted your day today
I work in tech support (pretty advanced, i.e. I’m routinely dragged into conference calls on 5 minutes’ notice with 10 people in panic mode because some database cluster is down). Here’s a standard situation: “All queries are slow. There are some errors in the log saying something about packets dropped.” So, do I go and investigate all the network cards on these 50 machines to see whether the firmware is up to date, or do I look for something else? I see people picking the first option all the time. There are error messages, so we have evidence, and that must be it, right? But I have prior knowledge: it’s almost never the damn network, so I ignore that outright and only come back to it once more plausible causes have been excluded.
Bayes gives me a formal assurance that I’m right to reason this way. I don’t really need it quantitatively—just repeating “Base rate fallacy, base rate fallacy” to myself points me in the right direction—but it’s nice to know that there’s an exact justification for what I’m doing. The alternative would be to learn tons of little heuristics (“No, it’s not a compiler bug.”, “No, there’s not a mistake in this statewide math exam you’re taking.”), but it’s better to grasp the underlying principle.
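To make the base-rate point concrete, here is a minimal sketch of the calculation with entirely made-up numbers (the priors and likelihoods are illustrative assumptions, not measurements):

```python
# Toy Bayes calculation with made-up numbers illustrating the base-rate point.
# Hypothesis H: "the network is the root cause of the slowdown."
# Evidence E: "packet-drop errors appear in the logs."

p_h = 0.02             # prior: the network is almost never the culprit
p_e_given_h = 0.9      # packet-drop errors are likely if the network really is broken
p_e_given_not_h = 0.3  # but such log noise shows up fairly often anyway

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # prints 0.058
```

Even with suggestive log messages, the posterior only climbs from 2% to about 6%: still not worth checking firmware on 50 machines before ruling out the likelier causes.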
Troubleshooting is a great example where a little probability goes a long way, thanks.
Amusingly, there was in fact an error in the GRE Subject test I once took, long ago (in computer science). All five multiple-choice answers were incorrect. I agree that conditional on disagreement between test and test-taker, the test is usually right.
Funny, I have been trying to use LW knowledge for IT-related troubleshooting (ERP software) and have usually failed, so far. I am trying to use Solomonoff induction: generate hypotheses and compare them to data. But the data is very hard to mine. I could either investigate the whole database, since in theory any of it could affect any routine, or try to see which routines ran and which branches of them (which IF conditions evaluated true and which false), and this gets me to “aha, the user forgot to check checkmark X in form Y”. But that also takes a huge amount of time. Often only 1% of a posting codeunit runs at all, and finding that 1% is hell. And I simply don’t know where to generate hypotheses from. “Anything could fail” is not a hypothesis. We have user errors, we have bugs, and we have heck-knows-what cases.
Maybe I should try the Bayesian branch, not the Solomonoff branch. Since data (evidence) is very hard to mine in this case, maybe I should look for the most frequent causes of errors instead of trying to find evidence for the current one. That means keeping a log of what each problem was and what caused it.
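A cause log like that only needs to be a list of (symptom, root cause) pairs to start paying off. A minimal sketch of how it would be used (the symptoms and causes here are invented examples, not real data):

```python
# Hypothetical incident log: (symptom, root cause) pairs accumulated over time.
from collections import Counter

incident_log = [
    ("posting fails", "user forgot checkmark"),
    ("posting fails", "user forgot checkmark"),
    ("posting fails", "bad master data"),
    ("report empty", "wrong date filter"),
    ("posting fails", "actual code bug"),
]

# Empirical base rates for a given symptom: check the frequent causes first.
symptom = "posting fails"
causes = Counter(cause for s, cause in incident_log if s == symptom)
for cause, count in causes.most_common():
    print(cause, count)
```

Sorting by frequency turns the log into a checklist ordered by prior probability, which is exactly the base-rate reasoning from the parent comment, applied without doing any explicit arithmetic.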
The Rasch model does not hate truth, nor does it love truth, but the truth is made out of items which it can use for something else.
Thank you for the concept: https://en.wikipedia.org/wiki/Base_rate_fallacy. I think I will spread this in the ERP community and see what happens.