Hey, can anyone help me find this article (likely LW, but possibly from the diaspora), especially if you happen to have read it too?
My vague memory: it discussed (among other things?) some ways of extending point-estimate probability predictions and calibration curves to situations where making the prediction itself affects the outcome, e.g. when a mind-reader or accurate simulator bases its actions on your prediction. In that case a two-dimensional probability estimate might be more appropriate: if 40% is predicted for event A, event B will have a probability of 60%; if 70% is predicted for event A, then 80% for event B, and so on, with the mapping potentially defined continuously over the whole range. (Event A and event B might be the same.) IIRC the article contained 2D charts with curves and rectangles drawn for illustration.
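To make the half-remembered idea concrete, here is a toy illustration of my own (not from the article): a "two-dimensional" prediction as a mapping from the probability announced for event A to the resulting probability of event B, linearly interpolating between the two remembered data points.

```python
# Toy illustration (my own construction, not the article's): the announced
# prediction for event A determines the probability of event B via a
# response curve f, here fit through the two remembered points
# f(0.40) = 0.60 and f(0.70) = 0.80 with linear interpolation.

def outcome_prob(predicted: float) -> float:
    """Probability of event B as a function of the announced prediction
    for event A, linearly interpolated through (0.4, 0.6) and (0.7, 0.8),
    clamped to the valid probability range [0, 1]."""
    slope = (0.80 - 0.60) / (0.70 - 0.40)  # = 2/3
    p = 0.60 + slope * (predicted - 0.40)
    return min(1.0, max(0.0, p))
```

A calibration curve would then be evaluated against this whole mapping rather than a single point estimate.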
IIRC it didn’t have many upvotes, somewhere in the low dozens, or at most the low hundreds.
Searches I’ve tried so far: Google, Exa, Gemini 1.5 with Deep Research, Perplexity, OpenAI GPT-4o with Search.
P.S. If you can’t spare the time to search for it yourself either, do you have any ideas for how it could be found?
I have found it! This was the one:
https://www.lesswrong.com/posts/qvNrmTqywWqYY8rsP/solutions-to-problems-with-bayesianism
Seems to have seen better reception at: https://forum.effectivealtruism.org/posts/3z9acGc5sspAdKenr/solutions-to-problems-with-bayesianism
The winning search strategy was quite interesting as well, I think:
I took the history of roughly all the LW articles I have ever read; I had easy access to their titles and URLs, but not to the article contents. I fed them one by one into a 7B LLM, asking it to rate how likely, based on the title alone, the unseen article content was to match what I described above, vague as that memory may be. Then I looked at the highest-ranking candidates, and they were all duds. I did the same thing with a 70B model, et voilà: the solution was indeed near the top.
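The pipeline above can be sketched roughly like this. Note the heavy assumptions: `score_title` below is a stand-in keyword heuristic, whereas the actual run prompted a local 7B/70B LLM to rate each title; the history list and scoring scale are likewise my own illustration.

```python
# Sketch of the title-reranking search: score every (title, url) pair from
# the reading history by how well the title alone might match a vague
# description of the remembered article, then sort best-first.
# In the real run the scorer was an LLM prompt; here it is a crude
# stand-in that counts shared words, purely so the sketch is runnable.

DESCRIPTION = ("ways of extending point estimate probability predictions "
               "and calibration curves to handle problems with "
               "self-referential predictions")

def score_title(title: str, description: str) -> float:
    """Stand-in scorer: counts words shared between title and description.
    Replace with an LLM call returning e.g. a 0-10 relevance rating."""
    return len(set(title.lower().split()) & set(description.lower().split()))

def rank_candidates(history, description, scorer=score_title):
    """Return (title, url) pairs sorted from most to least likely match."""
    scored = [(scorer(title, description), title, url)
              for title, url in history]
    scored.sort(reverse=True)  # highest score first
    return [(title, url) for _, title, url in scored]

history = [
    ("Some unrelated post about gardening", "https://example.com/gardening"),
    ("Solutions to problems with Bayesianism",
     "https://www.lesswrong.com/posts/qvNrmTqywWqYY8rsP/solutions-to-problems-with-bayesianism"),
]
ranking = rank_candidates(history, DESCRIPTION)
```

With a weak scorer the ordering is noisy, which matches the experience above: the 7B ratings surfaced duds, while the 70B ratings put the right article near the top.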
Now I just need to re-read it to see whether it was worth dredging up. I guess once a problem starts to itch, it’s hard to resist solving it.