Why not just ask them to superforecast what would be the ideal response to “what’s the justification for prediction X”? Or are they so perfect that they don’t even consider counterfactuals?
I see superforecasting as the ability to say how likely a given event is to happen and answer with a probability. That's not the same skill as coming up with a verbal justification.
Justification = P(“Opponent will agree that S is a good justification for the prediction” | “I say S as my justification for the prediction”). If there are no division-by-zero errors, that should work.
You're assuming that the checking process takes zero time, so that you can run it over every possible string instantly.
If the agent is something like an LLM that takes milliseconds to run the check, or a human who queries their intuition, this won't happen in zero time.
Then they aren't perfect, are they?
But I guess I can see your point: the algorithm requires a lot of time and compute, and maybe anything with that many resources could answer questions like that with an exhaustive enough search. I guess the problem, as you define it, is underconstrained.
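To make the exhaustive-search reading concrete, here's a minimal sketch. All names are hypothetical, and `score` is just a stub standing in for P(opponent agrees | I say S); the point is only that the agent evaluates every candidate string and keeps the argmax, which is where the time/compute cost comes from.

```python
def score(s: str) -> float:
    # Hypothetical stand-in for the forecaster's estimate of
    # P("opponent agrees S is a good justification" | "I say S").
    # A real agent would query a model or human intuition here,
    # and each query would take non-zero time.
    keywords = ("because", "evidence")
    return sum(1 for w in keywords if w in s) / len(keywords)

def best_justification(candidates: list[str]) -> str:
    # Exhaustive search: score every candidate string, return the argmax.
    # Cost scales linearly with the number of candidates times the
    # per-check cost, which is why "zero-time checking" is doing
    # all the work in the original argument.
    return max(candidates, key=score)

candidates = [
    "X because the base rate is high",
    "X, trust me",
    "X because the evidence points that way",
]
print(best_justification(candidates))
```

With this stub, the third candidate wins because it matches both keywords; swapping in a real probability estimator doesn't change the structure, only the per-check cost.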
I meant perfect in the sense of the quality of the prediction, not the amount of effort it takes to make it.