You’re mistaking the probability of the hypothesis given the AI’s knowledge for the likelihood ratio that the AI’s statement provides over your own prior knowledge.
AI is a truth-detector that is wrong 1 time in 1000. If the detector says “true”, I shift my certainty upwards by a factor of 1000. “AI’s knowledge” doesn’t enter this picture.
So if someone rolls a 10^6-sided die and tells you they’re 99.9% sure the number was 749,763, you would only assign it a posterior probability of 10^-3?
I see. I used the wrong state space to model this. The answer above is right if I expect a statement of the form “I’m 99.9% sure that N was/wasn’t the number”, and have no knowledge of how N is related to the number on the die. Such statements would be correct 99.9% of the time, but I would only expect to hear positive statements 0.1% of the time, and 99.9% of those would be incorrect.
The correct model is to expect a statement of the form “I’m 99.9% sure that N was the number”, with no negative option, only a choice of N. For such statements to be correct 99.9% of the time, N must be the right answer 99.9% of the time, as expected.
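A minimal numeric sketch of the two models, assuming a fair 10^6-sided die and a reporter who is wrong 1 time in 1000 (the exact likelihood ratio, 999 vs. 1000, doesn’t matter here):

```python
# Wrong model: treat the report as a binary truth-detector with a fixed
# likelihood ratio of ~1000 in favor of the named number.
prior = 1e-6  # P(the die showed 749,763) before hearing anything
likelihood_ratio = 0.999 / 0.001  # P(says "true" | true) / P(says "true" | false)

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
naive_posterior = posterior_odds / (1 + posterior_odds)
print(naive_posterior)  # ~10^-3, the posterior from the comment above

# Right model: the reporter always names some N, and statements of the form
# "I'm 99.9% sure N was the number" are calibrated, i.e. correct 99.9% of
# the time. The posterior for the named N is then simply 0.999.
calibrated_posterior = 0.999
print(calibrated_posterior)
```

The first calculation reproduces the 10^-3 answer: a tiny prior times a factor of ~1000 only gets you to one in a thousand. The second model skips the odds update entirely because calibration directly pins down the posterior.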