I think that a prediction market is only an instrument for comparing existing estimates of logical uncertainty, and a more straightforward instrument for calculating them may be needed.
One candidate is the share of similar statements from the same reference class that are known to be true. For example, if I want to know the n-th digit of pi, I can estimate the logical uncertainty as 1 in 10, since the reference class consists of 10 possible digits, each equally likely to be correct. If I want an a priori probability that a theorem about natural numbers is true, I may use the share of true statements about natural numbers (no longer than m symbols) among all possible statements of that length. Since any statement belongs to several reference classes, I would get several estimates this way, and by taking their median I would probably come close to the best estimate available to me.
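A minimal sketch of this reference-class estimate in Python. The two reference classes used here (statements of the form "a + b == c" and "a * b == c" over digits 0–9) are illustrative assumptions, not from the text; the point is just counting the true fraction within each class and taking the median across classes:

```python
import itertools
import statistics

def truth_fraction(statements):
    """Share of statements in a reference class that evaluate to true."""
    results = [stmt() for stmt in statements]
    return sum(results) / len(results)

# Hypothetical reference class 1: all statements "a + b == c" with a, b, c in 0..9.
ref_class_sums = [
    (lambda a=a, b=b, c=c: a + b == c)
    for a, b, c in itertools.product(range(10), repeat=3)
]

# Hypothetical reference class 2: all statements "a * b == c" over the same range.
ref_class_products = [
    (lambda a=a, b=b, c=c: a * b == c)
    for a, b, c in itertools.product(range(10), repeat=3)
]

# Each reference class yields one prior estimate; the median aggregates them.
estimates = [truth_fraction(ref_class_sums), truth_fraction(ref_class_products)]
prior = statistics.median(estimates)
print(f"Per-class estimates: {estimates}, median prior: {prior:.4f}")
```

The median is a natural aggregator here because it is robust to one badly chosen reference class dominating the estimate.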
The algorithm described above has the advantage that it does not require complex AI or general intelligence to compute: it just compresses all prior knowledge about how many theorems have turned out to be true, using rather simple calculations.
These calculations could be done more effectively by adding machine learning to predict which theorems are likely to be true, using the same architecture as the AlphaZero board-game engine. AlphaZero combines a Monte Carlo tree search (over the space of possible future games) with an "intuition" neural-net component, trained on previous games to predict which moves are likely to be winning.
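As a toy illustration of the "intuition" half of that analogy (the search half is omitted), one could train a simple learned model to predict statement truth from superficial features, then use its output as a prior. Everything below, the statement shape, the features, and the plain logistic-regression training loop, is an illustrative assumption, not AlphaZero's actual architecture:

```python
import itertools
import math
import random

def features(a, b, c, is_plus):
    # Deliberately superficial features: the model sees the operands and the
    # operator, but never computes the arithmetic itself.
    return [1.0, a / 9, b / 9, c / 9, float(is_plus)]

def predict(w, x):
    # Logistic model: probability that the statement is true.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Build the labelled data set by brute force: every statement of the form
# "a + b == c" or "a * b == c" over digits 0..9, labelled true or false.
data = []
for a, b, c in itertools.product(range(10), repeat=3):
    data.append((features(a, b, c, True), float(a + b == c)))
    data.append((features(a, b, c, False), float(a * b == c)))

# Plain stochastic gradient descent on log loss.
random.seed(0)
w = [0.0] * 5
for _ in range(100):
    random.shuffle(data)
    for x, y in data:
        p = predict(w, x)
        w = [wi + 0.1 * (y - p) * xi for wi, xi in zip(w, x)]

# The trained model now supplies a learned prior for a given statement,
# playing the role of the "intuition" net that guides further search.
print("P(true | '3 + 4 == 7'):", round(predict(w, features(3, 4, 7, True)), 3))
print("P(true | '3 * 4 == 7'):", round(predict(w, features(3, 4, 7, False)), 3))
```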