making the correct inference from a limited set of data, chosen at pseudo-random.
That doesn’t constitute a strategy. What is the set of data you use to infer whether the RH is true? To infer whether ZFC is consistent? To decide what play to make in a game of Go against an algorithm you know, but running on a faster computer? To decide whether to one-box or two-box in Newcomb’s problem against a flawed simulator? To decide whether to cooperate or defect in the prisoner’s dilemma against an opponent you understand completely and who understands you completely?
(These last two problems cannot be posed in the framework I gave, but they do involve broadly similar issues. A deeper understanding of probabilistic reasoning seems to me to be essential to get the “right answer” in these cases.)
I was thinking more of problems along the lines of “here is the entire history of object X’s behavior and lots of related stuff, which you do not have enough time to process completely. What will X do next?”
A good set of data for the Riemann hypothesis and similar questions would be the history of mathematics and the opinions of mathematicians: how often have similar expert opinions turned out accurate or inaccurate? This seems roughly like what I was talking about, though the inhomogeneity of real-world data means you can certainly beat random picking by going after low-hanging fruit.
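The base-rate idea above can be sketched with a toy Bayesian update. The counts here are entirely hypothetical placeholders (no real survey of conjectures is implied); the point is just that a historical track record of similar expert opinions yields a probability estimate via Laplace's rule of succession.

```python
# Hypothetical illustration: suppose a survey of long-standing conjectures
# with a similar expert consensus found some eventually proved and some
# refuted. Updating a uniform Beta(1, 1) prior on those counts gives a
# base-rate estimate for a conjecture like RH.
proved, refuted = 18, 3                  # made-up counts, purely illustrative
alpha, beta = 1 + proved, 1 + refuted    # Beta posterior parameters
posterior_mean = alpha / (alpha + beta)  # Laplace's rule of succession
print(f"Estimated P(conjecture true): {posterior_mean:.3f}")
```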
The Go problem is interesting. Since Go can in principle be “solved,” if the other algorithm plays optimally you’re either screwed or, if your computer is fast enough, you’ll simply play optimally too. If neither of those holds and you have time to prepare, you could train an algorithm specifically to beat the known opponent, which, if done optimally, would again not pick randomly… guess I was wrong about that.
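The “train specifically to beat a known opponent” point can be made concrete in a much smaller game than Go. This is a minimal sketch in matching pennies (an assumption for illustration, not anything from the discussion): once the opponent’s mixed strategy is fully known, the best response is whichever action maximizes expected payoff against it, and no randomization is needed.

```python
# Known, fixed opponent strategy (hypothetical probabilities).
opponent = {"heads": 0.7, "tails": 0.3}

# Matching-pennies payoffs for us: +1 if moves match, -1 otherwise.
payoff = {("heads", "heads"): 1, ("heads", "tails"): -1,
          ("tails", "heads"): -1, ("tails", "tails"): 1}

def expected(my_move):
    """Expected payoff of my_move against the known opponent mixture."""
    return sum(p * payoff[(my_move, opp)] for opp, p in opponent.items())

# The exploiting strategy is deterministic: pick the single best response.
best = max(payoff and opponent, key=expected)
print(best, expected(best))  # prints: heads 0.4
```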
The last two don’t seem to exhibit the property you’re talking about, and instead the solutions should be fairly complete.
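For the prisoner’s dilemma with complete mutual understanding, a toy “program equilibrium” sketch shows what such a fairly complete solution can look like. Everything here is illustrative (the names and the source-as-string encoding are my assumptions): each player submits a program that can read the other’s source, and a “CliqueBot”-style strategy cooperates exactly when the opponent’s source is identical to its own, so two copies cooperate while defecting against everything else.

```python
# The strategy's source is stored as a string so it can inspect itself and
# compare against the opponent's submitted source (illustrative encoding).
CLIQUE_BOT_SRC = 'lambda opp_src: "C" if opp_src == CLIQUE_BOT_SRC else "D"'
clique_bot = eval(CLIQUE_BOT_SRC)

print(clique_bot(CLIQUE_BOT_SRC))   # mirror match: cooperates, prints "C"
print(clique_bot('lambda s: "D"'))  # any other program: defects, prints "D"
```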