I hope you stick around—LW needs people who’ve read Popper. (However, I take that back if it turns out that you’ve only read Deutsch’s simplified, evangelical caricature of him.)
Solomonoff Induction will keep going wrong in random and perverse ways because it pays no heed to theories as explanations.
This is at best unclear. If a person, or the entire scientific community, were given the task of competing with Solomonoff induction to predict an incoming data stream, then either (i) Solomonoff induction would eventually arrive at “the correct theory” (at least in the sense that it no longer ‘keeps going wrong’), or (ii) the human scientists would ‘keep going wrong in random ways’ (i.e. ways that seem random to the scientists), or both.
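For what it's worth, horn (i) can be given a precise form. A standard convergence result for Solomonoff induction (due to Solomonoff, with later refinements by Hutter) says that if the data stream is in fact sampled from some computable measure mu, then the universal predictor M's total expected squared prediction error is finite and bounded by the complexity of mu. A sketch of the bound, where K(mu) is the length of the shortest program computing mu:

```latex
% Solomonoff's convergence bound, assuming the stream x_1 x_2 ...
% is drawn from a computable measure \mu; K(\mu) is the length in
% bits of the shortest program computing \mu.
\sum_{t=1}^{\infty} \mathbb{E}_\mu\!\left[
  \bigl( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \bigr)^2
\right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```

Since the sum is finite, the per-round error must go to zero, so on a computable stream M cannot 'keep going wrong' indefinitely; the open question is only how fast, and at what computational cost.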
Talking about the need for ‘good explanations’ over and above mere predictive success is all very well, but it’s not that helpful unless you can say something about what makes an explanation good. (It would be even more helpful if you could say something about how an AI could recognize a good explanation.)
I think the point you should be concentrating on is that human scientists do not face the same problem-situation as Solomonoff induction. We can choose what questions to work on, design experiments, and we’re also blessed/cursed with the lack of any stable boundary between ‘data’ and ‘theory’.
I hope you stick around—LW needs people who’ve read Popper. (However, I take that back if it turns out that you’ve only read Deutsch’s simplified, evangelical caricature of him.)
Turning into a simplified caricature of Popper appears to be a memetic hazard of reading Popper. Not as bad as reading Rand, though.
Would you care to explain in what way Deutsch is a simplified caricature of Popper, or are you just going to be content to make assertions like the other commenter? I doubt very much that you have any idea what Popperians such as David Deutsch are like.
Despite the context, I didn’t have Deutsch specifically in mind. Just a general observation over the years.
The issue under discussion here is wider than Popper and appears elsewhere, in the disagreement between small-world and large-world Bayesians, the former being the side that I guess Popper would have been on. (It’s a very long time since I read Popper, and I do not recall if he ever says anything about Bayesian inference.)

Must Bayesian inference of the sort that takes a parameterised model and a prior distribution on the parameters, and fits it to a data set (a process whose validity everyone except a few hard-core frequentists agrees on), be subordinate to some other process when the data do not fit your model at all, for any parameters, and you need to find a different model? The small-worlders say yes, and the large-worlders say no.

The small-worlders scoff at the large-worlders, the large-worlders exhibit the Solomonoff universal prior, and if the small-worlders are still paying attention, they usually just scoff some more. Sometimes they will point out that Solomonoff induction is uncomputable, and that computable approximations are exponentially infeasible, relying as they do on merely enumerating all possible hypotheses in order of size. I haven’t seen a large-worlder response to that anywhere, not even here on LessWrong, where large-worldism is the default view.
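To make the object being argued over concrete, here is a toy sketch in Python of the mixture machinery, with everything that true Solomonoff induction leaves unbounded cut down to a finite, assumed hypothesis class (periodic bit patterns, my invention for illustration, not anything from either camp). Each hypothesis gets prior weight 2^-(description length), in the spirit of the universal prior:

```python
# Toy stand-in for Solomonoff induction, which is uncomputable.
# Hypotheses here are deterministic periodic bit patterns; their
# "description length" is taken to be the pattern length in bits.
# All names and the hypothesis class are illustrative assumptions.
from itertools import product

MAX_PERIOD = 8  # the class already holds 2 + 4 + ... + 2^8 = 510 hypotheses

def hypotheses():
    """Enumerate hypotheses in order of size, weighted 2^-length."""
    for period in range(1, MAX_PERIOD + 1):
        for pattern in product([0, 1], repeat=period):
            yield pattern, 2.0 ** (-period)

def predict_one(history):
    """Posterior-weighted probability that the next bit is 1."""
    num = den = 0.0
    for pattern, prior in hypotheses():
        # Deterministic hypotheses: likelihood is 1 if the pattern
        # reproduces the whole history, 0 otherwise.
        if all(b == pattern[t % len(pattern)] for t, b in enumerate(history)):
            num += prior * pattern[len(history) % len(pattern)]
            den += prior
    return num / den if den else 0.5

stream = [0, 1, 1] * 3          # a period-3 source: 011 011 011 ...
print(predict_one(stream))      # 0.0: every surviving hypothesis
                                # continues the pattern with a 0
```

The point of the exercise is visible in the enumeration: the class doubles with every extra bit of description length, which is the exponential blow-up the small-worlders point at.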
But then the small-worlders, asked just what it is that they put above Bayes, reply only with magic names, such as “human judgement”, “criticism”, or “argument”. All they are doing is describing what the process feels like, not what it is. In chapter 7 of The Fabric of Reality, Deutsch circles around and around the point, but the point never appears.
ETA: And they generally deny that the process can be elucidated any further. Popper himself says (as quoted elsewhere in these threads) that there is no method.
So for me, that is where things stand, and I am not convinced by either side.
Some further context for the above. I’ve been drafting a posting on this for a while, which this comment and that one are based on.
Deutsch does say what makes an explanation good. He has a TED talk about it, and a new book, The Beginning of Infinity (out two days ago in the UK), which has this as a major theme. Good explanations are hard to change while they still solve the same problem. The book has examples and elaboration.
I have read both Popper and Deutsch. Could you explain your comment about Deutsch?
You say human scientists do not face the same problem situation as Solomonoff Induction. But both are trying to create knowledge, right? In Solomonoff Induction it is assumed that all knowledge comes to us via sensory organs as data streams, and the task of the knowledge creator is to compress that data with the aim of making good predictions. This, it is held, is in some sense what scientists and all people do when they create knowledge, and it is what the ideal knowledge creator should do. Critical rationalism rejects the idea that all knowledge comes to us via the senses—that is empiricism—and it rejects the idea that theories are just instruments for making predictions—that is instrumentalism.
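For readers who want the picture being attributed here in its standard formal dress (this is the textbook statement, not something from the parent comment): Solomonoff's predictor weights every program p for a universal prefix machine U by two to the minus its length, so that short programs, i.e. good compressions of the data seen so far, dominate the prediction:

```latex
% The universal semimeasure and its prediction rule; U(p) = x*
% means program p outputs a string beginning with x, and \ell(p)
% is the length of p in bits.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\qquad
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n}\, x_{n+1})}{M(x_{1:n})}
```

Whether this compression-for-prediction picture exhausts what scientists do is, of course, exactly what is in dispute.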
You seem to think that predictive success can come without underlying explanations, as though explanations are optional. They are not. We can’t just neglect explanations and think we can get on with the process of building an AI. That we cannot formalize our current knowledge about explanations in a nice piece of mathematics should not be a deterrent to trying to learn more.
I wonder what they think of the discussion of the Oracle in The Fabric of Reality, ch. 1.