The Critical Rationalist View on Artificial Intelligence

Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (e.g., here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known, and that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI, and also how this speaks to things like the Friendly AI Problem. Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive-by dismissal. Less Wrong holds that making progress on AI is one of the world’s urgent problems. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, then you are not taking the urgency of AI seriously.

Universal Knowledge Creators

Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As Karl Popper first realized, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized, and we can and do change those. All of this constitutes an evolutionary process: like biological evolution, it proceeds by variation (guessing) and selection (criticism). This process is fallible: guaranteed certain knowledge is not possible, because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such an idea, then we have a new problem situation about how to proceed, and we try to solve that. [2]
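
To show just the shape of this process, here is a minimal structural sketch (my illustration, not part of the original argument). The `guess` and `criticize` functions are hypothetical placeholders for the creative steps which, as discussed later in this post, nobody yet knows how to formalize:

```python
# A structural sketch of the conjecture-and-criticism loop described
# above. The `guess` and `criticize` callables are hypothetical stubs:
# CR holds that the creative steps cannot yet be formalized, so this
# shows the shape of the process, not an implementation of it.

def solve(problem, guess, criticize, max_rounds=100):
    """Return the first conjecture that withstands all known criticisms."""
    criticisms = []                                # errors found so far
    for _ in range(max_rounds):
        conjecture = guess(problem, criticisms)    # creative step: no known algorithm
        error = criticize(conjecture, problem)     # creative step: no known algorithm
        if error is None:
            return conjecture                      # tentatively accepted, never certain
        criticisms.append(error)                   # the error informs the next guess
    return None                                    # no survivor: a new problem situation
```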

Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. Your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs’ brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator. This would be a remarkable feat because it would require knowledge of how to program an AI and also of how to physically carry out the reprogramming, but your dog would no longer be confined to its pre-programmed repertoire: it would be a person.

The reason there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general-purpose chips and dogs’ brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it. The CR method described above for how people create knowledge is universal because there are no limits to the problems it applies to. How would one limit it to just a subset of problems? To implement that would be much harder than implementing the fully universal version. So if you meet an entity that can create some knowledge, it will have the capability for universal knowledge creation.
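
As a concrete illustration of how cheap computational universality is (my example, not the author’s), here is a complete interpreter for Brainfuck, a Turing-complete language, in roughly twenty lines of Python. Restricting such a device to only some subset of programs would require extra machinery to enforce the restriction, not less:

```python
# A full interpreter for a Turing-complete language (Brainfuck):
# universality in about twenty lines.

def run(code: str, stdin: str = "") -> str:
    tape, ptr, out, inp = [0] * 30000, 0, [], list(stdin)
    # Pre-match brackets so loops can jump in one step.
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return "".join(out)

print(run("++++++++[>++++++++<-]>+."))  # prints "A"
```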

These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change—it may have been a small change—crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book The Beginning of Infinity.

People will point to systems like AlphaGo, the Go-playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human-level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts. It cannot learn how to ride a bicycle or post to Less Wrong. If it could do such things it would already be fully universal, as explained above. Like the dog’s brain, AlphaGo uses knowledge that was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity.

As human beings are already universal knowledge creators, no AI can exist at a higher level. AIs may have better hardware and more memory, etc., but they will not have better knowledge creation potential than us. Even the hardware/memory advantage is not much of an advantage, because human beings already augment their intelligence with devices such as pencil and paper and computers, and we will continue to do so.

Becoming Smarter

Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning, but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. And, most of all, by learning good philosophy, for it is in that field that we learn how to think better and how to live better. All this knowledge can only be learned through the creative process of guessing ideas and error-correction by criticism, for that is the only known way intelligences can create knowledge.

It might be argued that AIs will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between the speed of knowledge creation and the speed of the underlying hardware. Humans do not use the computational resources of their brains to the maximum; that is not the bottleneck to us becoming smarter faster, and it will not be for AI either. How fast you can create knowledge depends on things like what other knowledge you have, and some ideas may be blocking other ideas. You might have a problem with static memes (see The Beginning of Infinity), for example, and these could be causing bias, self-deception, and other issues. AIs will be susceptible to static memes, too, because memes are highly adapted ideas evolved to replicate via minds.

Taking Children Seriously

One implication of the arguments above is that AIs will need parenting, just as we must parent our children. CR has a parenting theory called Taking Children Seriously (TCS). It should not be surprising that CR has such a theory, for CR is, after all, about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. It gets dismissed as “extremist” or “nutty”, as if these were good criticisms rather than just the smears they actually are. Nevertheless, TCS is important, and especially so for those who wish to raise an AI.

One idea TCS has is that we must not thwart our children’s rationality, for example by pressuring them and making them do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason, and this implies being prepared for the possibility that we are wrong about whatever matter is in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.

AIs will face the same problem of bad parenting practices, and this will likewise harm their intellectual development. So AI researchers should be thinking right now about how to prevent this: they need to learn how to parent their AIs well. If they do not, AIs will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact necessary to do AI in the first place.

Critical Rationalism and TCS say you cannot upload knowledge into an AI. The idea that you can is a version of the bucket theory of the mind, which says that “there is nothing in our intellect which has not entered it through the senses”. The bucket theory is false because minds are not passive receptacles into which knowledge is poured. Minds must always selectively and actively think. They must create ideas and criticism, and they must actively integrate their ideas. Editing the memory of an AI to give it knowledge means none of this would happen. You cannot upload knowledge into an AI or make it acquire knowledge; the best you can do is present something to it for its consideration and persuade it to recreate the knowledge afresh in its own mind through guessing and criticism about what was presented.

Formalization and Probability Theory

Some reading this will object that CR and TCS are not formal enough — there is not enough maths for Critical Rationalists to have a true understanding! The CR reply is that it is too early for formalization. CR warns that you should not have a bias about formalization: there is high-quality knowledge in the world that we do not know how to formalize, but it is high-quality knowledge nevertheless. Not yet being able to formalize this knowledge does not reflect on its truth or rigor.

At this point you might be waving your E. T. Jaynes in the air or pointing to ideas like Bayes’ Theorem, Occam’s Razor, Kolmogorov Complexity, and Solomonoff Induction, and saying that you have achieved some formal rigor and that you can program something. Critical Rationalists say that you are fooling yourself if you think you have got a workable epistemology there. For one thing, you confuse the probability of an idea being true with an idea about the probability of an event. We have no problem with ideas about the probabilities of events, but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how, or if, an idea will be refuted in the future; to assign a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all depends on the extent to which they deal with the objective probabilities of events (e.g., AlphaGo). In CR, the status of an idea is either “currently not problematic” or “currently problematic”; there are no probabilities of ideas. CR is a digital epistemology.
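
To make that distinction concrete, here is a textbook event-probability calculation of the kind CR has no quarrel with (my example, with made-up numbers): a screening test for a condition with a 1% base rate, 99% sensitivity, and a 5% false-positive rate. Every quantity below is a relative frequency of events; the CR objection is only to carrying this machinery over to the “probability that a theory is true”, where no such frequencies exist:

```latex
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            \;=\; \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99}
            \;\approx\; 0.17
```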

Induction is a Myth

Critical Rationalists also ask: what epistemology are you using to judge the truth of Bayes’ Theorem, Occam’s Razor, Kolmogorov Complexity, and Solomonoff Induction? What you are actually using is the method of guessing ideas and subjecting them to criticism: it is CR, but you haven’t crystallized it out. And nowhere in any of what you are doing are you using induction. Induction is impossible. Human beings do not do induction, and neither will AIs. Karl Popper explained why induction is a myth many decades ago and wrote extensively about it. He answered many criticisms of his position, but despite all this, people today still cling to the illusory idea of induction. In his book Objective Knowledge, Popper wrote:

Few philosophers have taken the trouble to study—or even to criticize—my views on this problem, or have taken notice of the fact that I have done some work on it. Many books have been published quite recently on the subject which do not refer to any of my work, although most of them show signs of having been influenced by some very indirect echoes of my ideas; and those works which take notice of my ideas usually ascribe views to me which I have never held, or criticize me on the basis of straightforward misunderstandings or misreading, or with invalid arguments.

And so, scandalously, it continues today.

Like the bucket theory of the mind, induction presupposes that theory proceeds from observation. This assumption can be clearly seen in Less Wrong’s An Intuitive Explanation of Solomonoff Induction:

The problem of induction is this: We have a set of observations (or data), and we want to find the underlying causes of those observations. That is, we want to find hypotheses that explain our data. We’d like to know which hypothesis is correct, so we can use that knowledge to predict future events. Our algorithm for truth will not listen to questions and answer yes or no. Our algorithm will take in data (observations) and output the rule by which the data was created. That is, it will give us the explanation of the observations; the causes.
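
For reference, the formalization that article builds toward can be stated compactly (this is the standard textbook definition, not a quotation from the article): the Solomonoff prior assigns a data string $x$ the summed weight of every program $p$ that makes a universal prefix machine $U$ output something beginning with $x$, with shorter programs weighted exponentially more heavily, where $\ell(p)$ is the length of $p$ in bits:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```

On the CR view this inherits both problems described in the previous section: it assigns probabilities to hypotheses, and, as argued below, it treats “the data” as a given starting point.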

Critical Rationalists say that all observation is theory-laden. You first need ideas about what to observe — you cannot just have, a priori, a set of observations. You don’t induce a theory from the observations; the observations help you find out whether a conjectured prior theory is correct or not. Observations help you to criticize the ideas in your theory, and the theory itself originated in your attempts to solve a problem. It is the problem context that comes first, not observations. The “set of observations” in the quote, then, is guided by and laden with knowledge from your prior theory, but that is not acknowledged.

Also not acknowledged is that we judge the correctness of theories not just by criticizing them via observations but also, and primarily, by other types of criticism. Not only does the quote neglect this, but it over-emphasizes prediction and says that what we want to explain is data. Critical Rationalists say that what we want to do, first and foremost, is solve problems (all life is problem solving), and we do that by coming up with explanations that solve the problems, or explanations of why they cannot be solved. Prediction is therefore secondary to explanation: without the latter you cannot do the former.

The “intuitive explanation” is an example of the very thing Popper was complaining about above—the author has not taken the trouble to study or to criticize Popper’s views.

There is a lot more to be said here, but I will leave it because, as I said in the introduction, it is not my purpose to discuss this in depth, and Popper already covered it anyway. Go read him. The point I wish to make is that if you care about AI, you should care to understand CR to a high standard, because it is the only viable epistemology known. And you should be working on improving CR, because it is in this direction of improving the epistemology that progress towards AI will be made. Critical Rationalists cannot at present formalize concepts such as “idea”, “explanation”, and “criticism”, let alone CR itself, but one day, when we have deeper understanding, we will be able to write code. That part will be relatively easy.

Friendly AI

Let’s see how all this ties in with the Friendly AI Problem. I have explained how AIs will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we humans already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours, for it is in our culture that they must begin their lives. They will acquire all the memes our culture has, both the rational memes and the anti-rational memes. They will have the same capacity for good and evil that we do. They will become smarter faster through things like better philosophy, and not primarily through hardware upgrades. It follows from all of this that they would be no more of a threat than evil humans currently are. And we can make their lives better by following things like TCS.

Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.

[1] The version of CR discussed here is an update to Popper’s version and includes ideas by the quantum physicist and philosopher David Deutsch.

[2] For more detail on how this works see Elliot Temple’s yes-or-no philosophy.