misc raw responses to a tract of Critical Rationalism

Written in response to this David Deutsch presentation. Hoping it will be comprehensible enough for the friend it was written for to respond to, and maybe to a few other people too.

Deutsch says things like “theories don’t have probabilities” and “there’s no such thing as the probability of it” (content warning: every Bayesian who watches the following two minutes will hate it).

I think it’s fairly clear from this that he doesn’t have Solomonoff induction internalized; he doesn’t know how many of his objections to Bayesian metaphysics it answers. In this case, I don’t think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all of them. That would probably sound like a good thing to do to most Popperians, but they often seem to have the wrong attitudes about how (collective) induction happens, and might not be prepared to do it.
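
To make that concrete, here is a minimal sketch of what I mean (a toy coin example of my own, nothing from the talk): hold several candidate theories at once, keep a posterior weight on each, update the weights as observations arrive, and predict with the weighted mixture rather than committing to a single winner.

```python
# Three hypothetical theories about a coin's bias towards heads.
theories = {"fair": 0.5, "heads-leaning": 0.7, "tails-leaning": 0.3}
posterior = {name: 1 / 3 for name in theories}   # start with a uniform prior

observations = [1, 1, 0, 1, 1, 1, 0, 1]          # 1 = heads, 0 = tails

for x in observations:
    # Weight each theory by how well it predicted the observation...
    for name, bias in theories.items():
        posterior[name] *= bias if x == 1 else 1 - bias
    # ...then renormalise so the weights remain a probability distribution.
    total = sum(posterior.values())
    posterior = {name: w / total for name, w in posterior.items()}

# Acting under uncertainty: the predictive probability of heads is the
# posterior-weighted average over all of the theories, not the favourite's value.
p_heads = sum(posterior[name] * bias for name, bias in theories.items())
print(posterior, p_heads)
```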

I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness, where they let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem. I’ve mentioned this before: I think they attribute too much of the inductive process to blind selection and evolution, and underrecognise the major accelerants of it that we’ve developed, the extraordinarily sophisticated (to extend the metaphor) managed mutation and sexual reproduction, and (to depart from the metaphor) the conscious, judicious, uncertain but principled design that the discursive subjects engage in, which is now primarily driving it.

He generally seems to have missed some sort of developmental window for learning Bayesian metaphysics or something; the reason he thinks it doesn’t work is that he visibly hasn’t tied together a complete sense of the way it’s supposed to. Can he please study the Solomonoff inductor and think more about how priors fade away as evidence comes in, about the inherent subjectivity a person’s judgements must necessarily have as a consequence of their knowing different subsets of the evidence base, and about how there is no alternative to that? He is reaching towards a kind of objectivity about probabilities that finite beings cannot attain.
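
Here is the kind of thing I mean by priors fading (again a toy coin example of my own, not anything from the video): two observers start with strongly opposed priors, see the same evidence, and end up nearly agreeing, while observers who have seen different subsets of the evidence base will still legitimately differ.

```python
# Two observers with opposite Beta priors about a coin's heads-rate see the same
# data; their estimates converge, because the evidence eventually swamps the prior.

def posterior_mean(prior_heads, prior_tails, heads, tails):
    # Beta(a, b) prior + coin-flip data -> Beta(a + heads, b + tails) posterior,
    # whose mean is (a + heads) / (a + b + heads + tails).
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

optimist = (20, 2)   # strongly expects heads
sceptic = (2, 20)    # strongly expects tails

for n in (0, 10, 100, 1000):
    heads = round(0.6 * n)        # data drawn from a 0.6-heads coin
    tails = n - heads
    print(n, posterior_mean(*optimist, heads, tails),
             posterior_mean(*sceptic, heads, tails))

# With no data the two disagree wildly; by n = 1000 both estimates sit near 0.6.
```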

His discussion of the alignment problem defies essential decision theory: he thinks that values are like tools, that they can weaken their holders if they are in some sense ‘incorrect’. That Right Makes Might. It is essentially Landian worship of Omohundro’s Monster from a more optimistic angle: the hope that the monster who rises at the end of a long descent into value drift will resemble a liberal society that we would want to build.

Despite this, his conclusion that a correct alignment process must have a value learning stage agrees with what the people who have internalised decision theory are generally trying to do (Stuart Russell’s moral uncertainty and active value learning, MIRI’s CEV process). I’m not sure who this is all for! Maybe it’s just a point for his own students? Or for governments and their defense technology programmes, who may not be thinking about this enough, but who, when they do think, tend to prefer to think in terms of national character and liberal progress? So, might that be why we need Deutsch? To speak of cosmopolitan, self-correcting approaches to AGI alignment in those fairly ill-suited terms, for the benefit of powers who will not see it as an engineering problem?

I would like to ask him if he maintains a distinction between values and preferences, between morality and (well formed) desire. I prefer schools that don’t. But I’ve never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires. Maybe they have a good and useful account of values, on which values somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don’t. Values seem too messy to prove correctness of.

Error: the prediction that humans may have time to integrate AGI-inspired mental augmentation (horse exoskeletons) in the short span between the creation of AGI and its accidental release and ascension. Neuralink will be useful, but not for that. We are stones milling about at the base of what we should infer to be a great mountain of increasing capability, and as soon as we learn to make an agent that can climb the mountain at all, it will strengthen beyond our ken long before we can begin to figure out where to even plug our prototype cognitive orthotics in.

I think quite a lot of this might be a reaction to illiberal readings of Bostrom’s Black Ball paper (he references it pretty clearly)… I don’t know if anyone has outwardly posed such readings. Bostrom doesn’t really seem eager to go there and wrestle with the governance implications himself (one such implication: a transparent society of mass surveillance; another: the long reflection, a calm period of relative stasis), but it’s understandable that Deutsch would want to engage it anyway even if nobody’s vocalizing it; it’s definitely a response that is lurking there.

The point about how a complete cessation of the emergence of new extinction risks would be much less beautiful than an infinite series of risks that decrease quickly enough to converge is interesting. I’m not convinced that those societies are going to turn out to look all that different in practice? But I’ll try to carry it with me.
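
For my own benefit, the arithmetic behind that picture, with made-up numbers purely to illustrate the convergence point: if the extinction risk faced in era n shrinks fast enough that the series of risks has a finite sum, the probability of surviving every era forever stays above zero, even though new risks never stop arriving.

```python
from math import prod

def survival_probability(risk, eras):
    # Chance of getting through the first `eras` eras, assuming independent risks.
    return prod(1 - risk(n) for n in range(1, eras + 1))

risk = lambda n: 0.5 / n**2   # an infinite, decreasing series of risks with a finite sum

for eras in (10, 100, 10_000, 1_000_000):
    print(eras, survival_probability(risk, eras))

# The survival probability settles towards a positive limit (about 0.36 with these
# numbers) rather than decaying to zero, as it would if each era carried the same risk.
```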