Episode 22: Descartes vs. Hobbes

So last time we took a look at three pivotal figures, two of whom are in dialogue with the central figure, René Descartes. We looked at the debate between Descartes and Hobbes, and how current and relevant it is to us today in the debate around the possible creation of strong AI and what that would mean both scientifically and existentially, and we then looked at what comes out of Descartes’s response to Hobbes.
Descartes builds a defense against Hobbes’s proposal for a completely materialistic artificial intelligence / computer model of the mind in terms that are drawn very strictly and I think rigorously from the central insights of the Scientific Revolution, and that seems to save the human soul from the Hobbesian onslaught, but we pay a really, really devastating price for the Cartesian defense. We have a radical disconnection between mind and body, which is radical because of how embodied your experience of yourself and your world is. A radical disconnection between mind and other minds, because you only have access to other minds through bodies! If there is no possible connection between mind and body, there’s no way you can read other people’s mental states off of their bodily behavior. Then we have the radical disconnection between mind and reality, because Descartes gives us two competing models of how we get in touch with what’s real: one is we track the mathematical (that of course was picked up by Positivism and people who advocate for science as our main access to reality) and then the other is the ‘cogito ergo sum’: all that’s left of the contact with reality is the moment where the mind touches itself, and we get a purely subjective notion of realness (that’s picked up by the Romantic tradition and is also prevalent in our world today).
We swing between the Positivistic and Romantic notions of how we decide what’s real in a completely unstable fashion. We then noted that even your connection to yourself has been undermined, because the Cartesian project is so radical in its withdrawal, so radical in its disconnection of mind from body, world, tradition, history, and culture, that all that’s in the cogito, all that is guaranteed to exist, is this moment of self-awareness. What you end up with is this completely atomic, completely autobiographically empty self adrift in the terrifying infinite spaces that Pascal talked about.
We talked about Pascal’s response to Descartes, and how Pascal was convinced (and he was right about this!) that Descartes’s attempt to deal with the anxiety of the Scientific Revolution by promoting a methodology of searching for certainty would ultimately come to ruin. And of course it has come to ruin. Instead what Pascal pointed out is that we have lost all these other ways of knowing that were so central to the Axial Revolution; all we have left is the spirit of geometry, and we have lost the spirit of finesse. We have lost the procedural knowing, the perspectival knowing, and the participatory knowing that are so integral to the transformative experiences that have been central to our discussion of the Axial Age’s legacy. Of course Pascal himself had such a transformative experience, and found the Cartesian framework incapable of addressing or articulating it.
So I gave a talk on the Meaning Crisis on Sunday in the Walled Garden, which was mostly about the agent-arena relationship and some other stuff, and among other things I pointed out that part of the crisis here is a growing sophistication of concepts that breaks down ‘useful bucket errors’ made at earlier stages. “It’s fine for Plato to say that truth is goodness and goodness is truth, but we have clearer concepts now and have counterexamples of truth that’s not good and goodness that’s not true.” Zvi pushed back: ‘well, how sure are we about those counterexamples?’
After sleeping on it, I think “actually, they’re more like type definitions than they are like counterexamples.” If one thing is about a correspondence between descriptions of worlds and our particular world, and the other thing is about a correspondence between descriptions of worlds and real numbers that indicate how much one ought to prefer those worlds, then for them to be exactly equal you need a very strange utility function. And it’s much, much harder to make them line up if you have a different version of ‘good’ than ‘consequentialist utility theory’, as that gives you different types.
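To make the type point concrete, here’s a toy sketch. Everything in it is invented for illustration: world-descriptions are modeled as strings, ‘true’ is correspondence with a fixed actual world, and ‘good’ is a consequentialist utility over descriptions.

```python
from typing import Callable

# Hypothetical modeling choices: a world-description is just a string,
# and "our particular world" is a fixed constant.
Description = str
ACTUAL_WORLD: Description = "our world"

def is_true(d: Description) -> bool:
    """Truth: correspondence between a description and our particular world."""
    return d == ACTUAL_WORLD

def goodness(d: Description, utility: Callable[[Description], float]) -> float:
    """Goodness (consequentialist version): a real number saying how much
    one ought to prefer the world the description picks out."""
    return utility(d)

# For the two to coincide exactly, you need a very strange utility
# function: one that assigns 1.0 to the true description and 0.0 elsewhere.
strange = lambda d: 1.0 if d == ACTUAL_WORLD else 0.0
assert goodness(ACTUAL_WORLD, strange) == float(is_true(ACTUAL_WORLD))
assert goodness("some other world", strange) == float(is_true("some other world"))
```

The interesting part is just the signatures: `is_true` returns a bool, `goodness` returns a float, and they only line up under that contrived utility function.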
Continuing on the type distinction, Vervaeke talks a lot about these four varieties of knowledge: propositional, procedural, perspectival, and participatory. But the Cartesian view is really only comfortable with the propositional knowing. [Actually, isn’t it also about the participatory knowing of being your mind touching itself? But I suppose that’s only a very narrow subset of participatory knowledge.]
One of the things that came up in the conversation was the way in which ‘everything’ can be compiled to propositional knowledge. My favorite example of this is Solomonoff Induction; it’s a formal method for updating on observations to determine what the underlying program for a computable world is. First, you run all possible programs to get their output streams; you compare those output streams against the actual observations you get, and you rule out all programs whose outputs disagree with actual observations; then you have a distribution over the remaining programs, weighted so that shorter programs get more prior probability, which you use to predict what future observations from the world will be. This ‘works’ if by ‘works’ you mean “couldn’t possibly be implemented.” So, good enough for the mathematicians. ;)
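The recipe above can be sketched in a few lines if we cheat and hand-pick a tiny finite stand-in for ‘all possible programs’ (the rules and their bit-lengths below are invented for this sketch; enumerating the real space is exactly what makes Solomonoff Induction unimplementable):

```python
from fractions import Fraction

# Toy stand-ins for "all possible programs": each entry is
# (name, length in bits, rule mapping step number -> output bit).
# Real Solomonoff Induction enumerates every program, which is why
# it can't actually be run.
PROGRAMS = [
    ("all zeros",        2, lambda n: 0),
    ("all ones",         2, lambda n: 1),
    ("alternate",        3, lambda n: n % 2),
    ("ones from step 2", 5, lambda n: 1 if n >= 2 else 0),
]

def predict_next(observations):
    """Rule out programs that disagree with the observations, then mix the
    survivors' predictions under the 2^-length simplicity prior."""
    survivors = [
        (length, rule) for _, length, rule in PROGRAMS
        if all(rule(i) == bit for i, bit in enumerate(observations))
    ]
    total = sum(Fraction(1, 2 ** length) for length, _ in survivors)
    step = len(observations)
    p_one = sum(Fraction(1, 2 ** length) for length, rule in survivors
                if rule(step) == 1)
    return p_one / total  # probability that the next observed bit is 1

# After seeing 0, 0 both "all zeros" and "ones from step 2" survive,
# and the shorter program dominates the prediction.
print(predict_next([0, 0]))  # prints 1/9
```

Even in this four-program world the shape of the real thing is visible: the filtering is falsification, and the length weighting is the simplicity prior doing the rest.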
But armed with the same style of argument as Solomonoff Induction, you could make the case that really all other things are propositional (in an important way). My participatory knowing—what it’s like to be me participating in an experience—cashes out in terms of physical facts about my brain, and a complicated tower of inferences that recognizes those physical facts as being an instance of participatory knowing. That complicated tower of inferences is a program that could be implemented in (and thus is present within) Solomonoff Induction’s space of hypotheses. There might be more things in heaven and earth than are dreamt of in Horatio’s philosophy, but not Solomonoff’s. [Well, except incomputable things, but who cares about those anyway.]
I notice that I’m finding myself more and more dissatisfied with this sort of ‘emulation’ argument. That is, consider the Church-Turing argument that if you have the ability to do general computation, you can implement any other method of doing general computation, and so differences between programming languages / computing substrate / whatever are philosophically irrelevant. But if you’re an engineer instead of a philosopher, this sort of emulation can actually be fiendishly difficult, and require horrifying slowdowns. In reality, thinking of things in the way they’re actually implemented helps you carve reality at the joints / think better thoughts more quickly.
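Here’s a toy version of the engineer’s complaint, with a made-up two-instruction machine: the emulated version pays Python-level dispatch on every single machine step, while the ‘native’ version just runs the loop directly.

```python
import time

def run_emulated(program, steps):
    """Interpret a made-up instruction set: INC bumps the accumulator,
    JMP0 jumps back to the start of the program."""
    acc, pc = 0, 0
    for _ in range(steps):
        op = program[pc]
        if op == "INC":
            acc += 1
        elif op == "JMP0":
            pc = -1  # the increment below lands us back at instruction 0
        pc += 1
    return acc

def run_native(steps):
    """The same counting computation, written directly in the host language."""
    acc = 0
    for _ in range(steps):
        acc += 1
    return acc

N = 500_000
# 2N interpreter steps perform N increments, so compare against N native ones.
t0 = time.perf_counter(); run_emulated(["INC", "JMP0"], 2 * N); t1 = time.perf_counter()
run_native(N); t2 = time.perf_counter()
print(f"emulation overhead: {(t1 - t0) / (t2 - t1):.1f}x")
```

Both compute the same thing, and the printed ratio is machine-dependent, but the dispatch cost never goes away; that practical gap is exactly what the philosophical equivalence argument elides.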
I’m not yet sure how to wrap this up nicely. I think there’s a pitfall where these sorts of emulators / compilers / etc. are used, not necessarily as curiosity-stoppers, but as finesse-stoppers? Like, you could learn how to build skills for dealing with this sort of thing, but because it’s philosophically solved, you don’t have the sort of drive to grow.
But I don’t have the positive version of this crystallized yet. I do think it looks something like balance, like trying to be strong in lots of different ways, instead of pretending that a particular way is all-encompassing.