So I gave a talk on the Meaning Crisis on Sunday in the Walled Garden, which was mostly about the agent-arena relationship and some other stuff, and among other things I pointed out that part of the crisis here is a growing sophistication of concepts that breaks down ‘useful bucket errors’ at earlier stages. “It’s fine for Plato to say that truth is goodness and goodness is truth, but we have clearer concepts now and have counterexamples of truth that’s not good and goodness that’s not true.” Zvi pushed back: ‘well, how sure are we about those counterexamples?’
After sleeping on it, I think “actually, they’re more like type definitions than they are like counterexamples.” If one thing is about a correspondence between descriptions of worlds and our particular world, and the other thing is about a correspondence between descriptions of worlds and real numbers that indicate how much one ought to prefer those worlds, then for them to be exactly equal you need a very strange utility function. And it’s much, much harder to make them line up if you have a different version of ‘good’ than ‘consequentialist utility theory’, as that gives you different types.
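To make the type talk concrete, here’s a minimal sketch in Python; every name in it (WorldDescription, Truth, Utility, degenerate_utility) is an illustrative invention of mine, not anyone’s actual formalism:

```python
from typing import Callable

# A "description of a world" -- here just an opaque string, purely for illustration.
WorldDescription = str

# Truth: does this description correspond to our actual world?
# Type: description -> bool.
Truth = Callable[[WorldDescription], bool]

# Goodness, on the consequentialist-utility reading: how much ought one
# prefer the described world? Type: description -> float.
Utility = Callable[[WorldDescription], float]

# For "truth is goodness" to hold exactly, your utility function would have
# to take one value on descriptions matching the actual world and another
# value on everything else -- the "very strange utility function" above.
def degenerate_utility(is_true: Truth) -> Utility:
    return lambda world: 1.0 if is_true(world) else 0.0
```

The point of the sketch is just that the two things have different type signatures, so “truth = goodness” isn’t even well-typed until you pick some coercion between them.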
Continuing on the type distinction, Vervaeke talks a lot about these four varieties of knowledge: propositional, procedural, perspectival, and participatory. But the Cartesian view is really only comfortable with the propositional knowing. [Actually, isn’t it also about the participatory knowing of your own mind touching itself? But I suppose that’s only a very narrow subset of participatory knowledge.]
One of the things that came up in the conversation was the way in which ‘everything’ can be compiled to propositional knowledge. My favorite example of this is Solomonoff Induction; it’s a formal method for updating on observations to determine what the underlying program for a computable world is. You run all possible programs to get their output streams, compare those streams against the observations you actually get, rule out every program whose outputs disagree with those observations, and then you have a distribution over the remaining programs (weighted towards shorter ones) with which to predict future observations from the world. This ‘works’ if by ‘works’ you mean “couldn’t possibly be implemented.” So, good enough for the mathematicians. ;)
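Here’s a toy sketch of the shape of that procedure in Python. To be clear about what’s assumed: the real thing enumerates every program for a universal machine (which is exactly why it can’t be implemented), so a hand-picked dictionary of three “programs” and a crude name-length prior are standing in for all of that:

```python
from typing import Callable

# Toy stand-in for "all possible programs": each maps a time step to an output bit.
programs: dict[str, Callable[[int], int]] = {
    "all_zeros": lambda t: 0,
    "all_ones": lambda t: 1,
    "alternating": lambda t: t % 2,
}

# Crude simplicity prior: 2^-len(name) standing in for 2^-(program length).
prior = {name: 2.0 ** -len(name) for name in programs}

def posterior(observations: list[int]) -> dict[str, float]:
    """Rule out every program whose output stream disagrees with the
    observations so far, then renormalize the prior over the survivors."""
    survivors = {
        name: weight for name, weight in prior.items()
        if all(programs[name](t) == obs for t, obs in enumerate(observations))
    }
    total = sum(survivors.values())
    return {name: w / total for name, w in survivors.items()} if total else {}

def predict_next(observations: list[int]) -> float:
    """Probability that the next bit is 1, mixing over surviving programs."""
    post = posterior(observations)
    t = len(observations)
    return sum(weight for name, weight in post.items() if programs[name](t) == 1)

print(posterior([0, 1]))     # {'alternating': 1.0} -- the other two are ruled out
print(predict_next([0, 1]))  # 0.0, since 'alternating' outputs 0 at t=2
```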
But armed with the same style of argument as Solomonoff Induction, you could make the case that really all other things are propositional (in an important way). My participatory knowing (what it’s like to be me participating in an experience) cashes out in terms of physical facts about my brain, plus a complicated tower of inferences that recognizes those physical facts as an instance of participatory knowing. That complicated tower of inferences is a program that could be implemented in (and thus is present in) Solomonoff Induction. There might be more things in heaven and earth than are dreamt of in Horatio’s philosophy, but not Solomonoff’s. [Well, except incomputable things, but who cares about those anyway.]
I notice that I’m finding myself more and more dissatisfied with this sort of ‘emulation’ argument. That is, consider the Church-Turing style argument that if you have the ability to do general computation, you can emulate any other method of doing general computation, and so differences between programming languages / computing substrates / whatever are philosophically irrelevant. But if you’re an engineer instead of a philosopher, this sort of emulation can be fiendishly difficult and can require horrifying slowdowns. In practice, thinking of things in the way they’re actually implemented helps you carve reality at the joints / think better thoughts more quickly.
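To sketch both halves of that point at once, here’s a minimal interpreter for Brainfuck (about the smallest commonly cited Turing-complete language) written in Python; input handling is omitted to keep it short. That the interpreter exists at all is the philosopher’s emulation argument; that every emulated instruction costs a pile of Python-level operations is the engineer’s complaint:

```python
def run_bf(code: str, max_steps: int = 10_000) -> bytes:
    """Interpret a Brainfuck program: one universal substrate emulating another."""
    tape, ptr, pc, out = [0] * 30_000, 0, 0, []
    # Precompute matching brackets so loops can jump in O(1).
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code) and max_steps > 0:
        c, max_steps = code[pc], max_steps - 1
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]   # skip the loop body
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]   # jump back to the loop head
        pc += 1
    return bytes(out)

# 65 increments and a print: each emulated step is paid for with many host-level steps.
print(run_bf("+" * 65 + "."))  # b'A'
```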
I’m not yet sure how to wrap this up nicely. I think there’s a pitfall where these sorts of emulators / compilers / etc. are used, not necessarily as curiosity-stoppers, but as finesse-stoppers? Like, you could learn to build skills for dealing with this sort of thing, but because it’s ‘philosophically solved,’ you don’t have the drive to grow.
But I don’t have the positive version of this crystallized yet. I do think it looks something like balance, like trying to be strong in lots of different ways, instead of pretending that a particular way is all-encompassing.