I appreciate the concept of “tracking which world the referent lives in”. Plausibly tracking the referent-world in your head is useful for preventing blindness to certain optimization channels, not letting the social world constrain you too much,[1] etc.
That being said, I don’t think this is true:
I claim that this example generalizes: insofar as Joe’s “fake thinking” vs “real thinking” points to a single coherent distinction, it points to thoughts which represent things in other worlds vs thoughts which represent things in our physical world.
One example from Joe’s post that got ingrained in my memory was this passage about Derek Parfit.
And I got a similar vibe, too, from various professors. I would talk to them, and I would get some strange sense like “oh, they are holding some real project. They believe in it, they believe it’s possible to make real progress, they think it’s possible to do this together.” I only met Parfit a few times, but he famously had a ton of this, even as he was dying. “Non-religious ethics is at an early stage,” he writes on the last page of Reasons and Persons. Look, there, that sense of project. And to talk, as Parfit did, about having “wasted much of his life” if his brand of moral realism were false – that too implies a sense of trying to do something.
I take this seriousness and intensity — actually trying to meet reality face-to-face — as core to the distinction that Joe is pointing at (according to my interpretation of it, at least). Parfit genuinely believed that there is something out there, Reality, that could “hit him in the face” if he got it wrong, and the part of Reality that would hit him in the face would not quite be physical. You could insist that it is physical because it is implemented on physical brains, but then we lose the distinction between real thinking and fake/fictional thinking, since all thinking is implemented on physical brains (or whatever mind-substrate). Alternatively, you could ground it in the physical world by saying that objective moral truth should have implications for the moral convergence of certain classes of agents. But I don’t think convergence was essential to Parfit’s project: it would have been coherent for him to aim at objective moral truth without assuming that any kind of moral convergence occurs.[2]
Or take math. The monster group most likely isn’t instantiated anywhere in reality except on cognitive/computational substrates that were (downstream from minds) particularly interested in abstract algebra. It is, in the sense you’re using the word here, fictional. But does that mean all thinking about the monster group is fake in the sense Joe is using the word? I don’t think so. Andrew Wiles proved Fermat’s Last Theorem, which is straightforwardly physical-world-interpretable in terms of real countable thingies. His project, which resulted in a proof of the theorem, is a good example of real thinking. But his thinking would have been no less real if, instead of FLT, he had chosen something about the monster group as his target.
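To make the “real countable thingies” reading concrete: FLT’s arithmetic content can be checked directly on small instances. Here is a minimal sketch of my own (the helper name and search bounds are illustrative, not anything from Joe’s post or Wiles’s proof) that brute-forces a^n + b^n = c^n over small integers:

```python
def fermat_counterexamples(n: int, limit: int) -> list[tuple[int, int, int]]:
    """Brute-force search for a**n + b**n == c**n with 1 <= a <= b <= limit."""
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            s = a**n + b**n
            # Float root only guides the guess; the check below is exact integer arithmetic.
            guess = round(s ** (1 / n))
            for cand in (guess - 1, guess, guess + 1):
                if cand > 0 and cand**n == s:
                    hits.append((a, b, cand))
    return hits

# n = 2 has solutions (Pythagorean triples, e.g. (3, 4, 5));
# for n = 3 the search comes back empty, as FLT guarantees.
print(fermat_counterexamples(2, 20))
print(fermat_counterexamples(3, 100))
```

The point is only that the theorem’s claim bottoms out in facts you could in principle verify by counting physical objects — which is exactly what makes it “physical-world-interpretable”, whether or not the thinking that proved it was about anything physically instantiated.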
[1] With standard caveats around ethical injunctions and the unilateralist curse.
[2] FWIW, I’m saying this as someone whose meta-ethical views are, to a first approximation, quite different from Parfit’s.