Given that I still think after all this trying that you are confused and that I never wanted to put this much work into the comments on this post, I give up trying to explain further as we are making no progress. I unfortunately just don’t have the energy to devote to this right now to see it through. Sorry.
Well, if you don’t have C, then you have to build up truth some other way: you don’t have the ability to ground yourself directly in it, because truth exists in the map rather than the territory. So then you are left to ground yourself in what you do find in the territory, and I’d describe the thing you find there as telos or will rather than truth, because it doesn’t really look like truth. Truth is a thing we have to create for ourselves rather than extract. The rest follows from that.
See my other comment, but assuming to know something about how to compute C would just already be part of C by definition. It’s very hard to talk about the criterion of truth without accidentally saying something that implies it’s not true because it’s an unknowable thing we can’t grasp onto. C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That you want to deny C is great, because I think (as I’m finding with Said) that we already agree, and any disagreement is the consequence of misunderstanding, probably because my position comes too close to sounding to you like a position that I would also reject; the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.
There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.
I’m saying the thing in the first quote, saying C exists, is the extremely impossible magic. I guess I don’t know how to convey this part of the argument any more clearly, as it seems to me to follow directly, and the objections I can think of to it hinge on assuming things whose truth is contingent on what you think about C, and thus are not admissible here.
Maybe it would help if I gave an example? Let’s say C exists. Okay, great, now we can tell if things are true independent of any mind, since C is a real fact of the world, not a belief (it’s part of the territory). Now I can establish as a matter of fact (or rather, we have no way to express this correctly, but the fact can be established independent of any subject) whether or not the sky is blue independent of any observer, because there is an argument contingent on C which tells us whether the statement “the sky is blue” is true or false. Now this statement is true or false in the territory and not necessarily in any map. We’d say this is a realist position rather than an anti-realist one. This would have to mean that this fact would be true for anything we might treat as a subject of which we could ask “does X know the fact of the matter about whether or not the sky is blue”. Thus we could ask if a rock knows whether or not the sky is blue, and it would be a meaningful question about a matter of fact, not a category error like it is when we deny the knowability of C, because then we have taken an anti-realist position. This is what I’m trying to say about there being universally compelling arguments if we assume C: the truth of matters then shifts from existing in the map to existing in the territory, and so there can be universally compelling arguments for things that are true; even if a subject is too dumb to understand them, they will still be true for that subject regardless.
I’m not sure that helps but that’s the best I can think up right now.
Eliezer uses “convincing a rock” as a self-evidently absurd reductio, but it sounds like you don’t actually see it that way?
Yep, I agree, which is why I point it out as something absurd that would be true if the counterfactual existence of C were instead factual.
Yes, exactly, you get it. I’m not sure what confusion remains or you think remains. The only remaining point seems to be here:
But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
Sorry, I mean to say “A is a mind-independent argument for the truth value of P and there exists by our construction such an A for all P that would convince even rocks”.
Nope, this is explicitly what I wanted to avoid doing, although I note I’ve already been sucked in way deeper into this than I ever meant to be.
And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.
So I don’t disagree with Eliezer’s post at all; I’m saying he doesn’t give a complete argument for the position. It seems to me the only point of disagreement is that you think knowability of the criterion of truth does not imply the existence of universally compelling arguments, so let me spell that out; that is, let me explain why it is that you can build a mind that fails to be convinced by any given argument, because Eliezer only intimates this and doesn’t fully explain it.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.
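To spell out the shape of the argument, here is a rough formalization; the notation is my own shorthand for this comment, not anything standard:

```latex
% Premise: the criterion of truth C is knowable (exists and can be used).
% Claim: for every statement P there is an argument A, grounded in C,
% that settles P one way or the other:
\text{C knowable} \;\Rightarrow\; \forall P \;\exists A \;\big[\, (C \wedge A) \vdash P \;\;\text{or}\;\; (C \wedge A) \vdash \neg P \,\big]
% Because A derives its force from C, which is a fact of the territory
% rather than of any map, A's conclusion does not depend on any
% particular mind: A is universally compelling.
```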
Since it seems we’re all in agreement C does not exist, I think any disagreement we have lingering is about something other than the point I originally laid out.
Also, for what it’s worth, since you bring up computability theory: knowing the criterion of truth would also imply being able to solve the halting problem, since you could always answer the question “does this program halt?”.
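To make the reduction concrete, here’s a toy Python sketch. The `toy_oracle` dict is a stand-in for C that only “knows” the answer for two trivial programs; a real C would have to answer this question for every program, which is exactly what computability theory rules out:

```python
def decides_halting(truth_of):
    """Given a (hypothetical) criterion-of-truth oracle `truth_of`, which
    maps any statement to True/False, build a halting decider from it.
    The oracle itself cannot actually exist; here it is just a parameter."""
    def halts(program_description):
        # Reduce the halting question to a truth question posed to C.
        return truth_of(f"the program {program_description} halts")
    return halts

# Toy stand-in for C: only answers for two trivially decidable cases.
toy_oracle = {
    "the program 'return 0' halts": True,
    "the program 'while True: pass' halts": False,
}

halts = decides_halting(lambda statement: toy_oracle[statement])

print(halts("'return 0'"))          # True
print(halts("'while True: pass'"))  # False
```

Since no such total oracle can exist (it would decide halting), this runs the implication in reverse: the unsolvability of the halting problem is further reason to doubt a knowable C.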
(Also, I love the irony that I may fail to convince you because no argument is universally compelling!)
I agree with Kaj on this point; however, I also don’t think you’re intentionally trying to respond to a strawman version of what we’re presenting. What we’re arguing for hinges on what seems to be a subtle point for most people (it doesn’t feel subtle to me, but I am empathetic to technical philosophical positions being subtle to other people), so it’s easy to conflate our position with, say, postmodernist-style epistemic relativism: although it’s drastically different from that, it’s different for technical reasons that may not be apparent from reading the broad strokes of what we’re saying.
I suspect what’s going on in this discussion is something like the following: me, Kaj, TAG, and others are coming from a position that is relatively small in idea space, but there are other ideas that sort-of pattern match to it if you don’t look too closely at the details, and these are getting confused for the point we’re trying to make, so people respond to those other ideas rather than the one we’re holding. Although we’re trying our best to cut idea space such that you see the part we’re talking about, the process is inexact: although I’ve pointed to it with the technical language of philosophy, that language is easily mistaken for non-technical language since it reuses common words (physics sometimes has the same problem: you pick a word because it’s a useful metaphor but give it a technical meaning, and then people misunderstand because they think too much in terms of the metaphor and not in terms of the precise model being referred to by the word), and it requires a certain amount of fluency with philosophy in general. For example, in all the comments on this post, I think so far only jessicata has asked for clarification in a way that is clearly framed in terms of technical philosophy.
This is not necessarily to demand that you engage with technical philosophy if you don’t want to, but it is, I suspect, why we continue to have trouble communicating (or if there are other reasons, this is a major one). I don’t know a way to explain these points that isn’t in that language and isn’t easily confused with other ideas I wouldn’t endorse, though, so there may not be much way forward in presenting metarationality to you in a way that I would agree means you understand it and that allows you to express a rejection I would consider valid (if indeed such a reason for rejection exists; if I knew one I wouldn’t hold these views!). The only other ways we have of talking about these things tend to rely much more on appeals to intuitions that you don’t seem to share, and transmitting those intuitions is a separate project from what I want to do, although Kaj’s and others’ responses do a much better job than mine of attempting that transmission.
The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.
There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.
This feels to me similar to saying “don’t worry about all that physics telling us we can’t travel faster than light, we have engineering reasons to think we can’t do it” as if this were a dismissal of the former when it’s in fact an expression of it. Further, Eliezer doesn’t really prove his point in that post if you want a detailed philosophical explanation of the point. Instead, as is often the case, Eliezer is smart and manages to come to a conclusion consistent with the philosophical details despite making arguments at a level where it’s not totally clear he can support the claims he’s making (which is fine because he wasn’t writing to do that, but it does make his words on the subject less relevant here because they’re talking to a different level of abstraction).
Thus, it seems that you’re just agreeing with me even if you’re talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that’s not clear to me from what you wrote?
Hmm, I think there is some kind of category error happening here: you seem to think I’m asking for universally compelling arguments, when in fact I agree they don’t and can’t exist, as a straightforward corollary of epistemic circularity. You might feel that I am asking for them, though, because I think that assuming you know the criterion of truth, or can learn it, would be equivalent to saying you could find a universally compelling argument, because this is exactly the positivist stance. If you disagree, then I suspect whatever disagreement we have has become extremely esoteric, since I don’t see a natural position from which you could claim both that the criterion of truth is knowable and that there are no universally compelling arguments.
I’m not sure it’s all so deep as you suppose. Instead, I think Facebook is good enough for our purposes, and other tools have other tradeoffs that make them unappealing to many users. For example, things like Discord are unappealing to me because they expect you to treat them like chat rather than email, unlike Facebook, which is more like email than chat; I stay away from Discord because I don’t want to be pulled on in the ways chat-like things pull on your attention. Other people want things that are more like Discord and less like Facebook, so they make the opposite choice. Same goes for other possible options like Twitter, Tumblr, etc.
Would-be disruptors like to talk a lot about how inertia is keeping people on old things, but given that people will quickly abandon old things with lots of supposed inertia as soon as something better enough comes along, I think it’s more likely that Facebook is just good enough that the people using it reasonably don’t abandon it, because there isn’t something better enough to be worth the switching cost. That we could make something slightly better (or even much better) along certain dimensions doesn’t mean we can make something better enough in general (yet).
“Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.”
Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?
(sorry; I can’t seem to nest blockquotes in the comments; that’s the best I could do)
For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists, because Eliezer didn’t hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks). I wish all this disagreement were just a simple matter of politics over who gets to use what names, but it’s not, because there’s a real disagreement over epistemology. Given that “rationality” was always a term that was bound to get conflated with the rationality of high modernism, it’s perhaps not surprising that those of us who got fed up with the positivists ended up giving ourselves a new name.
This is made all the more complicated because Eliezer does specifically call out positivism as a failure mode, so it makes pinning people down on this all the more tricky because they can just say “look, Eliezer said rationality is not this”. As the responses to this post make clear, though, the positivist streak is alive and well in the LW community given what I read as a strong reaction against the calling out of positivism or for that matter privileging any particular leap of faith (although positivists don’t necessarily think of themselves as doing that because they disagree with the premise that we can’t know the criterion of truth). So this all leads me to the position that we have need of a distinction for now because of our disagreement on this fundamental issue that has many effects on what is and is not considered to be useful to our shared pursuits.
However, there are causal pathways through which we can evaluate whether or not our brains are tracking reality. They have been extensively written about on LessWrong over the years, and a large amount of the core material is collected in a book.
For some, indeed most, parts of our brains this seems true, but the point is that it’s not all parts, and not reliably enough, so we are left with some doubt about what’s really going on.
So, all of the above quotes appear to be obviously, trivially false. Is there some other interpretation, or are they mere deepities?
My point is that our naive use of our senses often deceives us. This is not meant as a line of evidence in support of my position, but as an evocative experience you’ve probably had that is along the same lines as the thing I’m gesturing at. It is of course different in that we know those things are illusions because it turns out we have more information than we initially think we do, so I am more interested here in the experience of finding that you ask your senses for information about the world and get back what turns out to be misinformation, so that you have something concrete to grasp onto as a referent for the kind of thing I’m pointing at when I make the more general point about the problem of perception.
Thank you for correcting me on the blind spot thing.
I remain sceptical about this unquestionable core. The argument for its existence looks isomorphic to the proof of God as first cause, first knowledge, and first good. But I’ll leave that aside. What constitutes working with it?
It is related, but only because it’s the existence of the unexaminable core that creates the free variable that allows us to pick the leap of faith we want to take, be it to God or something else. In fact this is what I would accuse most rationalists of: taking a leap of faith to positivism (that we can establish the truth value of every assertion, or more properly because rationalists are also Bayesians the likelihood of the truth of every assertion), even if it’s done out of pragmatism. Working with the unexaminable means remaining deeply skeptical that we know anything or even can know anything and considering the possibility that we are deeply deluded. Most of the time this doubt ends up working out in favor of rationality, but sometimes it seems to not, or at least you’re less certain that it does. This invites us to reconsider our most fundamental assumptions about the world and how we know it and be less sure that things are as they seem.
First, to give my comment some context, I’m vegan, so I’ve decided to act to reduce the impact I have on animal suffering in at least the direction of removing my participation in activities that facilitate the creation of more animal suffering. That said, I don’t really think in terms of rights for animals, although sometimes as a matter of political expedience it may make sense to talk in that language. Instead I think in terms of responsibility that comes from self-awareness: we know what we are doing unlike non-human animals, and so we have a responsibility given what we know to act in ways that help animals (and all life) rather than hurt them. Talk of rights has more to do with how we carry out that responsibility in the systems we find ourselves in.
So jumping back in here: my original line of comment was towards cousin_it, who seemed (to me) to be suggesting some choice of unquestioned core the way we pick axioms, rather than jumping past that to the real core. I think this is interesting and worth saying a little about, because the reasons why it’s unquestionable (at least for a practical sense of questioning to which we can reasonably expect an answer) get at the heart of the epistemological issues I see as the root of the postrationalist worldview.
I’ve largely emphasized the problem of the criterion because that’s the formulation of this epistemological issue with the most precedent in Western philosophy, but we see the same issue pop up in mathematics and logic as the problem of induction, in analytic philosophy as the grounding problem, and in Bayesian epistemology as the question of the universal prior. But given the direction of this discussion, I’d like to bring up the approach to it from the phenomenological tradition through the problem of perception.
The problem of perception is that, if we use our senses to learn about the world, then we cannot trust our senses to reliably provide us information about the world. You’ve no doubt experienced this first hand if you’ve ever seen an optical illusion, felt imagined touches, or had your smell and taste tricked by unusual combinations of ingredients into making you think you were eating something other than you were. Or our senses may have blind spots, the way you can’t directly look into the center of your own pupil because there’s a blind spot in the middle of your vision. And these are just the times we are able to notice something weird is happening; we literally don’t and can’t know about the things our senses may obscure from us.
If you practice meditation or phenomenological reduction, you’ll find there is a core loop you can’t bracket away of some thing observing itself (or rather in epoche you find you just keep bracketing the same thing over and over again without managing to strip anything further away). We don’t have to put a name on it, but some have called it pure awareness, consciousness, and the inner self. Epoche provides a way to see this thing analytically, and meditation provides a way to experience nothing but it (to experience nothing but experience itself).
So when clone of saturn says you already are the unquestioned core, this is what they are pointing at, something so small and hard and fundamental to how we know that we can’t question it in any meaningful way. And this exposes another way of seeing the difference between rationality and postrationality (or at least a difference of emphasis): the rationalist project seems to me to either deny this hard, unquestionable thing or make universal assumptions about it, and the postrationalist project sees it as a free variable we must learn to work with in different contexts.
But without a clear verbal explanation, not only do I have no good reason at all to believe that “there’s a ‘there’ there”… but neither do you!
I may well have knowledge of things through experience that I cannot verbalize well or explain to myself in a systematized way. To suggest that I can only have such knowledge if I can explain it is to assume against the point I’m making in the original post; you are free to disagree with that point, but I want to make clear that this is a difference of assumptions, not a difference of reasoning from a shared assumption.
Please see the moderation guidelines. I choose to enforce a particular norm I spell out and I’m the ultimate arbiter of that. If anything I am too generous to people and let them get away with a lot of bullshit before I put a stop to things. This is not to say I never make errors, but if I think you made insufficient effort to respond in a good faith way to advance the conversation, understand the other person, and respond in a way that is not simply reacting in frustration, trying to score points, or otherwise speak to some purpose other than increasing mutual understanding, then your comment will be deleted. If you don’t like my garden you can always go talk somewhere else.