Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.
I’m sorry, what? In this discussion? That seems like an egregious conflict of interest. You don’t get to unilaterally decide that my comments are made in bad faith based on your own interpretation of them. I saw which comment of mine you deleted and honestly I’m baffled by that decision.
If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.
and to be pointed about it I think believing you can identify the criterion of truth is a “comforting” belief that is either contradictory or demands adopting non-transcendental idealism
Actually… I was going to edit my comment to add that I’m not sure that I would agree that I “think we can know truth well enough to avoid the problem of the criterion” either, since your conception of this notion seems to intrinsically require some kind of magic, leading me to believe that you somehow mean something different by this than I would. But I didn’t get around to it in time! No matter.
If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.
That’s not my only disagreement. I also think that your specific proposed solution does nothing to “address” the problem (in particular because it just seems like a bad idea, in general because “addressing” it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing “advanced philosophy”. This is why the “powerful but dangerous tool” rhetoric is wrongheaded. It’s not a powerful tool. It doesn’t grant any ability to step outside your own head that you didn’t have before. It’s just a trap.
I don’t have to solve the problem of induction to look out my window and see whether it is raining. I don’t need 100% certainty, a four-nines probability estimate is just fine for me.
Where’s the “just go to the window and look” in judging beliefs according to “compellingness-of-story”?
Of course not, and that’s the point.
The point… is that judging beliefs according to whether they achieve some goal, or according to anything else, is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?
Indeed, which is why metarationality must not forget to also include all of rationality within it!
Can you explain this in a way that doesn’t make it sound like an empty applause light? How can I take compellingness-of-story into account in my probability estimates without violating the Kolmogorov axioms?
To say a little more on danger, I mean dangerous to the purpose of fulfilling your own desires.
Yes, that’s exactly the danger.
Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it’s a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being wary of generally unsafe conditions that cannot be used and dangerous tools that are only dangerous if used by the unskilled.
Thinking you’re skilled enough to use some “powerful but dangerous” tool is exactly the problem. You will never be skilled enough to deliberately adopt false beliefs without suffering the consequences.
But surely… if one is aware of these reasons… then one can simply redo the calculation, taking them into account. So we can rob banks if it seems like the right thing to do after taking into account the problem of corrupted hardware and black swan blowups. That’s the rational course, right?
There’s a number of replies I could give to that.
I’ll start by saying that this is a prime example of the sort of thinking I have in mind, when I warn aspiring rationalists to beware of cleverness.
Because there’s no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.
I don’t know what “directly” means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.
So much the worse for schizophrenics. And so?
“Well we can’t go below 20%, but we can influence what that 20% consists of, so let’s swap that desire to believe ourselves to be better than anyone else into some desire that makes us happier and is less likely to cause needless conflict. Also, by learning to manipulate the contents of that 20%, we become better capable at noticing when a belief comes from the 20% rather than the 80%, and adjusting accordingly”.
I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.
The only sensible response to the problem of induction is to do our best to track the truth anyway. Everybody who comes up with some clever reason to avoid doing this thinks they’ve found some magical shortcut, some powerful yet-undiscovered tool (dangerous in the wrong hands, of course, but a rational person can surely use it safely...). Then they cut themselves on it.
Two points:
- Advancing the conversation is not the only reason I would write such a thing; it actually serves a different purpose: protecting other readers of this site from forming a false belief that there’s some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.
- It doesn’t prove the OP’s point at all. The OP was about beliefs (and “making sense of the world”). But I can have the belief “postrationality is poisonous and harmful” without having to post a comment saying so, therefore whether such a comment would advance the conversation need not enter into forming that belief, and is in fact entirely irrelevant.
Well, this is a long comment, but this seems to be the most important bit:
The general point here is that the human brain does not have magic access to the criteria of truth; it only has access to its own models.
Why would you think “magic access” is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.
All that you say about beliefs often being critically mistaken due to, e.g., emotional attachment is of course true, and that is why we must be ruthless in rejecting any reasons for believing things other than truth—and if we find that a belief is without reasons after that, we should discard it. The problem is that this seems to be exactly the opposite of what “postrationality” advocates: using the lack of “magic access” to the truth as an excuse to embrace non-truth-based reasons for believing things.
At the risk of putting words in your mouth, it sounds instead as if you think we can assess the criterion of truth, which we cannot and have known we cannot for over 2000 years.
But of course we can, as evidenced by the fact that people make predictions that turn out to be correct, and carry out plans and achieve goals based on those predictions all the time.
We can’t assess whether things are true with 100% reliability, of course. The dark lords of the matrix could always manipulate your mind directly and make you see something false. They could be doing this right now. But so what? Are you going to tell me that we can assess ‘telos’ with 100% reliability? That we can somehow assess whether it is true that believing something will help fulfill some purpose, with 100% reliability, without knowing what is true?
The problem with assessing beliefs or judgements with anything other than their truth is exactly that the further your beliefs are from the truth, the less accurate any such assessments will be. Worse, this is a vicious positive feedback loop if you use these erroneous ‘telos’ assessments to adopt further beliefs, which will most likely also be false, and make your subsequent assessments even more inaccurate.
As usual, Eliezer put it best in Ethical Injunctions.
I will at least agree with you that it’s dangerous to folks who are not already rationalists
Being a rationalist isn’t a badge that protects you from wrong thinking. Being a rationalist is the discipline and art of correct thought. When you stop practising correct thought, when you stop seeking truth, being a rationalist won’t save you, because at that moment you aren’t one.
People used to come to this site all the time complaining about the warning about politics: Politics is the Mind-Killer. They would say “for ordinary people, sure it might be dangerous, but we rationalists should be able to discuss these things safely if we’re so rational”, heedless of the fact that the warning was meant not for ordinary people, but for rationalists. The message was not “if you are weak, you should avoid this dangerous thing; you may demonstrate strength by engaging the dangerous thing and surviving” but “you are weak; avoid this dangerous thing in order to become strong”.
Thank you for providing this information.
However, if this is really what ‘postrationality’ is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.
Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.
You appear to be saying that since it’s impossible to be absolutely certain that any particular thing is the truth, it is therefore OK to substitute some other, easier-to-satisfy criterion instead. This is an incredibly weak justification for anything.
Thus, for example, rationality is important to the purpose of predicting and understanding the world often because we, through experience, come to know it to be correlated with making predictions that later happen, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers in terms of creating the world we would like to later find ourselves in.
This talk of alternative criteria having equal value sounds very good and cosmopolitan, but actually we know exactly what happens when you stop using truth as your criterion for “truth”. Nothing good.
This is an important point, one which was stressed in EY’s description of “Crocker’s Rules” (and apparently missed by many, as mentioned later on): imposing something on oneself—a discipline—may be very useful, but is a very different thing to expecting that thing of others, and the justifications often do not carry over.
But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
I’d just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.
The world is full of content. Attention is what is scarce.
Robin has published at least two papers that seem like necessary background reading on this topic:
Note that the logarithmic market scoring rule comes with a built-in method of subsidy (in fact, it requires such a subsidy), and guarantees that there will always be nonzero liquidity.
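For concreteness, here is a minimal sketch of how the LMSR’s built-in subsidy and always-on liquidity work, in Python. The two-outcome market, the liquidity parameter b = 100, and the helper names (`cost`, `price`, `buy`) are all illustrative assumptions on my part, not anything taken from Robin’s papers. The sponsor’s worst-case loss is bounded by b·log(N), which is exactly the subsidy, and the quoted price is always strictly between 0 and 1, so there is always something to trade against.

```python
import math

def cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, b, i):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b). Always in (0, 1)."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def buy(q, b, i, shares):
    """Trader pays C(q') - C(q) to buy `shares` of outcome i; returns (new state, payment)."""
    new_q = list(q)
    new_q[i] += shares
    return new_q, cost(new_q, b) - cost(q, b)

# Illustrative two-outcome market with liquidity parameter b = 100 (an assumption).
b = 100.0
q = [0.0, 0.0]
print(price(q, b, 0))          # 0.5 before any trades
q, paid = buy(q, b, 0, 50)
print(price(q, b, 0), paid)    # price moves up; trader paid C(q') - C(q)
print(b * math.log(2))         # worst-case loss = the subsidy the sponsor commits
```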
Removing fees is of course a valuable and necessary first step. There’s never any reason to do something so self-defeating as deliberately destroying liquidity by imposing fees.
What exactly is “moral facts exist” supposed to mean? This whole approach smells off to me—it looks like you’re trying to manipulate your confusion as if it were a known quantity. What metaethical baggage is brought to the table by supposing that “do moral facts exist” is a coherent question at all (assuming you do mean something specific by it)?
The VNM theorem is best understood as an operator that applies to a preference relation ≺ that obeys the axioms and rewrites that relation in the form A ≺ B ⟺ E[U(A)] < E[U(B)], where U is the resulting “utility function” producing a real number. So it rewrites your preferences into ones that compare “expected utilities”.
To apply this to something in the real world, a human or an AI, one must decide exactly what ≺ refers to and how the lotteries A and B are interpreted.
We can interpret ≺ as the actual revealed choices of the agent. I.e., when put in a position to take action to cause either A or B to happen, what do they do? If the agent’s thinking doesn’t terminate (within the allotted time), or it chooses randomly, we can interpret that as indifference, A ∼ B. The possibilities are fully enumerated, so completeness holds. However, you will find that any real agent fails to obey some of the other axioms.
We can interpret ≺ as the expressed preferences of the agent. That is to say, present the hypothetical and ask what the agent prefers. Then we say that A ≺ B if the agent says they prefer B; we say that B ≺ A if the agent says they prefer A; and we say that A ∼ B if the agent says they are equal or can’t decide (within the allotted time). Again completeness holds, but you will again always find that some of the other axioms will fail.
In the case of humans, we can interpret ≺ as some extrapolated volition of a particular human. In which case we say that A ≺ B if the person would choose B over A if only they thought faster, knew more, were smarter, were more the person they wished they would be, etc. One might fancifully describe this as defining ≺ as the person’s “true preferences”. This is not a practical interpretation, since we don’t know how to compute extrapolated volition in the general case. But it’s perfectly mathematically valid, and it’s not hard to see how it could be defined so that completeness holds. It’s plausible that the other axioms could hold too—most people consider the rationality axioms generally desirable to conform to, so “more the person they wished they would be” plausibly points in a direction that results in such rationality.
For some AIs whose source code we have access to, we might be able to just read the source code and define ≺ using the actual code that computes preferences.
There are a lot of variables here. One could interpret the domain of ≺ as being a restricted set of lotteries. This is the likely interpretation in something like a psychology experiment where we are constrained to only asking about different flavours of ice cream or something. In that case the resulting utility function will only be valid in this particular restricted domain.
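To make the “expressed preferences” interpretation concrete, here is a minimal sketch in Python. The restricted domain of lotteries, the canned answers, and the helper names (`ask_agent`, `build_preferences`, and so on) are all hypothetical illustrations; the point is only that completeness holds by construction when every pair gets some answer, while an axiom like transitivity can still fail, in which case no utility function U exists.

```python
import itertools

# A "lottery" here is just a label; in general it would be a probability
# distribution over outcomes. Hypothetical restricted domain (ice cream flavours).
LOTTERIES = ["chocolate", "vanilla", "strawberry"]

# Canned "expressed preferences" for illustration only: note the deliberate cycle
# chocolate < vanilla < strawberry < chocolate, which breaks transitivity.
ANSWERS = {
    ("chocolate", "vanilla"): "second",     # agent says it prefers vanilla
    ("vanilla", "strawberry"): "second",    # agent says it prefers strawberry
    ("chocolate", "strawberry"): "first",   # agent says it prefers chocolate
}

def ask_agent(x, y):
    """Stand-in for presenting the hypothetical and asking the agent.

    Returns "first", "second", or "indifferent" (the last also covers
    'cannot decide within the allotted time')."""
    return ANSWERS.get((x, y), "indifferent")

def build_preferences(lotteries):
    """Ask about every unordered pair, so completeness holds by construction."""
    return {(x, y): ask_agent(x, y) for x, y in itertools.combinations(lotteries, 2)}

def strictly_prefers(prefs, x, y):
    """True iff the recorded answers say the agent strictly prefers y to x (x < y)."""
    if (x, y) in prefs:
        return prefs[(x, y)] == "second"
    return prefs.get((y, x)) == "first"

def transitivity_violations(prefs, lotteries):
    """Triples with x < y and y < z but not x < z: the axiom real agents tend to break."""
    return [
        (x, y, z)
        for x, y, z in itertools.permutations(lotteries, 3)
        if strictly_prefers(prefs, x, y)
        and strictly_prefers(prefs, y, z)
        and not strictly_prefers(prefs, x, z)
    ]

if __name__ == "__main__":
    prefs = build_preferences(LOTTERIES)
    # Non-empty output: these expressed preferences admit no expected-utility rewrite.
    print(transitivity_violations(prefs, LOTTERIES))
```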
> I’d probably still have the same “speak up and share advice” module, but I’d add a function to the front that injects some gentleness and some status-dynamic-defusing words and phrases.
In this case, I have to object to this advice. You can tie yourself in knots trying to figure out what the most gentle way to say something is, and end up being perceived as condescending etc. anyway for believing that X is “obvious” or that someone else “should have already thought of it” (as again, what is obvious to one person may not be obvious or salient to another). Better to just state the obvious.
I think you’re right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.
I do think it’s important to have someone clearly “running the place”. A BDFL, if you like.
Please no. The comments on SSC are for me a case study in exactly why we don’t want to discuss politics.
Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. “Auto-aggregation” would be bad however.
Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri’s suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.
I don’t believe that the basilisk is the primary reason for LW’s brand rust. As I see it, we squandered our “capital outlay” of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the Sequences) by doing essentially nothing about a large influx of new users interested only in “debating philosophy” who do not even read the Sequences (Eternal November). I, personally, stopped commenting almost entirely quite a while ago, because doing so is no longer rewarding.
“For a true Bayesian, it is impossible to seek evidence that confirms a theory”
The important part of the sentence here is seek. This isn’t about falsificationism, but about the fact that no experiment you can do can confirm a theory without also having some chance of falsifying it. So any observation can only provide evidence for a hypothesis if a different outcome could have provided the opposite evidence.
For instance, suppose that you flip a coin. You can seek to test the theory that the result was HEADS, by simply looking at the coin with your eyes. There’s a 50% chance that the outcome of this test would be “you see the HEADS side”, confirming your theory (P(HEADS | you see HEADS) ~ 1). But this only works because there’s also a 50% chance that the outcome of the test would have shown the result to be TAILS, falsifying your theory (P(HEADS | you see TAILS) ~ 0). And in fact there’s no way to measure the coin so that one outcome would be evidence in favour of HEADS (P(HEADS | measurement) > 0.5) without the opposite result being evidence against HEADS (P(HEADS | ¬measurement) < 0.5).
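This is just conservation of expected evidence: P(H) = P(H | E)·P(E) + P(H | ¬E)·P(¬E), so if one outcome of a test would raise your probability, the other outcome must lower it. A quick numeric check in Python (the slightly-unreliable-eyes numbers below are made up purely for illustration):

```python
# Conservation of expected evidence: P(H) = P(H|E)*P(E) + P(H|~E)*P(~E).
# Illustrative numbers for a slightly unreliable glance at the coin (assumed values).
p_h = 0.5             # prior P(HEADS)
p_e_given_h = 0.99    # P(you see HEADS | HEADS): your eyes are not quite perfect
p_e_given_not_h = 0.01

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e                  # ~0.99: evidence for HEADS
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # ~0.01: evidence against HEADS

# The posteriors average back to the prior, so no "test" can only confirm.
assert abs(p_h_given_e * p_e + p_h_given_not_e * (1 - p_e) - p_h) < 1e-12
print(p_h_given_e, p_h_given_not_e)
```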
This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such “tests” of rain dancing don’t work are well known and don’t need to be recapitulated here.
This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.
Yes, carrying out experiments to determine reality relies on Occam’s razor. It relies on Occam’s razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam’s razor. Because Occam’s razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and bayesian statistics).
I’m going to stress this point because I noticed others in this thread make this seemingly elementary map-territory confusion before (though I didn’t comment on it there). In fact it seems to me now that conflating these things is maybe actually the entire source of this debate: “Occam’s razor is true” is an entirely different thing from “I have access to universally compelling arguments for Occam’s razor”, as different as a raven and the abstract concept of corporate debt. The former is true and useful and relevant to epistemology. The latter is false, impossible and useless.
Because the former is true, when I say “in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments”, what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam’s razor or “truth” even is. No appeals or arguments about whether universally compelling arguments for Occam’s razor exist can change that fact.
(Why am I so lucky as to be a mind whose thinking relies on Occam’s razor in a world where Occam’s razor is true? Well, animals evolved via natural selection in an Occamian world, and those whose minds were more fit for that world survived...)
But honestly, I’m just regurgitating Where Recursive Justification Hits Bottom at this point.
This seems like a gross oversimplification to me. The mind is a complex dynamical system made of locally reinforcement-learning components, which doesn’t do any one thing all the time.
And this seems simply wrong. You might as well say “epistemic rationality and chemical action-potentials were the same all along”. Or “jumbo jets and sheets of aluminium were the same all along”. A jumbo jet might even be made out of sheets of aluminium, but a randomly chosen pile of the latter sure isn’t going to fly.
As for your examples, I don’t have anything to add to Said’s observations.