Personally, I just dismiss scrupulosity as an error. I don’t need a justification for doing this, any more than I need a justification for concluding that if, when doing some mathematics, I derive a contradiction, then I must have made an error somewhere. Call this the absurdity heuristic, call it a strong prior, but obsessing over the unknowable potentially enormous consequences of every breath I take is an obvious failure mode, and I don’t do obvious failure modes. Instead, I do what looks like the right thing for me to do, the thing that only I will do. (That is just a rough description of a rule of thumb, not something with a detailed philosophical analysis behind it.)
This probably makes me a bad person by the standards of the typical EA. I have no interest in dissuading them from their calling, but it is not my calling.
This is an imaginary scenario of someone creating a pocket universe. How does this bear on the question of whether we are in reality living in such a created universe?
Seems we are talking past each other.
It seems to me that we are talking directly to each other’s statements.
and I am tired of explaining why in my view this way of thinking is nice to start with, but eventually harmful; I’ve been doing it on this blog for over 5 years.
And I’ve been contesting it even longer, although less frequently.
You too. I am happy to leave a conversation as soon as it appears that everything has been said, whether or not anyone has been convinced of anything. But I would still like to know how you would set about diagnosing and repairing a faulty bicycle. It is in simple, everyday matters like these that one can best see what pays rent and what does not.
Epicycles worked for astronomy and astrology for some centuries.
Nothing works for astrology.
Belief in God among religious people gets you to socialize and be accepted by the community, with all the associated perks, and so is useful, and therefore “good”, if thriving in your community is what you value.
In practice, you are expected to actually believe, not merely pretend to believe—that is, lie your way through every religious ritual.
If self-consistency is what you are after, faith would not pay rent and you need to find a “better” way to make sense of the world.
Not a “better” way, but a better way. Reality has no inverted commas.
As it happens, my bicycle has developed a couple of mechanical problems. I already have a rough idea of what needs to be done, but the first thing I need to do when I have the time is examine it to discover exactly what needs to be done—to discover what is true about the faults, and so be able to replace exactly the parts that need to be replaced, clean the parts that need cleaning, lubricate what needs lubricating, and adjust what needs adjusting. This talk about usefulness is an evasion of reality. What is useful to me regarding my bicycle is the truth about it, nothing less.
Whatever you find useful, if you are serious about it, you will find that you need to know the truth about it, to know what will achieve your purposes and what will not.
What you do changes who you are.
That includes whatever you do to avoid this happening.
What makes a model good, or to allude to a much-quoted aphorism of Box that I find rather irritating, useful? What do you want to do with a model, that you can rate a model on its fitness for that purpose?
It would be interesting to hear whether people recognize the above ideas as something familiar
Very familiar, from multiple sources, all the way back to reading Korzybski as a teenager. “Consciousness of abstracting” is what he called it.
The Suffering Golem is no thought experiment. There are actual people who live with great suffering. Some of them wish to die, but some do not. Should you kill someone who is in untreatable pain, against their definitely expressed, compos mentis wishes? Should such an act be legally not murder but justifiable homicide, justified by the amount of suffering thereby prevented?
I say no. What do others say?
A dictionary will tell you that a question is a sentence worded or expressed so as to elicit information, and it seems to me that that is exactly how the word is used. There is something that one does not know and wishes to know, and a question, addressed to someone who might know, is one means of satisfying that want.
I don’t see what the big deal is.
The quote (in my understanding of it) is not about “instinct”, i.e. not knowing why you did something. Quite the opposite: it is seeing things clearly enough to make the right choice quickly and knowingly. Recognising what must be done and why, not dithering in “choice”. And this is recommended as the way to live, or to strive to live. Achieving anything requires action, action requires choice, and choices must actually be made, cutting off paths as the sculptor cuts away marble, destroying all the sculptures that could be made except for the one that he has decided to make. The sculptor who sits beside a block of marble, merely contemplating the great works that he might make but never raising his chisel to the stone, is failing as a sculptor.
The thing that you are calling “freedom” seems to be the inability to act, to make a choice. Why would this be a desirable thing?
Here’s something I’ve quoted a couple of times before on LessWrong. Time to bring it out again:
“You pride yourself on freedom of choice. Let me tell you that this very freedom is one of the factors that most confuse and undermine you. It gives you full play for your neuroses, your surface reactions and your aberrations. What you should aim for is freedom from choice! Faced with two possibilities, you spend time and effort to decide which to accept. You review the whole spectrum of political, emotional, social, physical, psychological and physiological conditioning before coming up with the answer which, more often than not, does not even satisfy you then. Do you know, can you comprehend, what freedom it gives you if you have no choice? Do you know what it means to be able to choose so swiftly and surely that to all intents and purposes you have no choice? The choice that you make, your decision, is based on such positive knowledge that the second alternative may as well not exist.”
-- Rafael Lefort, “The Teachers of Gurdjieff”, ch. XIV
Every choice you make removes that choice from you. If your first thought on making a decision is “Was that the right decision?” then you did not make a decision. When you have truly made a decision, the decision is no longer in front of you, it is behind, receding into the past. Every step in the dance moves on, cutting off from realisation all the steps that were not made in order to make this one.
No-one is granted a God’s eye view of the whole garden of forking paths, from where you might experience all the different possibilities together without ever having to choose among them. You only get a single run-through of the game.
Yes, you’re right.
Your second example, 1 > 1⁄2 > 1⁄4 > … > 0, taken in the order listed, is a well-order (order type ω+1). Leaving out the 0 still leaves a well-order (order type ω); to make it non-well-ordered, take the same set under the usual ordering of the reals, in which 1 > 1⁄2 > 1⁄4 > … is an infinite descending chain.
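For reference, the standard definition in play:

```latex
(S,\prec)\ \text{is a well-order}
\iff \text{every nonempty } A \subseteq S \text{ has a } \prec\text{-least element}
\iff S \text{ contains no infinite strictly } \prec\text{-descending chain.}
```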
I’m raising a question more than making an argument. Are there futures that would seem completely alien or even monstrous to present-day people, yet that their inhabitants would consider a vast improvement over our present, their past? Would these hypothetical descendants regard as mere paperclipping an ambition to fill the universe forever with nothing more than people comfortably like us?
“Of Life only is there no end; and though of its million starry mansions many are empty and many still unbuilt, and though its vast domain is as yet unbearably desert, my seed shall one day fill it and master its matter to its uttermost confines. And for what may be beyond, the eyesight of Lilith is too short. It is enough that there is a beyond.”
As I understand it, there is not yet a good theory of integration on the surreals. Partial progress has been made, but there are also some negative results establishing limitations on the possibilities. Here is a recent paper.
If humanity is replaced by “descendants” which are completely alien or even monstrous from our point of view, did humanity “survive”?
Og see 21st century. Og say, “Where is caveman?”
3-year-old you sees present-day you...
Present you sees 90-year-old you...
90-year-old you sees your 300-year-old great great grandchildren...
These are extremes that I have no experience with. I have had no childhood trauma. I have never had, sought, nor had suggested to me any form of psychological diagnosis or therapy. I have never had depression, mania, anxiety attacks, SAD, PTSD, imaginary voices, hallucinations, or any of the rest of the things that psychiatrists see daily. I have had no drug trips. I laugh at basilisks.
It sometimes seems to me that this mental constitution, to me a very ordinary one, makes me an extreme outlier here.
My recursive suggestion won’t work. One can devise a UTM that gives the shortest code to itself, by the usual reflexivity constructions. The computability theory textbook method looks better. But what theoretical justification can be given for it? Why are we confident that bad explanations are not lurking within it?
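The “usual reflexivity constructions” here are essentially Kleene’s recursion theorem, whose most familiar concrete instance is a quine. A minimal Python sketch (comment line aside, printing `s % s` reproduces the program’s own two lines; the same self-referential trick is what lets one build a UTM that assigns itself a short code):

```python
# Comment aside, this two-line program prints its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```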
Actually, perhaps we shouldn’t be. It has already been remarked by Eliezer that Solomonoff induction gives what looks like undue weight to hypotheses involving gigantic numbers with short descriptions, e.g. 3^^^3, despite the fact that, looking at the world, such numbers have never been useful for anything but talking about gigantic numbers, and proving what are generally expected to be very generous upper bounds for some combinatorial theorems.
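For the record, 3^^^3 is 3↑↑↑3 in Knuth’s up-arrow notation. A minimal sketch of the recursion, which illustrates just how short a description such gigantic numbers have (illustrative only: the function below terminates for toy inputs, while `up(3, 3, 3)` itself is astronomically beyond any computation):

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b: exponentiation, iterated n times."""
    if n == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1               # base case at each arrow level
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 2, 2))  # 3↑↑2 = 3^3 = 27
print(up(2, 3, 2))  # 2↑↑↑2 = 2↑↑2 = 4
```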
The definition of Solomonoff induction is indifferent to the choice of universal Turing machine, because the difference it makes is a bounded number of bits. Two calculations of Kolmogorov complexity using different UTMs will always agree to within a number of bits c, where c depends on both of the UTMs (and measures how easily each can simulate the other).
c can be arbitrarily large. If you pack your UTM full of preferred hypotheses given short codings (e.g. “let it be a human brain”), then you will get those hypotheses back out of it. But that did not come from Solomonoff induction. It came from your choice of UTM.
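A toy sketch of that invariance bound (`machine_u`, `machine_v`, and the two-bit `COMPILER` prefix below are illustrative stand-ins, not real universal machines): machine_u pays a fixed overhead of `len(COMPILER)` bits to simulate machine_v, so their complexities differ by at most that constant.

```python
from itertools import product

def bitstrings():
    """Yield every bit string, shortest first."""
    n = 0
    while True:
        for bits in product("01", repeat=n):
            yield "".join(bits)
        n += 1

# Two toy "machines" mapping programs (bit strings) to outputs.
# machine_v outputs its program verbatim.
def machine_v(prog):
    return prog

# machine_u runs machine_v on the rest of the program when it sees the
# COMPILER prefix; anything else is an invalid program. COMPILER plays
# the role of "a description of machine_v written for machine_u".
COMPILER = "11"
def machine_u(prog):
    return machine_v(prog[len(COMPILER):]) if prog.startswith(COMPILER) else None

def K(machine, x, max_len=12):
    """Length of the shortest program that makes `machine` output x."""
    for prog in bitstrings():
        if len(prog) > max_len:
            return None
        if machine(prog) == x:
            return len(prog)

# Invariance: K_u(x) <= K_v(x) + |COMPILER| for every x.
for x in ["", "0", "1011"]:
    assert K(machine_u, x) <= K(machine_v, x) + len(COMPILER)
```

Here c is just `len(COMPILER)`; a UTM “packed full of preferred hypotheses” corresponds to making that translation prefix do a great deal of decoding work.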
This raises the question: if, contra the theoretical indifference to choice of UTM, the choice does matter, how should the choice be made? One might consider a UTM having minimal description length, but which UTM do you use to determine that, before you’ve chosen one? Suppose one first chooses an arbitrary UTM T0, then determines which UTM T1 is given the shortest description length by T0, then generates T2 from T1 in the same way. Does this necessarily converge on a UTM that in some definable sense has no extra hypotheses stuffed into it? Or does this process solve nothing?
Alternatively, you might go with some standard construction of a UTM out of a computability theory textbook. Those look minimal enough that no complex hypotheses would be unjustly favoured, but it seems to me there is still a theoretical gap to be plugged here.
That is an area in which it appears that experiences differ a great deal. I doubt that Said would recognise these “sub-personalities”, and for that matter, neither do I. I experience myself as a coherent person, made of parts that do not behave like persons.