I like this reading and don’t have much of an objection to it.
This is a bad argument for transhumanism; it proves way too much. I’m a little surprised that this needs to be said.
Consider: “having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology.” But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.
As for “common sense”: in many human societies it was “common sense” to own slaves, to beat your children, and so on. Today it’s “common sense” to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it’s mostly the memes that were most virulent when you were growing up, and it’s clearly unreliable as a guide to moral behavior in general.
Figuring out the right thing to do is hard, and it’s hard for comprehensible reasons. Value is complex and fragile; you were the one who told us that!
In the direction of what I actually believe: I think there’s a huge difference between preventing a bad thing from happening and making a good thing happen, e.g. I don’t consider preventing an IQ drop equivalent to raising IQ. The boy has had an IQ of 120 his entire life and we want to preserve that, but the girl has had an IQ of 110 her entire life and we want to change that. Preserving and changing are different, and preserving vs. changing people in particular is morally complicated. Again, the argument Eliezer uses here is bad and proves too much:
Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.
Consider: “either it’s better to be male than female, in which case we should transition all women to men. Or it’s better to be female than male, in which case we should transition all men to women.”
What I can appreciate about this post is that it’s an attempt to puncture bad arguments against transhumanism, and if it had been written more explicitly to do that as opposed to presenting an argument for transhumanism, I wouldn’t have a problem with it.
This whole conversation makes me deeply uncomfortable. I expect to strongly disagree at pretty low levels with almost anyone else trying to have this conversation, I don’t know how to resolve those disagreements, and meanwhile I worry about people seriously advocating for positions that seem deeply confused to me and those positions spreading memetically.
For example: why do people think consciousness has anything to do with moral weight?
Relevant reading: gwern’s The Narrowing Circle. He makes the important point that moral circles have actually narrowed in various ways, and also that it never feels that way because the things outside the circle don’t seem to matter anymore. Two straightforward examples are gods and our dead ancestors.
Does anyone else get the sense that it feels vaguely low-status to post in open threads? If so I don’t really know what to do about this.
This makes sense, but I also want to register that I viscerally dislike “controlling the elephant” as a frame, in roughly the same way as I viscerally dislike “controlling children” as a frame.
Huh. Can you go into more detail about what you’ve done and how it’s helped you? Real curious.
I think the original mythology of the rationality community is based around cheat codes
A lot of the original mythology, in the sense of the things Eliezer wrote about in the Sequences, is about avoiding self-deception. I continue to think this is very important, but think the writing in the Sequences doesn’t do a good job of teaching it.
The main issue I see with the cheat code / munchkin philosophy as it actually played out on LW is that it involved a lot of stuff I would describe as tricking yourself, or the rider fighting against / overriding the elephant, e.g. strategies like attempting to reward yourself for the behavior you “want” in order to fix your “akrasia.” None of the strategies along these lines (e.g. Beeminder) worked for me when I experimented with them, and the whole time my actual bottleneck was that I was very sad and very lonely and was distracting myself from and numbing that (which accounted for a huge portion of my “akrasia”; the rest was poor health, sleep and nutrition in particular).
This question feels confused to me but I’m having some difficulty precisely describing the nature of the confusion. When a human programmer sets up an IRL problem they get to choose what the domain of the reward function is. If the reward function is, for example, a function of the pixels of a video frame, IRL (hopefully) learns which video frames human drivers appear to prefer and which they don’t, based on which such preferences best reproduce driving data.
You might imagine that with unrealistic amounts of computational power IRL might attempt to understand what’s going on by modeling the underlying physics at the level of atoms, but that would be an astonishingly inefficient way to reproduce driving data even if it did work. IRL algorithms tend to have things like complexity penalties that make it possible to select, e.g., a “simplest” reward function out of the many reward functions that could reproduce the data (this is a prior, but a pretty reasonable and justifiable one as far as I can tell), and even with large amounts of computational power I expect it would still not be worth using a substantially more complicated reward function than necessary.
IRL does not need to answer this question along the way to solving the problem it’s designed to solve. Consider, for example, using IRL for autonomous driving. The input is a bunch of human-generated driving data, for example video from inside a car as a human drives it or more abstract (time, position, etc.) data tracking the car over time, and IRL attempts to learn a reward function which produces a policy which produces driving data that mimics its input data. At no point in this process does IRL need to do anything like reason about the distinction between, say, the car and the human; the point is that all of the interesting variation in the data is in fact (from our point of view) being driven by the human’s choices, so to the extent that IRL succeeds it is hopefully capturing the human’s reward structure wrt driving at the intuitively obvious level.
In particular a large part of what is selecting the level at which to work is the human programmer’s choice of how to set up the IRL problem, in the selection of the format of the input data, the selection of the format of the reward function, and in the selection of the format of the IRL algorithm’s actions.
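To make this concrete, here is a minimal sketch (in Python; the features, numbers, and crude scoring rule are all made up for illustration, and this isn’t any particular IRL algorithm) of the two points above: the programmer fixes the domain of the reward function by choosing a feature representation, and a complexity penalty selects the simplest candidate reward that distinguishes the demonstrations from background behavior.

```python
# Illustrative toy, not a real IRL algorithm: the programmer picks the level
# (a small feature vector per state, not atoms), and a penalized score picks
# the simplest reward that reproduces the demonstrations.

import numpy as np

rng = np.random.default_rng(0)

# The programmer's choice: rewards are linear in these features of a state.
# This is where the "level" at which the reward lives gets selected.
def features(state):
    lane_offset, speed = state
    return np.array([-abs(lane_offset),        # stay centered in the lane
                     -abs(speed - 1.0),        # drive near the target speed
                     rng.normal(scale=1e-3)])  # an irrelevant noise feature

# Fake "human driving data": states the demonstrator tends to occupy.
demos = [(rng.normal(scale=0.05), 1.0 + rng.normal(scale=0.05)) for _ in range(200)]
# States sampled uniformly, standing in for what a random policy would visit.
background = [(rng.uniform(-2, 2), rng.uniform(0, 2)) for _ in range(200)]

def fit_score(w):
    """How much better this candidate reward rates demo states than background states."""
    demo_r = np.mean([w @ features(s) for s in demos])
    back_r = np.mean([w @ features(s) for s in background])
    return demo_r - back_r

def complexity(w):
    """A crude complexity penalty: prefer rewards that use fewer features."""
    return np.sum(np.abs(w))

# Search a small grid of candidate reward weights and keep the best penalized
# score -- the "simplest reward function that reproduces the data."
candidates = [np.array([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
best = max(candidates, key=lambda w: fit_score(w) - 0.1 * complexity(w))
print("selected reward weights:", best)  # expect roughly [1, 1, 0]
```

The noise feature is there just to show the penalty doing its job: it gets left out of the selected reward even though including it costs almost nothing in fit.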
In any case, in MIRI terminology this is related to multi-level world models.
Thanks for the mirror! My recommendation is more complicated than this, and I’m not sure how to describe it succinctly. I think there is a skill you can learn through practices like circling which is something like getting in direct emotional contact with a group, as distinct from (but related to) getting in direct emotional contact with the individual humans in that group. From there you have a basis for asking yourself questions like, how healthy is this group? How will the health of the group change if you remove this member from it? Etc.
It also sounds like there’s an implicit thing in your mirror that is something like “...instead of doing explicit verbal reasoning,” and I don’t mean to imply that either.
I appreciate the thought. I don’t feel like I’ve laid out my position in very much detail so I’m not at all convinced that you’ve accurately understood it. Can you mirror back to me what you think my position is? (Edit: I guess I really want you to pass my ITT which is a somewhat bigger ask.)
In particular, when I say “real, living, breathing entity” I did not mean to imply a human entity; groups are their own sorts of entities and need to be understood on their own terms, but I think it does not even occur to many people to try in the sense that I have in mind.
(For additional context on this comment you can read this FB status of mine about tribes.)
There’s something strange about the way in which many of us were trained to accept as normal that two of the biggest transitions in our lives—high school to college, college to a job—get packaged in with abandoning a community. In both of those cases it’s not as bad as it could be because everyone is sort of abandoning the community at the same time, but it still normalizes the thing in a way that bugs me.
There’s a similar normalization of abandonment, I think, in the way people treat break-ups by default. Yes, there are such things as toxic relationships, and yes, I want people to be able to just leave those without feeling like they owe their ex-partner anything if that’s what they need to do, but there are two distinct moves being bucketed here. I’ve been lucky enough to get to see two examples recently of what it looks like for a couple to break up without abandonment: they mutually decide that the relationship isn’t working, but they don’t stop loving each other at all throughout the process of getting out of the relationship, and they stay in touch with the emotional impact the other is experiencing throughout. It’s very beautiful, and seeing it I feel a lot of hope that things can be better.
What I think I’m trying to say is that there’s something I want to encourage that’s upstream of all of your suggestions, which is something like seeing a community as a real, living, breathing entity built out of the connections between a bunch of people, and being in touch emotionally with the impact of tearing your connections away from that entity. I imagine this might be more difficult in local communities where people might end up in logistically important roles without… I’m not sure how to say this succinctly without using some Val language, but like, having the corresponding emotional connections to other community members that ought to naturally accompany those roles? Something like a woman who ends up effectively being a maid in a household without being properly connected to and respected as a mother and wife.
Yes, absolutely. This is what graduate school and CFAR workshops are for. I used to say both of the following things back in 2013-2014:
that nearly all of the value of CFAR workshops came from absorbing habits of thought from the instructors (I think this less now, the curriculum’s gotten a lot stronger), and
that the most powerful rationality technique was moving to Berkeley (I sort of still think this but now I expect Zvi to get mad at me for saying it).
I have personally benefited a ton over the last year and a half through osmosing things from different groups of relationalists—strong circling facilitators and the like—and I think most rationalists have a lot to learn in that direction. I’ve been growing increasingly excited about meeting people who are both strong relationalists and strong rationalists and think that both skillsets are necessary for anything really good to happen.
There is this unfortunate dynamic where it’s really quite hard to compete for the attention of the strongest local rationalists, who are extremely deliberate about how they spend their time and generally too busy saving the world to do much mentorship, which is part of why it’s important to be osmosing from other people too (also for the sake of diversity, bringing new stuff into the community, etc.).
I think your description of the human relationship to heroin is just wrong. First of all, lots of people in fact do heroin. Second, heroin generates reward but not necessarily long-term reward; kids are taught in school about addiction, tolerance, and other sorts of bad things that might happen to you in the long run (including social disapproval, which I bet is a much more important reason than you’re modeling) if you do too much heroin.
Video games are to my mind a much clearer example of wireheading in humans, especially the ones furthest in the fake achievement direction, and people indulge in those constantly. Also television and similar.
In particular, you shouldn’t force yourself to believe that you’re attractive.
And I never said this.
But there’s a thing that can happen when someone else gaslights you into believing that you’re unattractive, which makes it true, and you might be interested in undoing that damage, for example.
There’s a thing MIRI people talk about, about the distinction between “cartesian” and “naturalized” agents: a cartesian agent is something like AIXI that has a “cartesian boundary” separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it’s much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don’t have a toy model that deserves to be called “naturalized AIXI.”
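To make the boundary concrete, here’s a toy sketch (Python; entirely illustrative, not AIXI or anything from MIRI’s actual work) of the cartesian picture: the agent and the environment are separate objects that only interact through observations and actions, and the agent’s own machinery never appears inside the thing it’s modeling.

```python
# A caricature of a cartesian agent: beliefs about the environment, actions on
# the environment, and nothing of the agent itself inside the environment.

class Environment:
    def __init__(self):
        self.state = 0

    def observe(self):
        return self.state

    def step(self, action):
        self.state += action

class CartesianAgent:
    def __init__(self):
        self.beliefs = {}  # beliefs *about the environment*, never about itself

    def update(self, observation):
        self.beliefs["state"] = observation

    def act(self, target=3):
        # Contemplate each action as a free variable and take the best one given
        # your beliefs -- which only makes sense because the agent isn't itself
        # part of the state it's reasoning about.
        return max([-1, +1], key=lambda a: -abs(self.beliefs["state"] + a - target))

env, agent = Environment(), CartesianAgent()
for _ in range(5):
    agent.update(env.observe())
    env.step(agent.act())
print(env.state)  # 3: the agent steers the state to its target

# A naturalized agent has no such clean split: its beliefs, its deliberation,
# and this very loop would all be events inside the environment, which is
# exactly the part this toy can't represent.
```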
There’s a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you’re a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.
The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!
I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There’s a large class of propositions whose truth value is heavily affected by how much you believe (and by “believe” I mean “alieve”) them—e.g. propositions about yourself like “I am confident” or even “I am attractive”—and I think the LW zeitgeist doesn’t really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.
There’s an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim “the best strategy is to always write down your true probability at any time,” but the argument that’s supposed to establish this has a hidden assumption that the act of doing so doesn’t affect the situation the prediction market is about, and it’s easy to write down prediction markets violating this assumption, e.g. “the last bet on this prediction market will be under 50%.”
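Here’s a tiny sketch of that last example, under the simplifying assumption that your written probability just is the last bet on the market: whatever you report, the truth your report creates points the other way, so “always write down your true probability” has no stable answer.

```python
# Toy model of the self-referential market above (assumption: your report is
# the last bet). There is no honest fixed point.

def proposition(last_bet):
    """'The last bet on this prediction market will be under 50%.'"""
    return last_bet < 0.5

for report in [0.1, 0.3, 0.49, 0.5, 0.7, 0.9]:
    truth = proposition(report)            # the truth your own report creates
    consistent = (report >= 0.5) == truth  # did you report in the direction of that truth?
    print(f"report {report:.2f} -> proposition is {truth}; "
          f"your report points the {'right' if consistent else 'wrong'} way")

# Every line prints "wrong way": reporting under 50% makes the proposition true
# (so you should have reported high), and reporting 50% or over makes it false
# (so you should have reported low). That's the hidden assumption failing.
```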
I do not. Fortunately, you can just test it empirically for yourself!
General advice that I think basically applies to everybody is to try to solidly lock down sleep, diet, and exercise (not sure what order these go in exactly).
Random sleep tips:
Try to sleep in as much darkness as possible. Blackout curtains + a sleep mask is as dark as I know how to easily make things, although you might find the sleep mask takes some getting used to. Just a sleep mask is already pretty good.
Blue light from screens at night disrupts your sleep; use f.lux or equivalent to straightforwardly deal with this.
Lower body temperature makes it easier to sleep, so take hot showers at night, which cause your body to cool down in response.
If you’re having trouble falling asleep at a consistent time, consider supplementing small (on the order of 0.1 mg) amounts of melatonin. (Edit: see SSC post on melatonin for more, which recommends 0.3 mg.) A lot of the melatonin you’ll find commercially is 3-5 mg and that’s too much. I deal with this by biting off small pieces, not sure if that’s a good idea. (Melatonin stopped working for me in February anyway, not sure what’s up with that.)
I have thoughts about diet and exercise but fewer general recommendations; the main thing you want here is something that feels good and is sustainable to you.
Other than that, something feels off to me about the framing of the question. I feel like I’d have to know a lot more about what kind of person you are and what kind of things you want out of your life to give reasonable answers. Everything is just very contextual.