On some further thought, although the quote you shared is relevant, it is not exactly the part of the book that I was referring to. I was referring to the teleportation thought experiment in chapter 8 Expect Yourself:
One day, there’s a hitch. The vaporisation module in London malfunctions and Eva – the Eva who is in London, anyway – feels like nothing’s happened and that she’s still in the transportation facility. A minor inconvenience. They’ll have to reboot the machine and try again, or maybe leave it until the following day. But then a technician shuffles into the room, carrying a gun. He mumbles something along the lines of ‘Don’t worry, you’ve been safely teletransported to Mars, just like normal, it’s just that the regulations say that we still need to … and, look here, you signed this consent form …’ He slowly raises his weapon and Eva has a feeling she’s never had before, that maybe this teletransportation malarkey isn’t quite so straightforward after all.
The point of this thought experiment, which is called the ‘teletransportation paradox’, is to unearth some of the biases most of us have when we think about what it means to be a self.
...Is the Eva on Mars (let’s call her Eva2) the same person as Eva1 (the Eva still in London)? It’s tempting to say, yes, she is: Eva2 would feel in every way as Eva1 would have felt had she actually been transported instantaneously from London to Mars. What seems to matter for this kind of personal identity is psychological continuity, not physical continuity.* But then if Eva1 has not been vaporised, which is the real Eva? I think the correct – but admittedly strange – answer is that both are the real Eva.
My disagreement relating to the no-cloning theorem aside, I have another disagreement with Seth’s conclusion here. Claiming that the correct answer is that they are both the same person really stretches the idea of selfhood. If the teletransportation paradox is physically possible (if the machine destroys the body upon scanning it, how could the “malfunction” be possible?), I find Derek Parfit’s answer to the teletransportation paradox (also discussed on YouTube) more persuasive.
Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is no further fact. What matters, to Parfit, is simply “Relation R”, psychological connectedness, including memory, personality, and so on.
I think your reasoning only works if the fraction of people who use impressive signalling is sufficiently small. If most people use it, everybody starts to price it in. Then, if you apply for a job at skill level X, you have to show impressiveness at skill level X+N, otherwise you won’t get it. The same applies on the dating market. You can still try honest signalling, but your chances will go down. Honest signalling only works if receivers can reliably detect honest signallers.
See also The Evolution of Trust.
I think it works as long as they benefit from the rules and the overall scheme, and trust that you are helping them grow. It helps if the rules get relaxed or extended as they grow up (which may happen fast). Some parents try to create fear of things they believe to be dangerous, or that they want the kids to avoid for other reasons. If kids figure out that these things are not actually dangerous, they will wonder what else they have been lied to about. They may also be mistaken about the danger, so it is important to back your claims up. One example: “Don’t put your finger in the door gap (especially the one at the hinges).” “Why not? Doesn’t look dangerous.” Explain levers by demonstrating with a nut or something else they know is pretty hard.
Is there a public-facing API endpoint for the Algolia search system? I’d love to be able to say to my discord bot “Hey wasn’t there a lesswrong post about xyz?” and have him post a few links
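I don’t know the site’s actual Algolia app ID, search-only key, or index name (the values below are placeholders), but Algolia’s standard REST search endpoint is public-facing by design: a search-only key is meant to be visible in client-side requests, so you can usually lift the real values from the browser’s network tab. A minimal stdlib-only sketch under those assumptions:

```python
import json
from urllib import parse, request

# Placeholder values: the real app ID, search-only key, and index name would
# have to be read from the site's own client-side search requests.
ALGOLIA_APP_ID = "YOUR_APP_ID"
ALGOLIA_SEARCH_KEY = "YOUR_SEARCH_ONLY_KEY"
INDEX_NAME = "posts"

def build_search_request(query, hits_per_page=5):
    """Build Algolia's standard REST query request for a search string."""
    url = (f"https://{ALGOLIA_APP_ID}-dsn.algolia.net"
           f"/1/indexes/{INDEX_NAME}/query")
    headers = {
        "X-Algolia-Application-Id": ALGOLIA_APP_ID,
        "X-Algolia-API-Key": ALGOLIA_SEARCH_KEY,
        "Content-Type": "application/json",
    }
    # Algolia expects the search parameters URL-encoded inside a JSON body.
    params = parse.urlencode({"query": query, "hitsPerPage": hits_per_page})
    return url, headers, json.dumps({"params": params})

def search(query):
    """POST the query and return the parsed JSON (hits live in r['hits'])."""
    url, headers, body = build_search_request(query)
    req = request.Request(url, data=body.encode(), headers=headers,
                          method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A bot could then pull titles and links out of each hit; the exact attribute names in the hits depend on how the index is configured, so inspect one response first.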
And you can trust them to follow these rules even unattended.
There’s a particular kind of lack of self-alignment that leads to feeling like some time is “dead”, so I want to push back against that but also give you some answers.
The deal is that the seemingly dead time is time you need in order to function. Not all of us have bodies capable of the same things: there’ll basically always be someone more productive than you and someone lazier than you to compare yourself against. But what we can say is that bodies need a variety of kinds of rest, not only for things like organ repair but also for healthy upkeep. Your brain, for example, needs time to do things like memory consolidation, and that can’t happen if you’re spending the time cramming new information in.
That said, we can find lots of activities that don’t ask a lot of our bodies and especially of our brains. Simple chores, watching videos, listening to “boring” stories, playing video games, the list goes on.
Not every moment of every day needs to be productive, because not every moment of the day can be productive: one need only be responsible up to one’s physical limits, and beyond that the priority is recovery rather than production. “Dead” time is better framed as time spent actively recovering so you can do more stuff later.
I read the comment you’re responding to as suggesting something like “your impression of Unreal’s internal state was so different from her own experience of her internal state that she’s very confused”.
Very insightful comment, Steven. Putting it that way, I agree with you that the quantum fluctuations (most likely) don’t actually matter for our experience, and yes I was nitpicking.
This quote from Frank Wilczek claims that we have yet to attribute any high-level phenomena to quantum fluctuations:
Consistency requires the metric field to be a quantum field, like all the others. That is, the metric field fluctuates spontaneously. We do not have a satisfactory theory of these fluctuations. We know that the effects of quantum fluctuations in the metric field are usually—in our experience so far, always—small in practice, simply because we get very successful theories by ignoring them! From delicate biochemistry to exotic goings-on at accelerators to the evolution of stars and the early moments of the big bang, we’ve been able to make precise predictions, and have seen them accurately verified, while ignoring possible quantum fluctuations in the metric field. Moreover, the modern GPS system maps out space and time directly. It doesn’t allow for quantum gravity, yet it works very well. Experimenters have worked very hard to discover any effect that could be ascribed to quantum fluctuations in the metric field, or, in other words, to quantum gravity. Nobel Prizes and everlasting glory would attend such a discovery. So far, it hasn’t happened.
(Epistemic status: earworm)
No-one will have the endurance to claim on his insurance / Lloyd’s of London will be loaded when they go! - Tom Lehrer, “We will all go together when we go”
I think I’d phrase the key insight I see in “consequentialism might harm survival” differently: consequentialism is computationally expensive, and sometimes you can’t produce the desired outcome because you don’t have the time, energy, or ability to work out all the details. Thus, short-circuited consequentialism can produce worse results than other moral philosophies.
That being said, fully executed consequentialism can deal with circumstances other approaches might have a harder time with. For example, deontology works well if the rules match the environment you’re operating in. Drop into a new environment and the rules might no longer be well adapted to produce good outcomes. Similarly for virtue ethics: what’s virtuous and produces good outcomes might differ between contexts, and so virtue ethics may struggle more than consequentialism to adapt.
In all cases it seems to be a matter of when the moral calculations were performed. In consequentialism they happen just in time, and so we may fail to do enough of them to generate good results. In others, we do them ahead of time, which means we may have computed the right answer for the wrong situation and not have a good way to generate something better quickly because the mechanism of determining rules or virtues happens over decades or centuries of cultural evolution.
This is great, thanks!
Was wondering if you knew of any sources on how efficacy wanes (or persists) over time for two doses of Moderna? I’m not actually sure whether I need a booster, since I have no clue what baseline I’m working with.
If you flip a coin to make a decision (eg, which path to walk), and it comes up heads, does that mean that the corresponding path corresponds to higher anthropic measure than the tails path?
Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.
Yep, I think CEA has in the past straightforwardly misrepresented and sometimes even lied (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
Yep, I think the situation is closer to what Jeff describes here, though honestly I don’t actually know, since people tend to get cagey when the topic comes up.
Seems worth noting that you probably can’t track everyone, so some people may have opened it but been uncounted. If (as I expect) they work through embedding images, then someone with images disabled by default wouldn’t be counted unless they specifically enabled them. (I am such a person, but didn’t get an email this year.)
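For what it’s worth, the image-based tracking you describe is easy to sketch. Everything here (the host URL, the token scheme, the function name) is hypothetical, but the mechanism is just a per-recipient image URL:

```python
import uuid

TRACKING_HOST = "https://example.com/pixel"  # hypothetical tracking endpoint

def tracking_pixel(recipient_id):
    """Return a 1x1 <img> tag with a URL unique to this recipient.

    When a mail client loads remote images, fetching this URL tells the
    server that this particular recipient opened the email. Clients with
    images disabled never make the request, so those opens go uncounted.
    """
    # Stable per-recipient token, so repeat opens map to the same recipient.
    token = uuid.uuid5(uuid.NAMESPACE_URL, recipient_id)
    return (f'<img src="{TRACKING_HOST}/{token}.gif" '
            f'width="1" height="1" alt="">')
```

The server side is then just a request log keyed by token, which is exactly why a reader with remote images off is invisible to the count.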
If you google “most valuable thing in the world,” it shows you a bunch of lists of fancy items covered with rare gems. When we talk about utility and how much value people can get out of something, I think civil engineering and infrastructure projects are much more valuable than these things. Even infrastructure is useless, though, if its utilitarian value isn’t being put to use; efficiency and practicality also play a huge part. A lot of those Middle East oil countries built a lot of skyscrapers in the middle of nowhere. They probably would’ve stretched their money further if, instead of just copying existing civil engineering designs, they had come up with something that makes more sense for their own local ecosystem.
Hey, I’m currently finalizing my pipeline. I will contact Richard by the end of this week. I hope to deliver a first sketch, with two or three famous posts nicely formatted, and an idea of how to structure the curation process efficiently.
A Hermès Birkin bag costs $10k minimum, $60k on average. They’re super high status. Guest had wanted one ever since she moved to NYC’s Upper East Side. She was walking towards another woman on the sidewalk, and the other woman, instead of getting out of the way, oriented to sort of direct her to walk into a garbage can, then brushed her with her bag on the way past, a bag she thinks was a Birkin. She thinks it was the bag that gave the woman this power.
But the weird thing is, even if you have the money for a bag it’s really hard to get one. There’s a waiting list for the waiting list. Hermes says this is because the bags are so hard to make. They’re made from unusual leathers like crocodile and ostrich, they’re hand stitched, and you have to train for years to make them. Lolno, if they wanted to make more they’ve had 30 years to build up a supply chain. Actually they just want them to be scarce because it makes them higher value.
Guest had a friend give her advice: apparently it might help if her husband went into a store saying he wanted to buy one as a gift. But no luck. Eventually he found one in Japan: they said they didn’t have any, he said he needed it, repeat a few times (which is super rude of him for Japan), and eventually they sold one to him. Another person found the trick was to already be spending thousands of dollars in an Hermès store and, as if it were an afterthought, ask “oh, do you have any Birkin bags?”, at which point they’ll sell you one.
The other weird thing is that the bags are actually kind of underwhelming. They don’t look that great. But host still finds herself impressed, feeling like, because other people love them, maybe there’s something wrong with her that she doesn’t see it. She uses words like sacred and religious to describe the experience of actually holding (maybe just seeing?) one.
Everyone in this episode is aware of how ridiculous they sound.
Huh, I think I hallucinated a result from the TruthfulQA paper where you fine-tuned on most of the dataset but didn’t see gains on the held-out portion.
Okay, new AMA question: have you already done the experiment that I hallucinated? If not, what do you think would happen?