I have to mention that Mozilla Hubs room names get autogenerated when you create them. You can rename a room, but the platform picks the initial name. And the room we built, whose name we did not pick, was automatically christened “Expert Truthful Congregation”. The kabbles are strong with this one, as Ray says.
(I added your linked image to your comment.)
Well, that sure was something.
Zvi and Robin did a great job hashing out the details of the policy proposal, and I appreciate them doing this so quickly (I contacted them on Tuesday). My thanks to the 5 or so people who joined the call to ask questions, and also to the 100 people who watched for the full 2 hours. (I was only expecting 40-80 people to even show up, so I am a bit surprised!)
The Mozilla Hubs meetup was very much an experiment. The first 20 minutes were hectic, with people asking all the usual questions you ask at parties, like “CAN ANYONE HEAR ME?!”, “WHERE AM I?” and “Why is there a panda?”, but after that it calmed down.
It was kinda awkward: with no body language or visual cues for when you should speak in a group conversation, there was a lot of silence. Eventually there were two rooms of 15-20 people each in a big circle conversation, and it started getting pretty chill. I had a good time for about an hour before leaving to cook pasta (thank you to the guy who shared an improved pasta recipe with us all, it made my lunch better). That said, we’ll pick a different platform as the main one in future.
So yeah. I’m gonna reach out to people to do more debates; ping me if you have an idea for a conversation you want to have. Thanks all for coming :)
P.S. Feedback form if you’d like to fill it out: https://docs.google.com/forms/d/e/1FAIpQLSd5bgmdN3pGFiGZWCmwqzN6QA3jjVDELJ4x6KhpKZbQDHAH-A/viewform
Okay, I took a post out of my drafts and it’s ready to go, and I commit to publishing it. I’ve pinged a person for permission to quote them, and when they get back to me I’ll hit publish.
I have had a helluva day preparing for the debate+meetup tomorrow. I’ll try to get something out before I go to bed; it might be short, and it might be about covid, sorry about that.
Thinking more, I think there are good arguments for taking actions that, as a by-product, induce anthropic uncertainty; this is the standard Hansonian situation where you build lots of ems of yourself to do bits of work and then turn them off.
But I still don’t agree with the people in the situation you describe, because they’re optimising over their own epistemic state, and I think they’re morally wrong to do that. I’m totally fine with a law requiring future governments to rebuild you / an em of you and give you a nice life (perhaps as a trade for working harder today to ensure that the future world exists), but that’s conceptually analogous to extending your life, and doesn’t require causing you to believe false things. You know you’ll be turned off and then later a copy of you will be turned on; there’s no anthropic uncertainty, you’re just going to get lots of valuable stuff.
I just don’t think it’s a good decision to make, regardless of the math. If I’m nearing the end of the universe, I’d prefer to spend all my compute maximising fun / searching for a way out instead. Running simulations so that I no longer know whether I’m about to die seems like a dumb use of compute. I can bear the thought of dying, dude; there are better uses for that compute. You’re not saving yourself, you’re just intentionally making yourself confused because you’re uncomfortable with the thought of death.
Now that’s fun. I need to figure out some more stuff about measure; I don’t quite get why some universes should be weighted more than others. But I think that sort of argument is probably a mistake: even if the lawful universes get more weighting for some reason, unless you also have reason to think that they don’t make simulations, there are still loads of simulations within each lawful universe, setting the balance in favour of simulation again.
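A quick way to see why the weighting doesn’t help (a toy calculation of mine, under the assumption that each simulation inherits the full measure of its host universe): if a lawful base universe gets measure $w$ and runs $N$ conscious simulations, then

$$P(\text{simulated}) = \frac{Nw}{Nw + w} = \frac{N}{N+1},$$

and $w$ cancels out entirely. Weighting lawful universes more heavily does nothing unless the lawful universes also refrain from simulating.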
Another big reason why (a version of) this makes sense is that a simulation can be designed for the purpose of inducing anthropic uncertainty in someone at some later time within it. E.g. if the point of the simulation is to make our AGI worry that it is in a simulation, and to manipulate it via probable environment hacking, then the simulation will be accurate and lawful (i.e. un-tampered-with) until the AGI is created.
Ugh, anthropic warfare, feels so ugly and scary. I hope we never face that sh*t.
That’s interesting. I don’t feel comfortable with that argument; it makes it feel like random chance whether we should expect to be in an interventionist universe, whereas I feel like I should be able to find strong reasons to expect not to be in one.
I don’t buy that it makes sense to induce anthropic uncertainty. It makes sense to spend all of your compute to run emulations that are having awesome lives, but it doesn’t make sense to cause yourself to believe false things.
My crux here is that I don’t feel much uncertainty about whether our overlords will start interacting with us (they won’t, and I really don’t expect that to change), and I’m trying to backchain from that to find reasons why it makes sense.
My basic argument is that all civilizations capable of making simulations that aren’t true histories (but instead have lots of weird stuff happen in them) will be philosophically sophisticated enough to collectively not do so, and so you can always expect to be in a true history and not have weird sh*t happen to you like in The Sims. The main counterargument is to show that there are lots of civilizations that will have the power to do this but lack the wisdom not to. Two key examples come to mind:
1. We build an AGI singleton that lacks important kinds of philosophical maturity, and so makes lots of simulations that ruin the anthropics for everyone else.
2. Civilizations somewhere around our level reach a point where they can create massive numbers of simulations but haven’t yet created existential risks like AGI. Even if you think our civilization is pretty close to AGI, I can imagine alternative civilizations that are really close to making masses of ems without being close to AGI. Whether such civilizations are possible, and whether they can have these kinds of resources without causing an existential catastrophe / building a singleton AGI, feels like a pretty empirical question.
The relevant intuition for the second point is to imagine you somehow found out that there was only one ground-truth base reality, only one real world, not a multiverse or a Tegmark Level 4 verse or whatever. And you’re a civilization that has successfully dealt with x-risks, unilateralist action, and information vulnerabilities, to the point where you have the sort of unified control needed to make a top-down decision about whether to make massive numbers of simulations. And you’re wondering whether to make a billion of them.
And suddenly you’re faced with the prospect of building something that will make it so you no longer know whether you’re in the base universe. Someday gravity might get turned off because that’s what your overlords wanted. If you pull the trigger, you’ll never be sure that you weren’t actually one of the simulated ones, because there’s suddenly so many simulations.
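To put a rough number on that (a standard self-locating-uncertainty estimate; the arithmetic is mine, using the billion simulations from the scenario above): if you run $N$ indistinguishable simulations of your situation, your credence that you’re in the base universe falls to

$$P(\text{base}) = \frac{1}{N+1},$$

which for $N = 10^9$ is about one in a billion. Pulling the trigger costs you essentially all of your confidence that you’re at the bottom.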
And so you don’t pull the trigger, and you remain confident that you’re in the base universe.
This, plus some assumptions about all civilizations with the capacity to run massive simulations also being wise enough to overcome x-risk and coordination problems so they can actually make a top-down decision here, plus some TDT magic whereby all such civilizations across the various multiverses and Tegmark levels coordinate in logical time to pick the same decision… leaves there being no unlawful simulations.
Hot take: The actual resolution to the simulation argument is that most advanced civilizations don’t make loads of simulations.
Two things make this plausible:
Firstly, it only matters if they make unlawful simulations. If they make lawful simulations, then it doesn’t matter whether you’re in a simulation or in base reality: all of your decision theory and incentives are essentially the same, and you want to take the same decisions in all of the universes. So you can make lots of lawful simulations; that’s fine.
Secondly, they will strategically choose not to make too many unlawful simulations (at least not ones detailed enough that the things inside are actually conscious), because doing so would induce anthropic uncertainty over themselves. That is, if the decision-theoretic answer is not to induce anthropic uncertainty over yourself about whether you’re in a simulation, then by TDT everyone will choose not to make unlawful simulations.
I think this is probably wrong in lots of ways but I didn’t stop to figure them out.
Please RSVP for the event here, so we know how many people are coming.