Can’t you just pull the plug before it can run any simulations?
They were in black text.
I saw two more posts that I’ve already read. I have Unread Only checked. I think there’s some problem with the unread filter.
Several years ago, back before I deconverted and learned about Less Wrong, I sometimes used this without trying- I would pray to “God,” and “God” would usually make better decisions than my intuitive judgements- not because of a higher power (it would be impossible to simulate a being more intelligent than myself) but because I was really simulating myself, minus several cognitive restraints.
After I left religion, I stopped doing that, because I basically thought praying was beneath me- although now that I’ve read this post, I will start doing it again. But recently, I’ve been doing something similar. I simulate a being who has never encountered our universe before, and I explain to it various aspects of ordinary life, finding what does and doesn’t make sense. There have been some interesting reactions, such as: “But why would they believe without evidence?” “They insist that they can rely on faith-” “Don’t use the f-word!” or “You’re telling me that people decide who they are going to spend their entire lives with based on WHO THEY WANT TO HAVE SEX WITH?” It can be pretty helpful.
This post was very rational.
I checked- it was already on, and I tried turning it off and back on. The post still isn’t showing up.
I’m on my phone and I can’t find the complaint button, so I’ll post this here, since someone on the LW team has to see it. In the three recommended posts at the top of the home screen, I saw The Best Textbooks on Every Subject by lukeprog, and read it. A few hours later, I saw the same article back in the recommended posts, supposedly unread. I clicked on it and scrolled through it so I could remove it from the recommended posts. It came back again. And again. Can someone help?
Maybe just find several reputable diets that might work, start on one, and switch to another if you start regaining weight. I’m not sure this would work, but it could at least get around the problem of your body getting used to any single diet.
My brain is reacting to this information with extreme shock.
“That cannot possibly be true,” says my brain. “No one could ever be subject to that effect, no one with even the slightest shred of sanity...”
Then my brain remembers what humanity is like.
“Oh, wait, yeah, that seems pretty likely.”
There’s no guarantee that death will be destroyed. If we make Unfriendly AI, then humanity is gone. If we start nuclear war, then humanity is gone. As Eliezer discusses here: https://www.youtube.com/watch?v=D6peN9LiTWA - while we might hope to do whatever we can to stop death, and while we might have that as our end goal, that does not justify a belief that we will succeed.
On hpmor.com, when HPMOR was separated into six PDFs, this was the final chapter of Book One of HPMOR… is it supposed to be that way or not?
How are you supposed to do this? I know that it could be useful in many situations, but after reading the Sequences and looking at CFAR resources, I’m not able to doublethink. If I find that a fact is true, I can refuse to think about its truth, I can act as if it weren’t true to a certain degree, but I can’t actually bring myself to change my beliefs without evidence even when it’s better to believe a lie. How are we supposed to use the Dark Arts?
If the zombies are writing these consciousness papers, then they would have to have our beliefs, and they would strongly believe that THEY were conscious. So how do we know that we’re conscious? If we weren’t, we would still think we were, so there’s really no way to determine if we’re actually the zombies.
While the guess that seems to have the highest probability is the most important to test, anything with moderately high probability should be tested as well, as long as it doesn’t take up too many resources. This is particularly important when experiments take a long time: if Hypothesis A is more likely than Hypothesis B, but testing either would take 3 years, you don’t want to test only A and risk wasting 3 years when you could test both at the same time and determine whether either was correct.
It’s probably best not to update based on expertise. Doing so would usually improve accuracy, since the experts are more likely to be right than chance, or than most people’s opinions, but it stops anyone from forming anti-expert opinions. Accuracy isn’t as important as discovery, and the only way anyone can discover anything new is by finding things that seem probable despite disagreeing with the experts. If you update too much just because of who believes something, you’ll very rarely make any scientific progress.
What about Eliezer? He founded Less Wrong- why isn’t he part of the team anymore?
I was wondering- what happened on June 16, 2017? Most of the users on Less Wrong, including Eliezer, seem to have “joined” at that point, but Less Wrong was created on February 1, 2009, and I’ve seen posts from before 2017.
Is there a 2018 or 2019 survey anywhere? I tried to find it, and I’ve seen some things from both you and Yvain, but I can’t find any surveys past this one.
Zyzzx Prime could always do either:
1. No rulers; every single member votes on every issue
2. Select scientists (not leading scientists, of course, just average ones) and have them work on genetic engineering. No one can know who they are, and they work at minimum wage. (Of course, it could be hard to convince them to do this.)
From what I’ve seen, most people seem to argue for two-boxing, and the one-boxers usually just say that Omega needs to think you’ll be a one-boxer, so you should precommit even if it later seems irrational… I haven’t seen this exact argument yet, but I might just not have read enough.