Yes, many people understand this fallacy on some level:
Nisan
It seems to me that this addresses two very different purposes for moral judgments in one breath.
A possible role for deontic morality is assigning blame or approbation. For this it is necessary to take the agent’s intent and level of knowledge into consideration. Blaming or approving people for their actions is part of the enforcement process in society.
I’m not trying to justify deontology; I’m observing that this feature and others (like the use of reference classes and the bit about keeping promises) make deontology well-suited to (or a natural feature of) prerational societies.
As long as the simulations which involve terrible suffering constitute a tiny proportion of the simulations, your response ought to be the same as if there is only one copy of you and it has a tiny probability of suffering terribly – which is just like real life.
ETA: What you ought to worry about is what will happen to you after the AI is done with the simulation.
When you make a decision that results in fewer people living than might have lived, Phil Goetz calls that “killing a person”. If you are an aid recipient, then giving up your aid and your life to save another person will not change the number of people who live, so it doesn’t count as “killing a person”.
If, however, you have the means to save two people with the aid you’re receiving, then you’re “killing a person” by not sacrificing your life—assuming your life counts as much as anyone else’s.
Someone doesn’t like Bach because he was traumatized by an exposure to classical music at a tender age? Give me a break. Music is like languages, not math—the surest way to learn to like Bach is full immersion at a young age, not a graduated curriculum that starts from “lower” forms of music.
Cool, I got a 90. This quiz actually tests your calibration on estimating the veracity of a very special class of statement, for which your prior is .5 and which is often deliberately tricky. To give a (made-up) example:
“Henry VI defeated Richard III at the Battle of Bosworth Field in 1485.” (It was Henry VII, not Henry VI.)
This class of statement doesn’t show up that often in real life.
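Calibration on statements like these can be scored mechanically. A minimal sketch (the confidences and truth values below are made up for illustration, and the Brier score is just one common choice of calibration metric):

```python
# Hypothetical quiz answers: (stated confidence that the statement
# is true, whether the statement was actually true).
answers = [
    (0.9, True),
    (0.6, False),
    (0.8, True),
    (0.5, True),
]

def brier_score(answers):
    """Mean squared error between confidence and outcome.

    0.0 is perfect; always answering 0.5 scores 0.25; lower is better.
    """
    return sum((p - (1.0 if truth else 0.0)) ** 2
               for p, truth in answers) / len(answers)

print(round(brier_score(answers), 3))  # 0.165
```

A well-calibrated guesser who knows nothing beyond the 0.5 prior would answer 0.5 everywhere and score exactly 0.25; tricky statements punish overconfidence, which is what this quiz seems designed to measure.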
He was raised Jewish with the idea that it is unclean to have animals in the home.
Where is he from, if you don’t mind my asking? The Jewish cultures in the United States that I’m familiar with are okay with pets.
The penultimate paragraph about our beliefs isn’t about Bayesianism so much as heuristics and biases. Unless you were a Bayesian from birth, for at least part of your life your beliefs evolved in a crazy fashion not entirely governed by Bayes’ theorem. It is for this reason that we should be suspicious of beliefs based on assumptions we’ve never scrutinized.
Is it true that people process things so differently?
Yes. I’d love to list a bunch of first- and second-hand anecdotal evidence for this, but anecdotal evidence is not great evidence. Instead, consider the example of synaesthesia, and the fact that synaesthetes can live for decades without realizing that not all people are synaesthetes. It’s easy not to notice huge differences in the way people’s minds work.
I have fond childhood memories of many hours tracing the circuit diagram of the adding circuit. :) God, I was so nerdy. I wanted to know how a computer worked, and that book helped me avoid a mysterious answer to a mysterious question. Learning, in detail, how a specific logic circuit works really drove home how much I had yet to learn about the rest of the workings of a computer.
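The kind of adding circuit those diagrams show can be sketched in code; this is an illustrative one-bit full adder chained into a ripple-carry adder, not the specific circuit from the book:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out).

    sum = a XOR b XOR carry_in; carry_out = majority(a, b, carry_in).
    """
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return sum_bit, carry_out

def ripple_add(x, y, width=8):
    """Add two unsigned integers by chaining full adders bit by bit,
    the way a ripple-carry adder wires them in hardware."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(13, 29))  # 42
```

Tracing how the carry bit propagates from stage to stage is exactly the exercise the circuit diagram invites, just with wires instead of variables.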
I would hesitate under any circumstances to take a dose of 30 mg/kg.
Likewise, you’re not obliged to feel a “smart” feeling that you don’t like, as long as you’re smart enough to remember what good advice it might have given you.
On the other hand, it can be useful to encourage smart feelings in order to motivate you to achieve your values. If you want to do this, then the second and fifth cells in the last row can be “yes/yes”.
Right. Sometimes well-meant but unsolicited suggestions don’t do anyone any good.
Here’s a puzzle that involves time travel:
Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your workshop.
My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?
I stipulated that you’re committed to realizing the future because otherwise, the problem would be too easy.
I’m assuming that if you act contrary to what you see in the machine, fate will intervene. So if you’re committed to being contrary, we know something is going to occur to frustrate your efforts. Most likely, some emergency is going to occur soon which will keep you away from your workshop for the next 24 hours. This knowledge alone is a prior for what the future will hold.
I was thinking of a closed time-like curve governed by general relativity, but I don’t think that tells you anything. It should depend on your commitment, though.
I could tell you that time travel works by exploiting closed time-like curves in general relativity, and that quantum effects haven’t been tested yet. But yes, that wouldn’t be telling you how to handle probabilities.
So, it looks like this is a situation where the prior you were born with is as good as any other.
Right. Materialism tells us that we’re probably going to die and it’s not going to be okay; the right way to feel good about it is to do something about it.
Agreed. In fact, this is even stated in the post:
You are the way you are because of two things:
the laws that describe your soul-pieces or particles, whatever those laws may be, and
the way they’re put together.
ETA: Perhaps you are even saying that the first item should be struck from that list. I’d agree with you.
This is my reaction too. This is a decision involving Omega in which the right thing to do is not update based on new information. In decisions not involving Omega, you do want to update. It doesn’t matter whether the new information is of an anthropic nature or not.