I don’t think it is obvious that you have to multiply by some other number.
I don’t know how conscious experience works. Some views (such as Eliezer’s) hold that it’s binary: either a brain has the machinery to generate conscious experience or it doesn’t, and there aren’t gradations of consciousness where some brains are “more sentient” than others. This is not intuitive to me, and it’s not my main guess. But it’s on the table, given my state of knowledge.
Most moral theories, and moral folk theories, hold to the common-sense claim that “pain is bad, and extreme pain is extremely bad.” There might be other things that are valuable or meaningful or bad. We don’t need to buy into hedonistic utilitarianism wholesale to think that pain is bad.
Insofar as we care about reducing pain, and it might be that brains are either conscious or not, it might totally be the case that we should be “adding up the experience hours” when attempting to minimize pain.
And in particular, after we understand the details of the information processing involved in producing consciousness, we might think that weighting by neuron count is as dumb as weighting by “the thickness of the copper wires in the computer running an AGI.” (Though I sure don’t know, and neuron count seems like one reasonable guess amongst several.)
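To make concrete how much this choice matters, here is a toy comparison I put together; the neuron counts are rough order-of-magnitude figures and the whole scenario is purely illustrative, not anything from the post:

```python
# Toy comparison of two weighting schemes for one hour of comparable pain.
# Neuron counts are rough, order-of-magnitude figures; the point is only
# that the choice of scheme moves the answer by a factor of several hundred.

NEURONS = {"human": 86e9, "chicken": 2.2e8}  # approximate neuron counts

def weight(animal: str, scheme: str) -> float:
    """Moral weight of one experience-hour, relative to a human hour."""
    if scheme == "binary":        # consciousness is all-or-nothing:
        return 1.0                # every conscious hour counts equally
    if scheme == "neuron_count":  # weight scales linearly with neuron count
        return NEURONS[animal] / NEURONS["human"]
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("binary", "neuron_count"):
    print(f"{scheme}: chicken hour = {weight('chicken', scheme):.4f} human hours")

# Output:
#   binary: chicken hour = 1.0000 human hours
#   neuron_count: chicken hour = 0.0026 human hours
```

The two schemes disagree by a factor of a few hundred, so which one you pick dominates the estimate.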
I mean, I agree that if you want to entertain this as one remote possibility, sure, go ahead, I am not saying morality could not turn out to be weird. But clearly you can construct arguments of similar quality for at least hundreds, if not thousands or tens of thousands, of distinct conclusions.
If you currently want to argue that this is true, and a reasonable assumption on which to make your purchase decisions, I would contend that yes, you are also very very confused about how ethics works.
Like, you can have a mutual state of knowledge about the uncertainty and the correct way to process that uncertainty. There are many plausible arguments for why random.org will spit out a specific number if you ask it for a random number, but it is also obvious that you are supposed to have uncertainty about what number it outputs. If someone shows up and claims to be confident that random.org will spit out a specific number next, they are obviously wrong, even if there is actually a non-trivial chance that the number they were confident in gets picked.
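As a quick illustration of that last point (my own sketch, treating “ask random.org for a number from 1 to 100” as a uniform draw), the confident point prediction does get picked sometimes, but only about 1% of the time:

```python
import random

# Sketch: model "ask random.org for a number from 1 to 100" as a uniform draw.
# A confident point prediction has a non-trivial chance of coming true,
# but it is still the wrong epistemic state: it hits only ~1% of the time.

random.seed(0)
confident_guess = 37   # hypothetical number someone is "confident" about
trials = 100_000
hits = sum(random.randint(1, 100) == confident_guess for _ in range(trials))
print(f"confident guess was right in {hits / trials:.1%} of trials")  # ~1.0%
```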
The top-level post calculates an estimate in expectation. If you calculate something in expectation, you are integrating over your uncertainty. If you estimate that a randomly chosen publicly traded company is worth 10x its ticker price, you might not be definitely wrong, but it is clear that you need to have a good argument, and if you do not have one, then you are obviously wrong.
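For concreteness, here is the kind of integration I mean, with made-up credences and hypothesis names purely to show the mechanics: the expected weight you end up with is driven almost entirely by how you split probability across the weighting hypotheses, which is exactly the part that needs a good argument.

```python
# Illustrative expected-value calculation over hypotheses about how to weight
# a chicken's experience-hours. Credences and weights are made up to show the
# mechanics of integrating uncertainty, not to argue for any particular number.

hypotheses = {
    # name: (credence, chicken weight per experience-hour, relative to a human)
    "binary_and_chickens_conscious":     (0.3, 1.0),
    "binary_and_chickens_not_conscious": (0.2, 0.0),
    "neuron_count_weighting":            (0.3, 0.0026),
    "some_other_scaling":                (0.2, 0.05),
}

# Credences over mutually exclusive hypotheses should sum to 1.
assert abs(sum(p for p, _ in hypotheses.values()) - 1.0) < 1e-9

expected_weight = sum(p * w for p, w in hypotheses.values())
print(f"expected chicken weight: {expected_weight:.3f}")  # ~0.311
```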