We’ll make it a double territory.
DanielLC
I think drinking is also about the idea that it might cause problems to people who aren’t fully grown. I don’t know if that’s true, but I don’t think that matters politically.
Deontology is funny like that. Giving each of a million people a one-in-a-million chance of dying is fine, but killing one person is not. Not even if you make it a lottery so that each of them has a one-in-a-million chance of dying, since you’re still killing someone.
Is that actually illegal or just against the rules? I would expect it would be perfectly legal to start your own, although I could see why people might object if you don’t at least limit it to make sure it stays at safe levels. And if you do limit it, you’ll have all those advantages you said, but not the obvious one of not having cheaters. It’s just as hard to tell if someone’s doping more than they should as it is to tell if they’re doing it at all.
I think babies are more person-like than the animals we eat for food. I’m not an expert in that though. They’re still above someone in a coma.
It’s not about communication. It’s not even about sensing. It’s about subjective experience. If your mind worked properly but you just couldn’t sense anything or do anything, you’d have moral worth. It would probably be negative and it would be a mercy to kill you, but that’s another issue entirely. From what I understand, if you’re in a coma, your brain isn’t entirely inactive. It’s doing something. But it’s more comparable to what a fish does than a conscious mammal.
Someone in a coma is not a person anymore. In the same sense that someone who is dead is not a person anymore. The problem with killing someone is that they stop being a person. There’s nothing wrong with taking them from not a person to a slightly different not a person.
If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them
A mass murderer is still a person. They think and feel like you do, except probably with less empathy or something. The world is better off without them, and getting rid of them is a net gain. But it’s not a Pareto improvement. There’s still one person that gets the short end of the stick.
If they really don’t care about humans, then the AI will use all the resources at its disposal to make sure the paradise is as paradisaical as possible. Humans are made of atoms, and atoms can be used to do calculations to figure out what paradise is best.
Although I find it unlikely that the S team would be that selfish. That’s a really tiny incentive to murder everyone.
There are reasons not to kill someone in a coma who didn’t want to be killed in that situation, even if you disagree with them about what makes life morally valuable. If they agreed to have the plug pulled once it became clear they wouldn’t wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what’s at stake, given permission, you should be able to take out their organs early, hastening their death by a short time, in exchange for a better chance of saving someone else.
And why are you already conjecturing about what we would have wanted? We’re not dead yet. Just ask us what we want.
A person in solitary still has experiences. They just don’t interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.
I wouldn’t call luminiferous aether just plain wrong. Asking what it’s made from doesn’t make a lot of sense, but saying that that means it doesn’t exist would be like saying electrons don’t exist because they don’t have a volume.
Personally, I don’t trust the concept of values. It’s already so complex and fragile, I’m afraid it doesn’t actually exist.
It’s something of a simplification. People are not ideal utility-maximizers. But they’re close enough that it works well.
There are various ways to get infinite and infinitesimal utilities, but they don’t matter in practice. Everything except the largest potential source of infinite utility matters only as a tiebreaker, and such ties occur with probability zero.
Cardinal numbers also wouldn’t work well even as infinite numbers go. You can’t have a set with half an element, or with a negative number of elements. And is there a difference between a 50% chance of uncountable utilons and a 100% chance?
How badly could a reasonably intelligent follower of the selfish creed, “Maximize my QALYs”, be manhandled into some unpleasant parallel to a Pascal’s Mugging?
They’d be just as subject to it as anyone else. It’s just that instead of killing 3^^^3 people, they threaten to torture you for 3^^^3 years. Or offer 3^^^3 years of life or something. It comes from having an unbounded utility function. Not from any particular utility function.
The first is certainly good for teaching math, but in general they both have advantages and disadvantages. It’s good to have a lot of methods for solving problems, but it’s also important to have general methods that can each solve many problems.
Here’s how I look at it. Suppose you want to prove A, so you gather evidence until either you can prove it at the p = 0.05 level or A is definitely false. Let E be the event that the experiment proves A, and !E the event that it disproves A. Then P(A|E) = 0.95 and P(A|!E) = 0. Assume a prior of P(A) = 0.5.
P(A|E) = 0.95
P(A|!E) = 0
P(A) = 0.5
By conservation of expected evidence, P(A|E)P(E) + P(A|!E)P(!E) = P(A) = 0.5
0.95 P(E) = 0.5
P(E) = 0.526
So the experiment is more likely to succeed than fail: even though A has even odds of being true, you can “prove” it more than half the time. It sounds like cheating somehow, but the thing to remember is that there are false positives and no false negatives. All you’re ever concluding is “probably A” versus “definitely not A”, and “probably A” is the more likely verdict.
But that relies on P(A|E) = 0.95, which was an assumption here. Had that probability been different, P(E) would have been different too.
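The arithmetic above can be checked with a short script. This is just a sketch of the conservation-of-expected-evidence identity with the numbers from the comment; the variable names are mine, not from the original.

```python
# Conservation of expected evidence:
#   P(A) = P(A|E) * P(E) + P(A|!E) * P(!E)
p_A = 0.5             # prior that A is true
p_A_given_E = 0.95    # a positive result "proves" A at the p = 0.05 level
p_A_given_notE = 0.0  # a negative result definitively refutes A

# Solve  p_A = p_A_given_E * p_E + p_A_given_notE * (1 - p_E)  for p_E
p_E = (p_A - p_A_given_notE) / (p_A_given_E - p_A_given_notE)
print(round(p_E, 3))  # 0.526: the experiment succeeds more often than it fails

# Sanity check: the expected posterior equals the prior
assert abs(p_A_given_E * p_E + p_A_given_notE * (1 - p_E) - p_A) < 1e-12
```

As the comment notes, changing the assumed P(A|E) changes P(E): a stricter significance level (a larger P(A|E)) drives P(E) back down toward the prior.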
We have values besides inclusive genetic fitness.
That number is meaningless to me. Can you tell me micromorts per mile or something?
There are other possibilities. Maybe it’s pushing on dark matter, so it works from the reference frame of the dark matter.
What I always feel like a character should do in that situation (technology permitting) is to turn on a tape recorder, fight the villain, and listen to what they have to say afterwards. And then try to figure out how to fix the problems the villain is pointing out instead of just feeling bad about themselves.
I guess that sort of works for this. You could write down what the voice in your head is saying, and then read it when you’re not feeling terrible about yourself. And discuss it with other people and see what they think.
The problem with just trusting someone else is that unless you are already on your deathbed, and sometimes not even then, there is nothing you can say where their response will be “killing yourself would probably be a good idea”. There is no correlation between their response and the truth, so asking them is worthless.
I got an asset for Unity published. It’s called HexGrid. It’s basically an engine for tactical RPGs/wargames on a hexagonal grid.
You assume people will commit suicide if their life is not worth living. People have a strong instinct against suicide, so I doubt they’d do it unless their life is not worth living by a wide margin.