However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
This sounds a bit like the dust speck vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we have to deal with scope insensitivity. On utilitarian aggregation, I recommend section V of the following paper, which shows why the alternatives are absurd: http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
Hello everyone!
I’m 21 years old and study medicine plus Bayesian statistics and economics. I’ve been lurking LW for about half a year, and I now feel sufficiently updated to participate actively. I highly appreciate this high-quality gathering of clear thinkers working towards a sane world. Therefore I often pass LW posts on to people with promising predictors in order to shorten their inferential distance. I’m interested in fixing science, Bayesian reasoning, future scenarios (how likely is dystopia, i.e. astronomical amounts of suffering?), machine intelligence, game theory, decision theory, reductionism (e.g. of personal identity), population ethics, and cognitive psychology. Thanks for all the lottery winnings so far!