“And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”
My own “hold on a second” detector is pinging mildly at that particular bit. Specifically, isn’t there a touch of an observer selection effect there? If the docs had been wrong and you had ended up dying as a result, you wouldn’t have been around to make that deduction, so you’re (well, anyone is) effectively biased toward retroactively observing outcomes in which, if the doctor said you weren’t in a life-threatening situation, you genuinely weren’t?
Or am I way off here?
Okie, and yeah, I imagine you would have noticed.
Also, of course, docs that habitually misdiagnose would presumably be sued into oblivion (or worse) by friends and family of the deceased. I was just unsure about the actual strength of the one consideration I mentioned.
First, questions like “if the agent expects that I wouldn’t be able to verify the extreme disutility, would its utility function be such that it would actually go through with spending the resources to cause the unverifiable disutility?”
The question of whether an entity with such a utility function would even manage to stick around long enough in the first place may itself drop the probabilities by a whole lot.
Perhaps it’s best to restrict ourselves to the case where the disutility is verifiable, but only after the fact (has this agent ever pulled this sort of thing before? etc.), and where that verification doesn’t open up a causal link in the present that would allow other means of preventing the disutility. There’s a lot going on here.
I’m not sure, but maybe the reasoning wouldn’t go case by case; rather, the process would reason by computing the expected utility of following a rule that would result in it being utterly vulnerable to any agent that merely claims to be capable of causing bignum units of disutility.
Reasoning, that is, along the lines of: following such a rule would allow agents in general to order the process around, causing plenty of disutility. And that, in itself, would seem to carry plenty of expected disutility.
However, if after chugging through the math it still didn’t balance out, and the expected disutility from the existence of the disutility threat was greater, then perhaps allowing oneself to be vulnerable to such threats is genuinely the correct outcome, however counterintuitive and absurd it seems to us.
I think this all revolves around one question: Is “disutility of dust speck for N people” = N × “disutility of dust speck for one person”?
This, of course, depends on the properties of one’s utility function.
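To put the question in symbols (my own notation, just to pin it down): with u the disutility of a single speck for a single person, and U(N) the disutility of N people each getting one speck, the linearity claim is

```latex
U(N) \stackrel{?}{=} N \cdot u
```

and a utility function denies it whenever U(N) grows slower than N — say, something like U(N) = u·√N, or any bounded form that levels off for huge N.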
How about this… Consider one person getting, say, ten dust specks per second for an hour vs 10 × 60 × 60 = 36,000 people getting a single dust speck each.
This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second preferable. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1000 people getting a million per second for an hour.
Suffering isn’t linear in dust specks. Well, actually, I’m not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid “linear qualia theory”, I’d be surprised.
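One toy way to make “suffering isn’t linear in dust specks” concrete (a made-up functional form, not a claim about real psychology): let d(k) be one person’s disutility from k specks, with

```latex
d(k) = D_{\max}\left(1 - e^{-k/\tau}\right)
```

so each extra speck hurts less than the one before, and d saturates at D_max. Under anything like this, one person absorbing billions of specks caps out near D_max, while spreading the specks over 1000 people costs up to 1000 · D_max — which matches the preference above.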
But as far as the dust specks vs torture thing in the original question? I think I’d go with dust specks for all.
But that’s one person vs a buncha people with dust specks.
Oh, just had a thought. A less extreme yet quite related real world situation/question would be this: What is appropriate punishment for spammers?
Yes, I understand there’re a few additional issues here that would make it more analogous to, say, the potential torturee planning on deliberately causing all those people a DSE (Dust Speck Event).
But still, the spammer issue gives us a more concrete version, involving quantities that don’t make our brains explode, so considering that may help work out the principles by which these sorts of questions can be dealt with.
Uh… If there’s no such thing as qualia, there’s no such thing as actual suffering, unless I misunderstand your description of Dennett’s views.
But if my understanding is correct, and those views were correct, then wouldn’t the answer be “nobody actually exists to care one way or another?” (Or am I sorely mistaken in interpreting that view?)
I’m pretty sure I wasn’t doing that. ie, I did, given certain assumptions, commit to SPECKS in my reply.
For the record, my current view is that if the choice is between torture and a single speck event total per person for bignum people, I’d go with the SPECKS.
I do not consider the situation as linear, however. ie, two dust specks for one person is not precisely twice as bad as a single dust speck for one person, nor is that exactly as bad as two people each experiencing a single dust speck. In fact, I’d suspect it’d be reasonable to consider that a single dust speck per person total has a finite disutility even in the limiting case of infinite people.
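For that limiting-case claim, here’s a toy example with exactly that property (my own construction, for illustration only):

```latex
U(N) = C\left(1 - (1-\varepsilon)^{N}\right)
```

which increases with every additional person getting a speck, yet stays below C no matter how large N gets — so even infinitely many one-speck recipients carry only finite disutility.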
If the situation instead is “torture vs an additional dust speck per person for bignum people” then I’d want to know how many dust specks per person were already allocated, and as that number increased from 0, I’d probably lean a bit more toward TORTURE. But, of course, I know there’d have to be some value after which it’d really make no difference to add an additional dust speck or not, so back to SPECKS.
If I couldn’t obtain that information, then I’d at least want to know how many others are going to be asked this. ie, is this isolated, or are some number of people going to be “tested” like this, such that if all answered SPECKS the result would be effectively worse than the TORTURE option? If I knew how many would be asked, how many SPECKS answers it would take, and some statistical properties of the askees’ utility functions and so on, then effectively I’d choose randomly, setting the probability of the choice such that the expected utility of the outcome, under the assumption that everyone used that same heuristic, would be maximized; a toy sketch of that rule is below. (This assumes direct communication between all the askees isn’t an option and so on; if it is, that random heuristic wouldn’t be needed.)
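Here’s a minimal sketch of that randomized heuristic. Everything in it is an assumption for illustration: the number of askees, the threshold, the harm numbers, and the toy model where each TORTURE answer causes one torture and enough SPECKS answers pile up into something collectively worse.

```python
# Toy sketch of the "choose randomly, tuning the probability" heuristic.
# All numbers and the harm model below are made up for illustration.
from scipy.stats import binom

M = 1000              # assumed number of people asked the same question
K = 800               # assumed threshold: if >= K answer SPECKS, the piled-up
                      # specks are collectively worse than the tortures
TORTURE_COST = 1.0    # disutility of one torture (arbitrary units)
PILEUP_COST = 5000.0  # assumed disutility of the specks piling past K

def expected_disutility(p):
    """Expected disutility if every askee independently answers
    SPECKS with probability p (the shared-heuristic assumption)."""
    tortures = M * (1 - p) * TORTURE_COST     # each TORTURE answer -> one torture
    pileup_prob = 1 - binom.cdf(K - 1, M, p)  # P(#SPECKS answers >= K)
    return tortures + pileup_prob * PILEUP_COST

# Pick the answer probability that minimizes expected disutility.
best_p = min((i / 1000 for i in range(1001)), key=expected_disutility)
print(best_p, expected_disutility(best_p))
```

With these made-up numbers the minimizer lands strictly between 0 and 1: answer SPECKS often enough to avoid most of the tortures, but not so often that the pileup threshold is likely to be crossed.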
If even that option was disallowed, well, I’d have to estimate based on whatever distribution of possibilities for each of those quantities represented my state of knowledge at the time.
THIS is the point at which I get a bit stumped. If we say, though, “you have to make a decision, make it right now, even if it isn’t that great,” I’m still going to go with SPECKS, though admittedly with far less confidence that it’s correct than what I said above.
Of course, now that I have a fallback last choice given no further knowledge or ability to deliberate, doing something about the whole situation that set up this issue would be something to investigate heavily. Also, I’d want to be developing a better model of exactly how to measure the amount of effective suffering per “unit” of suffering. I suspect it’d be some function of that plus how much it interferes with/overflows other possible states, etc etc etc.
As far as your overall point about people avoiding the decision: while it may be wise to avoid the habit of hiding from any uncomfortable decision, this is a bit different. I really can’t see it as entirely unreasonable to ask for a bit more information about an edge case that was constructed to prod at our normal decision-making methods, that was asked as a hypothetical thought experiment, AND that describes a type of situation I’d consider incredibly, insanely, mind-explodingly unlikely to pop up in Real Life(tm) any time soon.
(chuckles on a meta level, though, I just noticed that I seem to have chosen all possible options: commit to a specific choice, blabber about confusing aspects, ask for more information, and attempt to justify not committing to a specific choice. There must be some sort of prize for this. :D)
Just thought I’d comment that the more I think about the question, the more confusing it becomes. I’m inclined to think that if we consider the max utility state as every person having maximal fulfilment, and a “dust speck” as the minimal amount of “unfulfilment” from the top a person can experience, then two people experiencing a single “dust speck” each is not quite as bad as a single person sitting two “dust specks” below optimal. I think the reason is that the second speck takes away proportionally more than the first speck did.
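To make “proportionally more” concrete with made-up numbers: say maximal fulfilment is F = 100 and a speck costs d = 1. The first speck removes 1/100 of what the person has; the second removes 1 out of the remaining 99, i.e. a fraction 1/99, and in general

```latex
\frac{d}{F-d} > \frac{d}{F} \qquad \text{whenever } 0 < d < F.
```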
Oh, one other thing. I was assuming, for my replies both here and in the other thread, that we’re only talking about the actual “moment of suffering” caused by a dust speck event, with no potential “side effects”.
If we do consider those side effects, I’m pretty sure that on average they’d be negative/harmful, and once the law of large numbers is invoked via stupendously large numbers, well, in that case I’m going with TORTURE.
For the moment at least. :)
Hrm… Recovering’s induction argument is starting to sway me toward TORTURE.
More to the point, that and some other comments are starting to sway me away from the thought that the disutility of single dust speck events per person becomes sublinear as the number of people experiencing them increases (with total population held constant).
I think if I made some errors, they were partly caused by “I really don’t want to say TORTURE”, and partly caused by my mistaking the exact nature of the nonlinearity. I maintain that “one person experiencing two dust specks” is not equal to, and is actually, I think, worse than, “two people each experiencing one dust speck”; but now I’m starting to suspect that two people each experiencing one dust speck is exactly twice as bad as one person experiencing one dust speck. (Assuming that as we shift the number of people experiencing a DSE, we hold the total population constant.)
Thus, I’m going to tentatively shift my answer to TORTURE.
Incidentally, upon considering what the math would actually look like for the type of utility function I’d currently consider reasonable, I decided that given a fixed population, disutility would be basically linear in the number of people experiencing dust speck events (the other nonlinearities about one person experiencing a bunch of events would hold, though), so I am shifting my answer, tentatively, to TORTURE. (Just sticking this comment in this thread since I also made the other claim in this thread.)
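A minimal way to write down the kind of utility function I mean (toy notation of mine, not anything canonical): let k_i be the number of speck events person i experiences, and f a per-person cost with f(0) = 0 that grows superlinearly in k; then

```latex
U_{\text{total}} = \sum_{i=1}^{P} f(k_i)
```

so with a fixed population the total disutility is exactly linear in how many people get one speck each (each adds f(1)), while the per-person nonlinearity survives: one person taking two specks costs f(2) > 2 f(1).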
Recovering: chuckles No, I meant that thinking about that, and rethinking the actual properties of what I’d consider a reasonable utility function, led me to reject my earlier claim of the specific nonlinearity behind my assumption that as you increase the number of people who receive a speck, the disutility is sublinear; I now believe it to be linear. So a huge bigbigbigbiggigantaenormous number of specks would, of course, eventually have to have more disutility than the torture. But since Knuth arrow notation had to be invoked to get to that point, I don’t think there’s any worry that I’m off to get my “rack winding certificate” :P
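For reference, since the notation is doing all the work there, Knuth’s up-arrows go:

```latex
3\uparrow 3 = 3^{3} = 27, \qquad
3\uparrow\uparrow 3 = 3^{3^{3}} = 7{,}625{,}597{,}484{,}987, \qquad
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\left(3\uparrow\uparrow 3\right)
```

so 3^^^3 is a power tower of 3s roughly 7.6 trillion levels high — the kind of number it takes before linear-in-people specks outweigh the torture.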
But yeah, out of context this debate would sound like complete nonsense… “crazy geeks find it difficult to decide between dust specks and extreme torture.”
I do have to admit, though, Andrew’s comment about an individual living 3^^^3 times and so on has me thinking again. If “keep memories and so on of all previous lives = yes” (so it’s really one really long lifespan) and “permanent physical and psychological damage post torture = no”, then I may take that. I think. Arrrgh, stop messing with my head. Actually, no, don’t stop, this is fun! :)
Jacob Stein: Oy Vey, since you insist, here’s some evolved watches: http://www.youtube.com/watch?v=mcAq9bmCeR0 (it’s about ten minutes long, btw, and a bit slow at the start. But if evolved watches you must have, evolved watches you will get.)
This may be a stupid question but… what about kin selection? How did that develop? Wouldn’t something like group selection have had to have happened at some point for kin selection to end up showing up in the first place?
ie, imagine a couple of families/clans/whatever of some species. One happens to have a member with “magic gene(s) of kin selection juju”, and the other… doesn’t.
Let’s say that in the former, members carrying that gene eventually get to breed just often enough that the gene/complex/whatever starts spreading through the family. That would then help promote the success of that family, and thus of that gene.
Or am I completely way way off here? And if so, how does kin selection develop in the first place then? Thanks.
Kaj: Ah, thanks. Then I guess I was a bit unclear as to what counted as group selection. ie, I thought a family would count as a “group” for these purposes.
Pete: I was just thinking the same thing, that we ought to start a wiki for this project. Questions do come up, though, like “where ought one draw the line between the simple and the nonsimple?” This question relates even to billswift’s comment about the name.
For instance, in physics, ought we include Hamilton’s equations/the Hamiltonian? There’s certainly understanding to be found by considering a system in those terms. But deriving those and so on is probably a bit deeper than what one might want to consider “easy math”… or maybe not. Those are in some ways the starting point that leads to the deep stuff.
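(For concreteness, the equations in question, with H(q, p, t) the Hamiltonian:

```latex
\dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q}
```

Stating them is one line; deriving them from the Lagrangian picture is where the “deeper than easy math” worry comes in.)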
There are probably analogous questions in other fields. So we have to decide what we’re going to consider the “easy” math.
Incidentally, I’ve taken to using the term “afaithist” for myself rather than “atheist”, largely due to the above-mentioned issues. I’m not so much concerned about various religious beliefs as about the notion of the virtue of non-rational/anti-rational belief, including its various “must not question” flavors. Questions like the existence of a god/etc. are almost incidental, questions of “mere” (eheh) fact.
Tom: If there were such a convincing seminar, perhaps it contains such a convincing argument because it’s genuinely correct. Modify it to “Utterly Convincing and Irresistible Five-Minute Brainwashing Seminar On Why.....” :)
Eliezer, I dunno about Christianity, and it wouldn’t, in this case, be eternal, but isn’t there something about some Buddhists who’ve tried to get into/be reborn into some hell plane when they die to help those trapped there?
At least I seem to have this memory of reading stuff along those lines.
Also, actually, I know I’ve heard Jewish stories about various Rabbis supposedly making contracts and shuffling stuff around to give up their share in The-World-To-Come for the sake of another. Perhaps not identical, but the theme does show up here and there.
Hrm… I’m not sure having just “that one guy that’s the Known Contrary Guy” would have the desired effect.
Maybe I’m completely wrong, but my personal expectation would be that having just one would instead create a convenient “bad example”… actually resulting in a reduction of dissent among the rest, with people partly wanting to, in a sense, avoid being like that guy. Having more than one may be better, since then it wouldn’t be “that one crazy guy you don’t want to be like” but “just some people who disagree.”
Or am I completely and utterly wrong on the psychology of this?
You’re right, and the thing that depresses me is that we can see this and yet I, at least, have barely any notion of what to do about it. Actually… (Well, actually, the relevant thought belongs on the Open Thread, so I’ll go there...)
I take your point, though I guess for “atheist hymns”, or the closest things thereof, perhaps the first place to look would be Filk music? There’re very very very few professional filkers, and most Filk is sung with untrained voices and so on, and has to be appreciated just as it is, just for fun… but there’s some good stuff too… Fire in the Sky, Hope Eyrie, etc. (at least in my view)
One in particular, which formed more or less out of the composer’s frustration with a Young Earth Creationist, is actually pretty good… perhaps one of the nicest attacks on YEC around, specifically “Word of God”.
The ones I mentioned are at http://www.prometheus-music.com/eli/virtual.htm (Surprise! is just plain fun though! dunno about deep artistic merit, it’s just fun. :))
Of course, tastes vary...