once flipped the box experiment and made em the jailor. He blocked me immediately and forever.
AndyWood
Please. Generating so many paragraphs here displaying this sort of smug assurance in your own conclusions about highly controversial topics is the exact opposite of “overcoming bias”.
One person doesn’t need to pretend that he doesn’t grasp something until a certain critical mass of the “right” people catch up. Correctness isn’t up for a vote, and the feeling that it is, is nothing more than an artifact of social wiring.
You do not have to accept the conclusion. You also do not have to insist that someone else mimic your own uncertainty about any given topic. At the least, perhaps you should go and make sure his reasoning is flawed before you do.
Hence we are back to the old puzzle of reconciling our feelings of free will with the fact that all of our decisions are ultimately completely determined by factors outside of ourselves.
The part I bolded is never necessary, is it? Factors in the deterministic processes in my brain are factors inside myself, by definition. Is there really still a debate about free will? I’m at a loss to understand why. The subjective perception of free will is easily explainable in a fully deterministic world.
Consider the statement “This mess is your fault.” We typically interpret such a statement to endow the actor with freedom of choice, which he has exercised badly. Furthermore, we typically characterize the nature of morality as requiring that freedom. But that interpretation should be taken figuratively. If taken literally, it is superfluous. It is more correct to interpret the statement as “The glitch is in you.” This sense of the original statement reflects the determinism in the process of choosing. Matters of reward and punishment are not affected. That they should be is a misguided intuition, because they are choices too, to be interpreted similarly.
(‘Glitch’ is defined as whatever causes the actor to produce undesirable results. The judgement of what is undesirable is wholly outside the scope of this topic. Conflating this with primate social status is not only incoherent, but irrelevant.)
Roland: I would suggest that you might be associating the phrase “moral responsibility” with more baggage (which it admittedly carries) than you need to. I find I can discard the baggage without discarding the phrase. That we call behavior caused by, for example, power lust, “worse” than behavior caused by a tumor, is like a convention. It may not be strictly rational, but it is based on a distinction. Perhaps it is more practical. Perhaps there are other reasons.
Imagine two cars, one of which is 1⁄5 as efficient as the other. We can call the less efficient one “worse”, because we have defined inefficiency as bad relative to our interests. We do not require that the car have freedom of choice about its efficiency before we pass judgement. Many mistakes in philosophy sprout from the strong intuitive wish to endow humans with extra non-physical ingredients that very complicated machines would not have.
Psy-Kosh: I think of locating responsibility as a convention. My favorite convention is to locate responsibility in the actor who carries out the deed deemed “bad.” For example, suppose that I got mugged last night while walking home. My actions and choices were factors in the mugging, but we locate responsibility squarely within the attacker. Even if another person instructed him to attack me, I still locate responsibility in the attacker, because it was his decision to do it. However, I might assign a separate crime to his boss for the specific act of instructing (but not for the act of attacking). The reason that I prefer this convention is that it seems elegant, and it simplifies tasks like writing laws and formulating personal life-approaches. It is not that I think it is “right” in some universal sense. I view the attitudes adopted by societies similarly—as conventions of varying usefulness.
Hopefully: I call Lenin “bad,” not to influence anything, but because I mean to say that he really is bad relative to a defined set of assumptions. This framework includes rules such as “torturing and killing is bad.” The question of where, exactly, we get rules like this, and whether they are universal or arbitrary is not one that I find particularly interesting to debate. I will briefly state that my own concept of such rules derives mostly from empathy—from being able to imagine the agony of being tortured and killed.
Hopefully: I’m not sure how the part that you quoted relates to omission bias. If you were referring to the rest of my comment, feel free to include harmful inaction in the same category as harmful action when deciding where to locate responsibility, and pardon me for not explicitly calling out the negative case.
I am unsure whether you mean something more general. For example, the question of exactly what effect all of my decisions today had on the global economy and population appears intractable to me, so I don’t lose sleep over it. For the same reason, I would be suspicious of pragmatic policies unless their effects could be reasonably shown to be unambiguous.
All that said, I do not claim to have any special knowledge or interest in moral philosophy. Also, I make no claim that my preferred way of locating responsibility is optimal, or “true” in any sense—only that it is the most appealing to me. If you think there is a better way of looking at it, I would be interested to hear it. What I do have strong views on is the compatibility between choice and determinism, which is really the only thing I originally intended to comment on here.
Well said. The fact of deliberation being deterministic does not obviate the need to engage in deliberation. That would be like believing that running the wrong batch program is just as effective as running the right one, just because there will be some output either way.
Hopefully: I’ll thank you not to attribute to me positions that I don’t hold. I assure you I am well aware that all of the abovementioned deciding, yea this very discussion and all of my reasoning, concluding, and thought-pattern fixing, are deterministic processes. Thank you.
James Baxter: I think that was in poor taste.
Doly: My suggestion would be to keep reading and thinking about this. There is no contradiction, but one has to realize that everything is inside the dominion of physics, even conversations and admonitions. That is, reading some advice, weighing it, and choosing to incorporate it (or not) into one’s arsenal are all implemented by physical processes, therefore none are meta. None violate determinism.
Robin Z: I have not yet seen an account of classical compatibilism (including the one you linked) that was not rife with (what I consider) naive language. I don’t mean that impolitely, I mean that the language used is not up to the task. The first concept that I dispense with is “free,” and yet the accounts I have read seem very interested in preserving and reconciling the “free” part of “free will.” So, while I highly doubt that CC is equivalent to my view in the first place, I’m still curious about what view you adopted to replace it.
Hopefully: Are you trying to say that personal agency is illusory? If I say, “The human that produced these words contains the brain which executed the process which led to action (A),” that is a description of personal agency. That does not preclude the concept “person” itself being a massively detailed complex, rather than an atomic entity. I, and I expect others here, do not feel a pressing need to contort our conversational idioms thusly, to accurately reflect our beliefs about physics and cognition. That would get tiresome.
Caledonian and Laura ABJ: Those are interesting points on their own, but rather far removed from the point of the post. This illustration is not meant to say “The comic book authors earnestly tried to represent an alien mind realistically, and here’s how they failed.” It’s simply a picture that serves well as an illustration of subjective evaluation, especially where the subjects are very different. Also, the fact that humans happen to be similar to one another with regard to this specific type of evaluation is an interesting discussion, but beside the point of this one.
I have thought on this, and concluded that I would do nothing different. Nothing at all. I do not base my actions on what I believe to be “right” in the abstract, but upon whether I like the consequences that I forecast. The only thing that could and would change my actions is more courage.
Dynamically Linked: I suspect you have completely misrepresented the intentions of at least most of those who said they wouldn’t do anything differently. Are you just trying to make a cynical joke?
Unknown: I don’t think that it is morally wrong to accuse people of lying. I think it detracts from the conversation. I want the quality of the conversation to be higher, in my own estimation, therefore I object to commenters accusing others of lying. Not having a moral code does not imply that one need be perfectly fine with the world devolving into a wacky funhouse. Anything that I restrain myself from doing, would be for an aversion to its consequences, including both consequences to me and to others. I agree with you about the fallacy of projecting, and it runs both ways.
Dynamically: It appears that you have a fixed preconception of what behavior “human nature” requires, and you will not accept answers that don’t adhere to that preconception.
For me, these questions create a tangle of conflicts between the real and the hypothetical. This is my best attempt to untangle, so far. First, if there were a tablet that could actually somehow be shown to reveal objective morality, I suspect that I might never have had any qualms about committing atrocities in the first place, since I would be steeped in a culture that unanimously approved. We already see this in the real world, merely as a result of controversial tablets that only some agree on! If you mean, what if I suddenly discovered the tablet just now, then I find I am unable even to imagine how the present real me could be convinced of the authenticity of the tablet. I don’t believe in (do not find evidence for) objective morality, so what possible argument could persuade me that the tablet was it? And if, purely for the sake of the thought experiment, I grant the possibility that I could be convinced, even though I cannot imagine it, my conception of what it must mean to be convinced seems to imply surrender to that morality by definition. For if I hold out and continue not to murder, then I have not truly conceded that the morality of the tablet is objective. In short, the full implication of true belief in the objectivity of the tablet IS commitment to do its will, but I don’t believe in any such thing, so to me there is no question.
As to what I would want the tablet to say: Minimize physical and psychological pain in the individual. Maximize happiness in the individual (in a way that is not vulnerable to silly arguments about “pegging the bliss-o-meter”). I say, “in the individual”, in strong opposition to dust specks. I remain puzzled by why the “shut up and multiply” maxim would not be accompanied by “shut up and divide”. (That is, 3^^^3 specks / 3^^^3 individuals = no pain.) I remain open to good arguments to the contrary—I haven’t read one yet. I note that my tablet would be made completely obsolete if we ever engineered the capacities for pain and pleasure out of ourselves. I wonder what moralities, if there were even a use for them, would look like then?
Hal: I wouldn’t do it, nor do I think I’d want to live in a world governed thusly. My reasoning is that it violates individual liberty and self-possession. It seems to imply that individuals are somehow the “eminent domain”, as it were, of society. I reject that. I say that nobody has the right to spend the baby’s life. Granted, this is more of a political stance than a moral one. I can’t claim that there’s an objective reason to value individual rights so highly, but it is a fact that I do. I know you said the baby wouldn’t suffer, but this question still put me in mind of the idea that pain and happiness may not be the same currency. It may not be valid to try to offer suffering as a payment for happiness.
Laura: Yes, I absolutely steal the key. Given the context of the original question, I had in mind the right to life, in particular. I didn’t make this distinction until you asked this question. I happen not to think that the right to property is anything like as valuable as the right to life. (By “right” I mean nothing more than ground rules that society has “agreed” on.) Again, I have a problem with acting as though an individual’s life is the eminent domain of society. As in Shirley Jackson’s “The Lottery,” the picture looks very different depending on whether you are the beneficiary or the sacrifice.
It could be that ordinary political ideas are inadequate for a world in which a superintelligence is available. Part of the reason that the idea of forcefully sacrificing the few for the many is repulsive to me is that, in general in the present world, nobody knows enough to be trusted to make reliable utility predictions of such gravity. Even still, in a world with AI, the problem of non-consent remains. It’s all well and good to speak of utility, but next time, it could be you! How does it come to be that each individual has forfeited control over her/his own destiny? Is it just part of “the contract?”
It seems to me that some of these explanations for beauty are overkill. Start from the straightforward idea that natural selection shaped our pattern-recognition hardware, in all of its varieties, for “ordinary” evolutionary reasons. Then suppose that we discovered ways of contriving input (e.g. music, art) that exploited and tickled our pre-existing hardware, after the fact. I don’t see the need for music itself to have developed from anything that increased fitness.
Similarly, for sunsets and rainbows, suppose that we already had hardware that responded to color, as well as perceptual responses to scale, and even the intelligence to think about how much bigger the world and the sky is than us. Is it not enough to say that sunsets and rainbows supply sensory input that engages this pre-existing hardware in concert, provoking the feelings that we experience as wonder? Why would the specific source of sensory input itself have to have imparted a benefit?
Consider those trippy graphical music visualizers. They exploit our sensitivity to color, light, and particularly motion, but it does not follow that we need to have encountered anything specifically like them in the ancestral environment. It may be worth thinking about why our hardware interprets certain characteristics of sensory experiences as pleasant or discordant, but I think this can be done at a lower level that does not require ancestral exposure to anything like the compound phenomenon in question. Once you have the hardware, any sufficiently intense stimulation of it is bound to produce some reaction. There need not be any specific flavor of meaning (evolutionary psychological) in the input source.
I have been in love, and taken ecstasy. (As it happens, I have also taken ecstasy with someone I was in love with.) I do think being in love is more complex than the feeling induced by those chemicals.
It seems to me that one of the biggest parts of being in love is the pervasive fixation on that one person. Those obsessive thought patterns that are like wanting to saturate yourself with that person’s essence. An ecstasy trip can’t really give you that, and of course by itself it can’t supply the person.
However, the feeling of being on X is somewhat similar to that of touching, or embracing, someone you’re in love with (perhaps even a little more intense?). Also, the feelings induced by X are of the sort that make you very well-disposed towards humanity, so that you might feel something like spontaneous love towards strangers you happened upon. (I have also experienced this kind of thing while in love.) So I would say that some of the visceral feelings are common to both, as would be expected if they involve the same/similar neurochemicals, but the experiences aren’t that similar. Certainly, chemical factors are very much in play when one is in love, but chemicals cannot synthesize the experience of exploring, absorbing, and integrating with another’s mind, emotions, personality, and life.
If the alien is able to predict your decision, it follows that your decision is a function of your state at the time the alien analyzes you. Then, there is no meaningful question of “what should you do?” Either you are in a universe in which you are disposed to choose the one box AND the alien has placed the million dollars, or you are in a universe in which you are disposed to take both boxes AND the alien has placed nothing. If the former, you will have the subjective experience of “deciding to take the one box”, which is itself a deterministic process that feels like a free choice, and you will find the million. If the latter, you will have the subjective experience of “deciding to take both boxes”, and you will find nothing in the opaque box.
In short, the framing of the problem implies that your decision-making process is deterministic (which does not preclude it being a process that you are conscious of participating in), and the figurative notion of “free will” does not include literal degrees of freedom. If you must insist on viewing it as a question of what the correct action is, it’s to take the one box. Regardless of your motivation, even if your reason for doing so is this argument, you will find yourself in a universe in which events (including thought events) have led you to take one box, and these are the same universes in which the alien places a million dollars in the box.
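The deterministic framing above can be sketched as a toy simulation, in which the predictor simply runs the agent's decision procedure ahead of time and fills the boxes accordingly. The function names and payoff amounts here are illustrative assumptions, not anything from the original discussion:

```python
# Toy Newcomb simulation, assuming a perfectly reliable predictor.
# Because each agent is a deterministic procedure, the predictor can
# "analyze" it simply by running it before the real choice is made.

def one_boxer(options):
    # Disposed to take only the opaque box, regardless of reasoning path.
    return "one"

def two_boxer(options):
    # Disposed to take both boxes.
    return "two"

def play(agent):
    # The predictor runs the same deterministic procedure in advance...
    prediction = agent(("one", "two"))
    opaque = 1_000_000 if prediction == "one" else 0
    transparent = 1_000
    # ...so the box contents are already a function of the agent's state
    # by the time the "choice" is actually made.
    choice = agent(("one", "two"))
    return opaque if choice == "one" else opaque + transparent

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

The payoffs fall out exactly as the comment argues: the universes in which you are disposed to one-box are the same universes in which the million dollars is there to be found.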