To the degree “thinking” or “deciding” actually exists, it’s not clear to me that we as individuals are the actual agents, rather than observer subcomponents, perhaps a lot like neurons but with an inflated, deluded/hallucinated sense of agency.
Hopefully_Anonymous
“much the” should read “much like the”
J Thomas, whether or not foxes or rabbits think about morality seems to me to be the less interesting aspect of Tim Tyler’s comments.
As far as I can tell this is more about algorithms and persistence. I aspire to value the persistence of my own algorithm as a subjective conscious entity. I can conceive of someone else who values above all maximizing the persistence odds of any subjective conscious entity that has ever existed. A third who values above all maximizing the persistence odds of any human who has ever lived. Eliezer seems to value above all maximizing the persistence of a certain algorithm of morality (even if it means deoptimizing the persistence odds of all humans who have ever lived). Optimizing the persistence odds of these various algorithms seems to me to put them in conflict with each other, much like the algorithm of the fox having the rabbit in its belly is in conflict with the algorithm of the rabbit eating grass, outside the fox’s belly. It’s an interesting problem, although I do of course have my own preferred solution to it.
Ben, you write “Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?”.
But my previous post clearly answered that question: “I’d sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that “dilemma” was presented to me.”
I’m fine with a galaxy without humor, music, or art. I’d sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that “dilemma” was presented to me.
Daniel Reeves, I checked out your bio. Very impressive stuff, and best of success with your work and research!
Richard, Thanks, the SEP article on moral psychology was an enlightening read.
“Someone sees a slave being whipped, and it doesn’t occur to them right away that slavery is wrong. But they go home and think about it, and imagine themselves in the slave’s place, and finally think, ‘No.’”
I think lines like this epitomize how messy your approach to understanding human morality as a natural phenomenon is. Richard (the pro), what resources do you recommend I look into to find people taking a more rigorous approach to understanding the phenomenon of human morality (as opposed to uncritically promoting a certain type of morality)?
Weird, jsalvati is not my sock puppet, but the 11:16pm post above is mine.
Frame it defensively rather than offensively and a whole heck of a lot of people would take that pill. Of course some of us would also take the pill that negates the effects of our friends taking the first pill, hehehe.
should read: (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like “do not murder children”).
I think the child-on-train-tracks/orphan-in-burning-building tropes you reference prey on bias, rather than seek to overcome it. And I think you’ve been running from hard questions rather than dealing with them forthrightly (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like “do not murder children”). To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future-outcomes commons, with all the thinkers vying for status over maximizing our future outcomes.
Mark, I think you over-identify with whoever controls the nuclear weapons in the US arsenal. I think their existence is a complex phenomenon, and I’m not sure it can be reduced to “I am an American citizen and voter, therefore I exert partial control and ownership of the weapons in the nuclear arsenal”.
Beyond that, I think a major source of bias is people who let the status quo and power/hegemony alignment do a lot of their argumentative legwork for them. I think you’re doing that here, but it’s a much bigger problem warping our models of reality than this instance.
Frelkins, You shifted rather quickly from what I think is the stronger argument against MAD (greater catastrophic risk due to human error and irrationality) to what I think is a weaker argument against MAD (a claim that some states are suicidal). I think you should focus on the stronger argument.
Also, the claim that a world without the type of MAD one gets from nukes is a world where all politics is solved through war is, I think, inaccurate. Some political conflicts seem to be solved through war and others don’t, both before and after MAD. It may be true that there’s never been direct conflict on sovereign territory between two nations that both have nuclear strike capability against each other, but that’s a small swath of history.
I’m not arguing against MAD, or against the concept that nuclear proliferation results in a more peaceful world. But I’m not sold on it yet either. It’s worth more study, it seems to me.
Frelkins, I think the main perceived flaw in this line of reasoning is that error and irrational decision making are possible, and with viable MAD set up, the results could be catastrophic.
I’m with James Miller and Caledonian on this one, and I want it taken further. Caledonian, I think the cognitive bias is good old repugnancy bias. How I’d like it taken further: I think what we want to avoid is not (1) horrific outcomes due to war from a specific type of technology, nor (2) horrific outcomes due to war generally, but (3) horrific outcomes generally. As such, beyond using nuclear weapons (which I’m not convinced prevent any of the three, though they may), how about greatly increasing the variety of human medical experimentation we engage in, including medical experimentation without consent, breeding and cloning people, and making genetic knockout people and disease models of people, to the extent that there would be a net decrease in horrific outcomes (death, suffering, etc.)? Sort of Dr. Ishii meets Jonas Salk.
“And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people’s goals, and our tribe’s moral code, truly do imply that they should do things our way for their benefit.”
Great post overall, but I’m skeptical of this often-repeated element in OB posts and comments. I’m not sure honest believers always, or even usually, have a persuasion advantage. This reminds me of some of Michael Vassar’s criticism of nerds thinking of everyone else as a defective nerd (nerd defined as a person who values truth-telling/sincerity over more political/tactful forms of communication).
I haven’t gotten through your whole post yet, but the “postmodernist literature professor” jogged my memory about a trend I’ve noticed in your posts. Postmodernists, and perhaps postmodernist literature professors in particular, seem to be a recurring foil. What’s going on there? Is there a way to break out of that analytically? I sense that as a deeper writer and thinker you’ll go beyond cartoonish representations of foils, if for nothing more than to reflect a deeper understanding of things like postmodernist literature professors as natural phenomena. It seems to me to be more a barrier to knowledge and understanding than an accurate summation of something in our reality (postmodernist literature professors).
Caledonian, you make some good posts, but here I think your latest post falls in the category of anti-knowledge. I recommend trying to stay away from heroic narratives and morality plays (Watson and Skinner GOOD, Freud BAD) and easy targets, like those who express the wish-fulfilling belief that the mind mystically survives the death of the body.
Whether the mind does survive the death of the body in a sufficiently large universe/multiverse (with multiple “exact” iterations of us) is a more complicated question, in that black box/”magic” area of why our internal narrative sense of personal identity apparently survives over a punctuated swath of timespace configurations, in a changing variety of material compositions/blobs of amplitude probability distribution in the first place.
I jotted that off messily, but I think the point remains: although in principle our existence as minds may be perfectly normal, since it’s part of reality, it seems pretty damn weird compared to our evolved intuitions.
There’s this weird hero-worship codependency that emerges between Eliezer and some of his readers that I don’t get, but I have to admit, it diminishes (in my eyes) the stature of all parties involved.