Assume guilt!
NMJablonski
All the reliable literature I have read says that:
a) Conventional produce and organic produce are nutritionally equivalent
b) Organic produce is more prone to rancidity as fewer preservatives are used
c) Organic produce will make you popular with people who wear glasses with no lenses
So, there are people who disagree with what you posted, and may be inclined to argue about it. That, combined with the idea shared in the Paul Graham quote in this very thread (about politics frequently being used as a form of identity), leads to defensiveness, which leads to rationalization, which leads to stupidity.
So, in order to avoid stupid arguments, people would prefer fewer posts like your quote on LW.
Konkvistador is a concerned Tutsi living in a politico-cultural regime which seems increasingly pleased at the prospect of watching Hutus eat Tutsis.
I don’t see Clippy as a troll.
The Clippy character is well developed, interesting, and has become a part of this community. I have never felt that Clippy’s comments were intrusive, and I do not understand why anyone would go so far as to damn Clippy for breaking character by donating to SIAI.
I think I must be missing something.
Clearly there’s a group of people who dislike what I’ve said in this thread, as I’ve been downvoted quite a bit.
I’m not perfectly clear on why. My only position at any point has been this:
I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being “right” or “wrong”. From the context and explanations, “right” seems to mean very different things. Like:
“Those actions which I prefer” or “Those actions which most agents in a particular place prefer” or “Those actions which fulfill arbitrary metric X”
Likewise, “wrong” inherits its meaning from whatever definition is given for “right”. It makes sense to me to talk about preferences. They’re important. If that’s what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words “right” or “wrong” independently of any agent’s preferences. I don’t see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I’m missing, or if there’s something specific I did to elicit downvotes?
Silas, you’re spending too much time talking about JGWeissman here. In his last post he offered to drop all meta points in this discussion and focus on object-level reality. If you think you’re right about the issues, accept his offer and move the discussion there.
This particular post is moving into sarcastic flamewar territory.
I swear, if you can make an ironclad rational argument for Mormonism, I will personally convert.
Seconded. I am entirely open to models of the universe that better fit the evidence at hand than the ones I have. If you (calcsam) can present a convincing case for the accuracy and validity of your beliefs I will adopt them as well.
Evolved from both simpler winged aircraft and simpler rockets.
All the base components that went into the space shuttle still existed on a line of technological progress from the basic to the advanced. Actually, the space shuttle followed Gall’s Law precisely.
The lift mechanism was still vertically stacked chemical rockets of the sort that had already flown for decades. The shuttle unit was built from components perfected by the Gemini and Apollo programs, and packed into an aerodynamic form based on decades of aircraft design.
Reduced to its components, the shuttle still depends on simple, well-understood systems: airfoils, rockets and nozzles, gears, and other known quantities.
Certainly my good sir.
http://www.mayoclinic.com/health/organic-food/NU00255/NSECTIONGROUP=2
Austin Less Wrong Meetup, Saturday April 23rd, 12:00 Noon
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where “right” has a meaning outside of an agent’s preferences. I don’t know how one would go about discovering the “rightness” of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I’m not trying to be obstinate here. Would people prefer I go away?
New edit: Thanks wedrifid. I was very confused.
I’m sorry. It’s clear that you’re motivated to “win” an argument, not get at reality.
For the record, words do not have intrinsic meanings. If you are willing to use simpler words that we are likely to agree on to explain what you mean by “moral”, “right” and “good” then I will be happy to read it. Otherwise, I just cannot take you seriously enough to continue this.
EDIT: If you really would like to discuss this, I suggest we move to the LessWrong IRC channel instead of making a long person-to-person thread here.
What is it about conduct that makes it right and good as opposed to wrong and evil?
What is it that determines these attributes, if not human preference?
The only thing I have consistently rejected on LW is the metaethics. I find that a much simpler Friedmanite explanation of agents pursuing their separate interests fits my experience.
For example, I would pay a significant amount of money to preserve the life of a friend, and practically zero money to preserve the life of an unknown stranger. I would spend more money to preserve the life of a successful scientist or entrepreneur, than I would to preserve the life of a third world subsistence farmer.
This is simply because I value those persons differently. I recognize that some people have an altruistic terminal value of something like:
“See as many agents as possible having their preferences fulfilled.”
… and I can see how the metaethics sequence/discussion is necessary for reducing that terminal value to a scientific, physical metric by which to judge possible futures (especially if one wants to use an AI). But, since I don’t share that terminal value, I’m consistently left underwhelmed by the metaethics discussions.
That said, this looks like an ambitious sequence. Good luck!
Further reply:
I was contemplating this exchange and wondering whether Gall’s Law has any value (constrains expected experience).
I think it does. If an engineer today claimed to have successfully designed an Alcubierre drive, I would probably execute an algorithm similar to Gall’s Law and think:
The technology does not yet exist to warp space to any degree, nor is there an existing power source which could meet the needs of this device. The engineer’s claim to have developed a device which can be mounted on a craft, controllably warp space, and move it faster than light is beyond existing technological capability. We are too many Gall steps away for it to be probable.
it’s just a value that if revealed would derail any and all threads
By saying that in a community as insatiably curious as LW, you now have dozens of people (including me) persistently wondering what the heck it could be.
:)
the only violations are “Better dead than Red” and the mention of a spouse and children.
I have to say, I was also a little puzzled that the idea being accepted here was communism. To give it a favorable interpretation, I just assume it’s being used as a cultural idiom to convey the idea of preferring death to submission to an ideological opponent.
(Nod)
I still think it’s a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
This is crisp, clear, and one of the best short explanations of the issue I’ve read.