What corporations do is very different from biological evolution, but if a corporation develops a successful idea then it is likely to be copied by other corporations without anything like biological reproduction entering the picture.
Maybe I should have said something more like “conceivably could be” rather than “is likely to be”. Certainly I didn’t mean to imply that every firm in an industry will immediately copy somebody else’s good idea. There isn’t even a guarantee that a good idea will be recognized as one in the company in which it originates.
But the point is that ideas can be copied without anything like biological reproduction taking place. Why they so seldom are is an interesting question; I’ve added Deming to my “to read” list.
It seems to me that normative statements like “let us go and serve other gods” aren’t really something you can have a rational debate about. The question comes down to “which do you love more, your god or me”, and the answer should always be “God”… according to God.
Similarly, one could have a rational debate about whether a command economy will outperform a market economy or vice versa (although the empirical evidence seems pretty one-sided), but a statement like “all people ought to be socially and economically equal” seems like something that just has to be accepted or rejected.
If you met John Barnes and he argued that he’s doing the right thing, would it be appropriate to sock him in the jaw?
No, because the statement that “the only appropriate response to some arguments is a good swift sock in the jaw” is not itself one of the arguments whose appropriate response is a sock in the jaw. There may or may not be any such arguments, but socking him in the jaw is admitting that he is fundamentally right. Of course, it might be appropriate to sock him for some other reason :-)
One can argue that Buzz Aldrin had a special right to sock the guy that you or I would not have. To me, claiming the moon landing was faked is just an absurd statement. Saying it in front of Buzz is unjustifiably calling the man a fraud and a liar. Buzz shouldn’t have to put up with that kind of crap.
Or at least invent a cultometer, so we can check our cultemperature?
It’s a bad sign if we develop identifiable cliques. Because of general attitudes it stands to reason that agreements and disagreements won’t be randomly distributed, but ideally we shouldn’t “agree” or “disagree” with others because we agreed or disagreed with them in the past. It probably wouldn’t be too hard to develop some sort of voting software that measured cliquishness if there’s a demand for it.
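To sketch what I mean (a toy illustration, not a design; the vote matrix and the particular correlation measure are assumptions made up for the example): treat each voter’s votes as a vector and look at how strongly those vectors correlate across voters. Random agreement gives correlations near zero; cliques show up as blocs of strongly correlated voters.

    # Hypothetical sketch: estimate "cliquishness" from a vote matrix.
    # Rows are voters, columns are items voted on; entries are net votes
    # (+1 upvote, -1 downvote, 0 abstain). All data here is made up.
    import numpy as np

    def cliquishness(votes):
        """Mean absolute pairwise correlation of voters' vote patterns.
        Near 0: agreement looks random. Near 1: identifiable voting blocs."""
        centered = votes - votes.mean(axis=1, keepdims=True)
        norms = np.linalg.norm(centered, axis=1, keepdims=True)
        norms[norms == 0] = 1.0                  # voters who never vary
        unit = centered / norms
        corr = unit @ unit.T                     # pairwise correlation matrix
        off_diag = corr[~np.eye(len(corr), dtype=bool)]
        return float(np.abs(off_diag).mean())

    # Two blocs that always vote together and against each other:
    votes = np.array([[ 1,  1, -1, -1],
                      [ 1,  1, -1, -1],
                      [-1, -1,  1,  1]])
    print(cliquishness(votes))                   # -> 1.0, maximally cliquish

A real measure would have to control for comment quality (everyone upvotes genuinely good comments), but even something this crude would flag the failure mode I’m worried about.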
Of course, the real disaster would be if people start saying things like “Eliezer is always right”. Nobody is always right.
“Since you are so concerned about the interactions of clothing with probability theory,” Ougi said, “it should not surprise you that you must wear a special hat to understand.”
But isn’t this almost the exact opposite of what the student was saying? Questioning the robes indicates to me that the student felt there wasn’t any interaction between clothing and learning probability theory, and that the robes therefore served some other purpose, presumably differentiating between an in-group and an out-group.
Or am I just nuts for trying to argue with you about the internal thoughts of your own fictional characters?
Ni no Tachi figured out how to use the hammer, but Bouzo only sold hammers without understanding their value.
“A bird in the hand is worth what you can get for it.”—Ambrose Bierce
Fiction is fiction, but it seems to me that if a student objects to wearing silly clothes and his master responds by ordering him to wear yet sillier clothes, it’s a lot more plausible that the student will conclude his master is a quack and drop out than that he’ll decide to extend his master’s teaching by taking silly clothes to a whole new level.
Maybe the whole point of this exercise is to remind us that one can’t come to reliable conclusions from fictional evidence? If so, well, maybe I haven’t learned anything… but at least I’ve learned I haven’t learned anything.
Martin Gardner has a chapter on these “look-see” proofs in Knotted Doughnuts.
It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It’s not clear to me that this should be true.
The trouble with the “money pump” argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let’s assume someone prefers 2B over 2A. It could be that if he were offered choice 1 “out of the blue” he would prefer 1A over 1B, yet if it were announced in advance that he would have a 2⁄3 chance of getting nothing and a 1⁄3 chance of being offered choice 1, he would decide beforehand that B is the better choice, and he would stick with that choice even if allowed to switch. This may seem odd, but I don’t see why it’s logically inconsistent.
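To make explicit the structure the money-pump argument leans on: if choice 2 is nothing more than a 1⁄3 chance of being offered choice 1, then under expected utility

    U(2A) = (1/3) U(1A) + (2/3) U(nothing)
    U(2B) = (1/3) U(1B) + (2/3) U(nothing)

so 2B is preferred to 2A exactly when 1B is preferred to 1A. Preferring 1A and 2B together therefore violates the independence axiom, and what I’m suggesting above amounts to rejecting that axiom when the compound lottery is resolved over time rather than all at once.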
For some reason this post reminds me of the Buddhist parable “asceticism now, nymphs later”.
I don’t think it’s all that uncommon to begin cultivating an art for some specific purpose, proceed to cultivate it largely for its own sake, and eventually to abandon the original purpose.
It would certainly facilitate communication, though, if people could agree on what words mean rather than having personal definitions. No doubt it’s unrealistic to expect everyone to agree on precisely where the boundary between yellow and orange lies, but tigers aren’t even a yellowish orange.
The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.
But if you can’t do a calculation in practice, does it matter whether or not it would give you the right answer if you could?
There’s a way you could make the heat=motion concept much clearer to Carnot. When one studies kinematics, one generally makes the approximation that macroscopic bodies are rigid, and the motions of the body refer to center of mass motion, or perhaps rotation about some axis. If you explain that “heat” refers to the motion of the constituent particles relative to each other, I think a scientist of Carnot’s day would understand the idea pretty quickly.
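To put the same distinction in symbols: the total kinetic energy of a body splits cleanly into a bulk term and an internal term (the standard center-of-mass decomposition, sometimes called König’s theorem),

    E_total = (1/2) M v_cm^2 + Σ_i (1/2) m_i |v_i − v_cm|^2,   where M = Σ_i m_i

The first term is the motion that rigid-body kinematics already tracks; the second term, invisible when you approximate the body as rigid, is the heat.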
I think this sort of thing might be what people mean when they talk about a “bridging theory”.
Well, I remember wondering as a graduate student how one was supposed to go about deciding what problems to work on, and not coming up with a good answer. A fellow student suggested that your project is worth working on if you can get it funded, but I think he was kidding. Or maybe not.
Most experimentalists really aren’t in the business of supporting or refuting hypotheses as such. It’s more a matter of making a measurement, and yes, they will be comparing their results to theoretical predictions, but ideally experimentalists should be disinterested in the result: they care about making as accurate a measurement as possible but don’t have any a priori preference for one value over another.
I see no reason to believe there is such a thing as an objective definition of “fair” in this case. The idea that an equal division is “fair” is based on the assumption that none of the three has a good argument as to why he should receive more than either of the others. If one has a reasonable argument as to why he should receive more, the fairness argument breaks down. In fact, none of the three really has a good argument as to why he is entitled to any of it, and I can’t see why it would be wrong for whichever of them grabbed it first to claim the whole pie under “right of capture”.
what’s the standard reply to someone who says, “Friendly to who?” or “So you get to decide what’s Friendly”?
This is an important question. I don’t believe there is such a thing as an objective definition of friendliness, and I doubt that “reasonable” people can come to an agreement as to what friendliness means. But I’m eager to be proven wrong, so keep writing.
Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”?
These really are different statements. “I am entitled to fraction x of the pie” means more or less the same as “a fair judge would assign me fraction x of the pie”.
But a fair judge just means a judge who has no personal relationship with any of the disputing parties and makes his decision based on some rational process, not arbitrarily. It isn’t necessarily true that there’s a unique solution that a fair judge would decide upon. One could say that whoever saw it first or touched it first is entitled to the whole pie, or that it should be divided strictly equally, or that it should be divided on a need or merit basis, or the judge could even go for the Gods Must Be Crazy / idiocy-of-Solomon solution and say it’s better that the pie be destroyed than allowed to exist as a source of dissent. In my (admittedly spotty) knowledge of anthropology, in most traditional pie-gathering societies, if three members of a tribe found a particularly large and choice pie they would be expected to share it with the rest of the tribe, but they would have a great deal of discretion as to how the pie was divided; in practice they’d keep most of it for themselves and their allies.
This is not to say that morality is nothing but arbitrary social convention. Some sets of rules will lead to outcomes that nearly everyone would agree are better than others. But there’s no particular reason to believe that there could be rules that everyone will agree on, particularly not if they have to agree on those rules after the fact.
Slightly OT for this thread: there should always be a prominent link on the right to the open thread. As things are, it gets heavy usage the first couple days of the month, then falls off the bottom of the page before anyone can read most of the comments. Look, it’s gone again already!
I know I’ve said this before, but I think it was on the open thread and it fell off the bottom of the page before anyone read it.
I think it’s probably useful to taboo the word “should” for this discussion. I think when people say you “should” do X rather than Y it means something like “experience indicates X is more likely to lead to a good outcome than Y”. People tend to have rule-based rather than consequence-based moral systems because the full consequences of one’s actions are unforeseeable. A rule like “one shouldn’t lie” comes about because experience has shown that lying often has negative consequences for the speaker and listener and possibly others as well, although the particular consequences of a particular lie may be unforeseeable.
I don’t see how there can be agreement as to moral principles unless there is first a reasonably good agreement as to what constitutes good and bad final states.
There’s a big difference between saying “morality is the product of human minds” and saying “morality is purely arbitrary”. Similarly, there’s a big difference between saying “there are objective reasons why we make the moral judgments we do” and “all moral questions have objective answers which in no way depend on human minds”.
Life is not a zero sum game. I think nearly everyone would agree that it would be advantageous to nearly everyone if one could somehow guarantee that neither one’s self nor one’s loved ones would be killed at the cost of forgoing the ability to kill one’s enemies. I think this fact, not repeated arbitrary assertion, is the basis for the nearly universal belief that “murder is wrong”. I think the fact that, in many societies, refraining from killing those outside one’s own tribe does nothing to prevent those outside the tribe from killing one’s self or one’s loved ones, and not arbitrary bigotry, is the reason that in those societies killing those outside one’s tribe does not count as murder.
I have a question about this picture.
Imagine you have something like a chess-playing program. It’s got some sort of basic position evaluation function, then uses some sort of look-ahead to assign values to the instrumental nodes based on the terminal nodes you anticipate along the path. But unless the game actually ends at the terminal node, it’s only “terminal” in the sense that that’s where you choose to stop calculating. There’s nothing really special about them.
Human beings are different from the chess program in that for us the game never ends; there are no “true” terminal nodes. As you point out, we care what happens after we are dead. So wouldn’t it be true that in a sense there’s nothing but instrumental values, that a “terminal value” just marks a point at which we’ve chosen to stop calculating, rather than saying something about the situation itself?
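A toy sketch of the chess-program picture (everything here is hypothetical; the “game” is just a nested list whose leaves are position values, not a real engine): in depth-limited minimax, the static evaluation gets applied wherever the depth budget runs out, so those “terminal” nodes are terminal only because that’s where we stopped calculating.

    # Depth-limited minimax on a toy game tree. A leaf is a number (the game
    # really ends there); an interior node is a list of children. When depth
    # runs out at an interior node, we fall back on a static evaluation;
    # that node is "terminal" only because we chose to stop calculating.
    def minimax(node, depth, maximizing, evaluate):
        if not isinstance(node, list):       # a true terminal: game over
            return node
        if depth == 0:                       # a "terminal" by fiat
            return evaluate(node)
        children = (minimax(c, depth - 1, not maximizing, evaluate)
                    for c in node)
        return max(children) if maximizing else min(children)

    # A crude stand-in evaluation: average of all leaves under the node.
    def flatten(node):
        return [x for c in node
                for x in (flatten(c) if isinstance(c, list) else [c])]

    def evaluate(node):
        leaves = flatten(node)
        return sum(leaves) / len(leaves)

    tree = [[3, [5, 1]], [[6, 2], 4]]
    print(minimax(tree, 1, True, evaluate))  # shallow: stops at made-up values
    print(minimax(tree, 3, True, evaluate))  # deep: reaches the real leaves

Nothing in the code distinguishes a statically evaluated node from an expanded one except the depth counter: “terminal” describes our calculation, not the position.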