That missing word: “of”.
Lake
@ Unknown: Well, one reason why our point of view is more valid than theirs is that we exist and they don’t.
In addition, it is probably worth stressing that inclusive fitness is not, strictly speaking, the goal of anything at all. Goals only make sense relative to intentions, values and so forth—the usual accoutrements of mentality. These are all things that we humans (and perhaps some other creatures) possess, but which evolution, and our genes, do not. No minds, you see. Despite appearances.
This said, there might be something to be said for engineering or breeding descendants whose drives are more harmonious than our own. For instance, they might be happier. Still, there’s no particular reason why we should choose to make inclusive fitness the goal of all their striving, as opposed to something else.
Eliezer—wasn’t Jeff’s comment intended to suggest, not that there isn’t a bias, but that the bias may be adaptive? Offhand I can’t imagine quite what edge it might supply, but perhaps some story could be told.
Presumably the advantage of making Jupiter into a person rather than a ball of gas is not simply that we get an extra person to think about, but that it also allows us to explain various natural phenomena in a peculiarly satisfying way—as the traces of intelligible actions. Not that these explanations would have much to recommend them if you seriously wanted to understand the phenomena. But literary writers are not, for the most part, in that business; “poetic truth” is an alienans predication, like “Tennessee whiskey”.
“Savannah poets” is a superb coinage, btw. Is it yours?
@ Caledonian: eh? But why would the ability to suspend one’s social-operations module at will make it boring to look at stories while using that module? And in what sense is one seeing them “directly” when one stops treating them as simulated social interactions?
Perhaps “learning” is the wrong word. But “recognition” seems too restrictive to capture everything that makes a good story good. There’s also surprise—when an author uses the reader’s capacity for recognition against them. Surely you admit that this is pretty much the life-blood of storytelling. And, for that matter, it strikes me that it probably can teach you something—about your own inferential dispositions, if nothing else.
Not to mention a bitchin’ soap opera.
Also, what notion of value do you have in mind, if not something that pushes your primate buttons? And if you’re so down on the pleasures of narrative, why read sci-fi at all? Why not just, you know, read sci?
“… if not something that presses your primate buttons.”
Still waiting.
What, you mean you start finding it everywhere? If only.
You could call it “Overcoming Fun”.
Eliezer’s polemical tone is one of the great strengths of his pedagogical approach, IMO.
Hang on. @ Caledonian and Psy-Kosh: Surely mathematical language is just language that refers to mathematical objects—numbers and suchlike. Precise, unambiguous language doesn’t count as mathematics unless it meets this condition.
Eliezer: Are you looking for a new definition of “fairness” which would reconcile the partisans of existing definitions? Or are you just pointing out that this is a sort of damned-if-you-do, damned-if-you-don’t problem, and that any rule for establishing fairness will piss somebody or other off? If the latter, from the point of view of your larger project, why not just insert a dummy answer for this question—pick any definition that grabs you—and see how it fits with the rest of what you need to work out? Or work through several different obviously computable answers.
As far as it goes, it seems plausible-ish that fairness has to do with equality of something—resources, or opportunity, or utility, or whatever—but I doubt whether there’s any general agreement over what should be equalised, and I don’t see the value of descending to a meta level of discussion to sort the question out. Meta-discussions would have to be answerable to fairness anyway, if they were to be fair, and that looks circular. So why not cut the knot and pick whatever answer is nearest to hand?
I suppose that’s just to second Paul Gowder’s point that the political problem is insurmountable. But I imagine few things would resolve a political problem faster than the backing of an all-powerful supermind.
@Paul: You seem to suggest that we all take the same things to be reasons, perhaps even the same reasons. Is this warranted?
There’s at least one other intuition about the nature of morality to distinguish from the as-preference and as-given ideas. It’s the view that there are only moral emotions—guilt, anger and so on—plus the situations that cause those emotions in different people. That’s it. Morality on this view might profitably be compared with something like humour. Certain things cause amusement in certain people, and it’s an objective fact that they do. At the same time, if two people fail to find the same thing funny, there wouldn’t normally be any question of one of them failing to perceive some public feature of the world. And like the moral emotions, amusement is sui generis—it isn’t reducible to preference, though it may often coincide with it. The idea of being either a realist or a reductionist about humour seems, I think, absurd. Why shouldn’t the same go for morality?
Hear hear to Dynamically Linked’s last paragraph.
@ Ian C. Couldn’t Subhan claim that as a restatement of his own position? His notion of wanting clearly encompasses more than mere whims. Perhaps he would say that a certain subset of desires, objectively grounded in the constitution of the mind, count as moral impulses.
Actually, is Subhan meant to be male? Apologies if not.
I gestured at one possible answer to that question. A situation has a moral dimension if it engages moral emotions—which can presumably be listed.
Re. your last remark, wouldn’t a distinction between premise-circularity and rule-circularity do the trick?
Perhaps I’m being dim, but a prior is a probability distribution, isn’t it? Whereas Occam’s Razor and induction aren’t: they’re rules for how to estimate prior probability. Or have I lost you somewhere?
@ Tiiba # 1: Without wishing to second-guess Eliezer, I’d suggest that his prolonged examination of the buggy, ad-hoc character of human intelligence may be intended to preface a discussion of AI, its goals and methods. After all, the contrast with human intelligence could be illuminating.