A good question to keep in mind is how much real power the electorate has, as opposed to entrenched bureaucrats or de facto oligarchies.
Question. I admit I have a low EQ here, but I'm not sure whether 4) is sarcasm. It would certainly make a lot of sense if "I've been glad to see in this thread that we LWers do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller." were sarcasm.
I would have said we had information on 2), but I’ve made so many wrong predictions about Donald Trump privately that I think my private opinion has lost all credibility there. 1) makes sense.
I can see why you might be afraid of war breaking out with Russia, but why do you consider Islam a major threat? Maybe you don't and I'm misinterpreting you, but given how little damage terrorist attacks actually do, isn't Islam a regional problem to which the West badly overreacts?
I was trying to say with my second paragraph that we specifically cannot be sure about that. My first paragraph was simply my best effort at interpreting what I think hairyfigment thinks, not a statement of what I believe to be true.
From my vague recollections I think the idea is worth looking up one way or the other. After all, a massive portion of modern culture is under the impression that there are no gender differences, and I can attest to other instances of clear major misconceptions throughout history. But I don't have any specific knowledge about the Romans.
Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search:
A: “I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws.”
B: “Yes, but is that a cat?”
Which could eventually lead back to B saying:
B: "Yes, you've said all these things, but it basically comes back to the claim that a cat is a cat."
Maybe we should abandon the objectivity requirement as impossible. As I understand it, this is in fact core to Yudkowsky's theory: an "objective" morality would be the stone tablet he refers to as something to ignore.
I'm not entirely on Yudkowsky's side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct, and so the resolution to any ethical question is "What do I want?". There is the prospect of coordination through shared moral wants, but there is equally the prospect of coordination through shared selfish wants. Ideas like "the good of society" or "objective ethical truth" are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact at all.”: Assumes that there is such a thing as an objective solution, if implicitly.
“The difference is not between two cars, yours and mine, but between a passegner ship and a cargo ship, built for two different purpose and two different class of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though its a great revelation, but its equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistc.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
The Open Question argument is theoretically flawed because it leans too heavily on definitions (see this website's articles on how definitions don't work that way, specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/).
The truth is that humans have an inherent instinct towards seeing "Good" as an objective thing, an instinct that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as "good".
But although I am not a total supporter of Yudkowsky's moral theory, he is right that humans want to do good regardless of some "tablet in the sky". Those who define terms try to resolve ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This contradicts human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.
The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with “Is it good?”.
I think hairyfigment believes that the Romans (and, in the most coherent version of his claim, you would have to say both men and women) were under misconceptions about the nature of male and female minds, and that "a sufficiently deep way" would mean correcting all those misconceptions.
My view is that we really can’t say that as things stand. We’d have to know a lot more about the Roman beliefs about the male and female minds, and compare them against what we know to be accurate about male and female minds.
On a purely theoretical level (which is fun to talk about, so I think worth talking about) I would like to see one of the high-status and respected members of the rationalist movement (Yudkowsky, Hanson, etc.) in power. They'd become corrupt eventually, but do a lot of good before they did.
On a practical level, our choices are the traditional establishment (which has shown its major flaws), backing Trump, or possibly some time in the future backing Sanders. Unless somebody here has a practical way to achieve something different, that’s all we have.
(EDIT: For what it's worth, I base my trust partly on their works, partly on their theories of rationality, and partly on the fact that reviewing ideas in far mode for so long has them "nailed" to policies. Without, say, an implacable Congress in their way, I think they'd do enough good to outweigh their inevitable corruption.)
What is this even? I don’t get it.
Got it. Thanks.
cousin_it, if you're still paying attention: I'm curious why you think this about Eliezer.
Major question. Where do you fit the kind of truth that comes from realising an idea is incoherent, and therefore must be wrong?
(For clarity, my view is that the whole notion of ‘affective truth’ is just plain wrong, but I have nothing to say on that which hasn’t been already said)
Good to know, but does that research clarify whether happiness is overall higher or lower in the long run?
I can believe that that's true for a significant portion of humanity: that they would choose to have children even knowing it would be bad for their happiness in the long run. It isn't true for me, though, and there are large numbers of people for whom it isn't (or else childlessness in the West wouldn't have risen so much).
Why is that?
I have an independent income. I demand a transfer, and if I don’t get it I quit.
If I go on about it enough in conversation, people will have to realise. I won't make it explicit directly to them, but their realising will discourage others.
Because it makes it obvious to people that I’m taking my policy seriously.