I don’t have any principled objection to this policy, other than that, as rationalists, we want to have fun, and this policy makes LW less fun.
It follows, then, that someone who advocates unbounded freedom of contract in a democratic republic wants the state to be such a third party and to have no objections to any contract terms.
If people enter into such contracts, it’s because they prefer to be in them. Why do your preferences override theirs?
I don’t see how this is relevant. People would prefer not to be able to defect in a prisoner’s dilemma—that’s their own preference.
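To make that point concrete, here is a minimal sketch of a prisoner’s dilemma (the payoff numbers are the standard illustrative ones, not from the original comment), showing why both players can prefer a situation in which neither is able to defect:

```python
# Standard prisoner's dilemma payoffs (illustrative numbers): T > R > P > S.
# PAYOFF[(my_move, their_move)] is my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,  # S: sucker's payoff
    ("defect",    "cooperate"): 5,  # T: temptation to defect
    ("defect",    "defect"):    1,  # P: punishment for mutual defection
}

def best_response(their_move):
    # Defecting strictly dominates: it pays more whatever the other player does.
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# Yet both players do better if neither *can* defect:
# mutual cooperation (3 each) beats mutual defection (1 each),
# which is why agents can prefer to give up the option to defect.
assert PAYOFF[("cooperate", "cooperate")] > PAYOFF[("defect", "defect")]
```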
But not everyone who would have gotten a slavery contract would get a work-for-money contract. Also, while one side (the slaves/employees) is being made better off, the other side is being made worse off.
The first philosopher sounds like an egoist trying to convince altruists. The second philosopher sounds like a sophisticated egoist trying to convince vulgar egoists. I have to question the terminology, though—the goal of egoists is to win, so “acting selfishly” is whatever behavior benefits the egoist, even if it requires benefiting others.
Solipsism is my problem and mine alone.
For what it’s worth, personal experience tells me otherwise.
I suggest self-modifying to remove your deontology module.
What exactly are selfish motivations?
What’s wrong with doing things for your own enjoyment if you value them more highly than the well-being of strangers?
Isn’t the well-being of strangers a component of your well-being? (Assuming you care about them.)
Some of Huemer’s arguments against Objectivism are good (particularly the ones about the a priori natures of logic and mathematics), but his arguments against the core of Objectivism (virtue ethical egoism) fall short, or at best demonstrate why Objectivism is incomplete rather than wrong.
However, the goals of egoists do look different from the goals of altruists, at least altruists as Rand defined them.
While her villains are somewhat exaggerated in the sense that people in power usually don’t think in those terms (though their rhetoric does sometimes sound similar), in my experience there is a good number of ordinary people who think quite similarly to her villains. Rand’s exaggeration is primarily that it is rare to find all of the negative traits of her villains in people who do morally objectionable things, but at least a few of those traits are there.
That’s somewhat beside the point, though. Many people whom Rand would describe as altruists are not like the villains of her books in that they generally don’t want to force others to obey their will (at least not explicitly). Instead, their personal behavior is self-harming (inappropriate feelings of guilt, lack of assertiveness, belief that the desires of others are more important than their own, desire to please others to the point that the agent is unhappy, acting out of duty in the deontological sense, genuine belief in Divine Command, etc.). Altruism is necessarily bad, but altruists are not necessarily people who harm others; it is possible and common for their behaviors/beliefs to mainly harm themselves.
Rand’s villains are altruists, but not all Randian altruists are villains; many are victims of negative societal norms, cognitive distortions, bad parenting, etc.
The idea of a priori knowledge is not that it’s intuitive, but that it does not depend on experience to be conceivable. Though addition may be hard to teach without examples, it abstractly makes sense without reference to anything in the physical world. Similarly, the truth of the statement “a bachelor is an unmarried man” does not require any experience to know; its truth comes from the definition of the word “bachelor”.
this would make it difficult to explain how we could care about anyone else’s happiness—how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction
And why should we actually treat people as “ends in themselves”? What’s bad about treating everything except one’s own happiness as instrumental?
In this context, what does “more altruistic” (as in 1) mean? Does it mean that you want to change your beliefs about what is right to do, or that given your current beliefs, you want to give more to charity (but, for whatever reason, find it difficult to do so)? If it’s the former, it seems contradictory—it’s saying “it’s right for me to be more altruistic than I currently am, but I don’t believe it”. If it’s the latter, the transition between 1 and 2 wouldn’t happen, because your belief about the optimum level of altruism either wouldn’t change (if you are currently correct about what the optimal amount of altruism is for you) or it would change in a way that would be appropriate based on new information (maybe giving more to charity is easier once you get started). I can see your estimation of your optimum level of altruism changing based on new information, but I don’t see how it would lead to a transition such as that between 1 and 2. Even if charity is very easy and very enjoyable, it doesn’t follow that you should value all humans equally.
People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there were a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do: not because it’s good for anyone, or because Divine Clippy would smite them for not doing it, but simply because morality demands that they do it. And so they choose to do it, even if they hate it.
Long-time lurker, first-time poster. I’m 21, male, and a college student majoring in economics and minoring in CS. I first heard of Eliezer Yudkowsky when a couple of my friends discovered Harry Potter and the Methods of Rationality two years ago. I started reading it and enjoyed it immensely at first, but as the plot eclipsed what I’d call the “cool tricks”, I became less interested and dropped it. More recently, a different friend linked me to Intellectual Hipsters. After reading it, I read several sequences and was hooked.
My journey to rationality was started by my parents (both of whom are atheists with degrees in STEM fields). I was provided with numerous science books as a child, and I was taught the basics of the scientific method, as well as encouraged to think analytically in general. They also introduced me to science fiction. I grew up in a heavily religious part of the US, so I frequently had to defend my beliefs. Then I discovered what people call “arguing on the Internet”, which I found I enjoy. That caused me to refine and develop my beliefs.
My current beliefs: I’m a quasi-Objectivist (in the Ayn Rand sense), though politically I’m a classical liberal (pragmatic libertarian). I’m not particularly interested in AI or cryonics (though I support transhumanism). I’m a compatibilist (free will and determinism are not mutually exclusive). I think technological and scientific progress will continue to reduce limitations on humans, and that’s a good thing.