“Always go with your first intuition on multiple choice” is advice aimed specifically at students who get anxious during tests. The student will generally select the correct answer (or at least the one most likely to be correct). If they’re somewhat uncertain about it, they’ll start to feel anxious; that anxiety builds over time, producing an increasingly pessimistic assessment of how likely they are to be correct, which produces still more anxiety. This continues until the student either becomes pessimistic enough to believe the original answer wasn’t the best one, or changes it simply to relieve the stress. All of this happens even though no new information has arrived, which implies the change is unlikely to be correlated with correctness and more likely reflects a quirk of human psychology.
In short, a test is an especially bad test case (pun fully intended) for second-guessing yourself, because the bias introduced by anxiety increases over time rather than decreasing.
Which is why, again, I’m suggesting Yudkowsky’s writings on compatibilism. There is a sense in which objective morality exists and a sense in which it doesn’t; there is similarly a sense in which the world is deterministic and a sense in which we have free will, and the appearance of conflict comes from our intuitions being too vague and in need of sharpening.
Depends on what you mean by “illusion” and “ethics.” I’d actually agree that the question “Does an objective code of ethics exist?” is confused in the same way the free-will one is, and that there’s a sense in which it does and a sense in which it doesn’t.
The sense in which it does is twofold. First, codes of ethics can be objectively wrong; for example, any set of ethics that does not attempt to maximize an expected utility function must be inconsistent (see Yudkowsky’s post on the von Neumann–Morgenstern axioms). So there’s a sense in which moral systems can be straight-up bad. Another criterion that can rule out a moral system is strict Pareto inefficiency: if you have two moral systems, and every single agent agrees they would be worse off under one of them than under the other, then you really should chuck out the worse system.
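To make the Pareto criterion concrete, here’s a minimal sketch in Python; the per-agent utility-vector representation and the function name are my own assumptions for illustration, not anything from Yudkowsky’s posts:

```python
# A toy illustration of the strict Pareto criterion described above.
# Assumption (mine, for illustration only): each "moral system" is
# represented by the utility it assigns to every agent, in a fixed order.

def strictly_pareto_dominates(a, b):
    """True if every agent is strictly better off under system `a` than `b`."""
    assert len(a) == len(b), "both systems must cover the same agents"
    return all(u_a > u_b for u_a, u_b in zip(a, b))

# Three agents; every agent does strictly better under system_x.
system_x = [3.0, 5.0, 2.0]
system_y = [1.0, 4.0, 1.5]

print(strictly_pareto_dominates(system_x, system_y))  # True -> chuck out system_y
print(strictly_pareto_dominates(system_y, system_x))  # False
```

Note how demanding strict dominance is: if even one agent is indifferent between the two systems, the criterion doesn’t fire, which is why it rules out relatively few systems on its own.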
However, among the systems that survive those criteria, you’re not going to find a single moral law printed on the fabric of the universe, no matter how hard you look. Try starting with the Euthyphro dilemma and just replace “God” with “the universe.” Even if, in some far-off corner of Alpha Centauri, there actually turned out to be a “moral thermometer” that measured how moral the universe is, and it went up whenever I kicked a little kid in the face, I’d tell that thermometer to **** off. The idea of a “natural law of the universe” is a pretty bad one, given that even if it existed, there would be no reason to follow it when it clearly conflicted with the general human idea of morality.
A brief note: I’m not 100% sold on the many-worlds interpretation (Bohmian mechanics strikes me as similarly plausible), but I won’t discuss that here, since I doubt I’m educated enough to do so at a level that doesn’t just retread old arguments. With that out of the way, let’s assume many-worlds is correct.
Given many-worlds, interpreting making a decision as “choosing your own Everett branch” is incorrect for one simple reason: in any case where your decision depends on something happening at the quantum level, you will simultaneously make every single decision you possibly could have made. There’s a sense in which you’re accidentally importing classical “one world” intuitions into many-worlds; here, the mistake is believing there is only one you, who can make only one decision. The reality is that all possible worlds already exist: everything that has happened or will happen is fully captured by the mathematics of quantum mechanics, and nothing about it can be changed.
Now the question becomes the same as for any deterministic universe: whether determinism, and the fact that every decision you will ever make is fully determined by the mathematics, actually makes ethics pointless. Here I suggest looking back at Yudkowsky’s post on dissolving the question of free will, and then posting your answer here once you think you’ve got it. It’s a good exercise; it took me a while to figure it out myself. I look forward to seeing your answer.
You’re right, although 1850–1900 captures the Second Industrial Revolution.