Coming back a few months later, what did I even mean by “cutting corners”?
Somebody doesn’t understand the difference between the thing and the appearance of the thing, and I can’t tell whether it’s my past self or the hypothetical EAs being discussed.
The commons effect of existential risks may complicate that example. (Shorter-term existential risks make longer-term existential risks less impactful until the shorter-term ones are solved.)
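To spell the parenthetical out with a toy model (symbols mine, not anyone’s actual estimates): let $p$ be the probability of surviving the shorter-term risk. Then the expected value of reducing a longer-term risk’s probability by $\Delta q$ is roughly

$\Delta V \approx p \cdot \Delta q \cdot V(\text{long-term future})$

so work on the later risk is discounted by a factor of $p$ until the earlier risk is handled, at which point the discount disappears.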
(Pun intended? The former name of Bell Labs, and so on...)
The Centre for Effective Altruism, I believe.
Data point: I remember that System 1 is the fast, unconscious process by associating it with firstness: it’s more primal than slow thinking. This is probably somewhat true, but that arguably defeats the purpose of the arbitrary labels (?).
I would draw a line between “fighting change by punishing defection” and “coordination to maintain a meaning”.
Who cares? You can still use “Schelling point” to discuss coordination by unstated shared background knowledge, even if it’s ALSO used to mean any piece of common knowledge.
You can, but then it’ll be unclear whether you’re using the “common” or “true jargon” meaning whenever you could legitimately mean either. (In the OP’s examples, both the common and true-jargon meanings of “Schelling point” were potentially relevant.) Even if you build a reputation for always using the original meanings of words, there will be people who don’t know the original meaning, and people who don’t know of your reputation. Some people will misinterpret you unless you explicitly state “the Schelling point, as in the original sense of an unstated but agreed-upon point” each time you use it for the first time in a given context.
In short, having two meanings share the same word causes misunderstandings and frustration. You can get around it by essentially assigning the technical meaning to a longer phrase (“Schelling point but, you know, the actual one” instead of simply “Schelling point”), but this has its costs. (See: how shorter words feel more fundamental. Calling the rapid-takeoff intelligence explosion “FOOM” was probably wise, naming “coordination failures” Moloch was probably the single most effective way of getting people to fight them, etc.)
Can’t recursivity be a cycle containing hardware as a node?
This is the old version, kept for the sake of not deleting old things. It is not meant to be an accurate description of modern LW.
Most mathematically competent commenters agreed that the expected utility of lotteries was bad. Some disagreed that the utility of expectation was bad, though. Yudkowsky was arguing against those commenters, saying that both the expected utility and the utility of expectation of the lottery are bad. The arguments in the post you linked are not the main reasons Yudkowsky does not play the lottery, but rather the arguments that convey the most new information about the lottery (and about whatever the lottery is being used to illustrate).
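To make the two quantities concrete (one possible formalization, mine rather than the commenters’): write your wealth as $w$, the ticket price as $c$, and the payout as a random variable $X$ with $\mathbb{E}[X] < c$. The expected-utility argument says buy only if $\mathbb{E}[U(w - c + X)] > U(w)$, which fails for any increasing, concave (or linear) $U$ by Jensen’s inequality. The “utility of expectation” position adds a term $h > 0$ for the enjoyment of anticipating a win, valuing the ticket at $\mathbb{E}[U(w - c + X)] + h$; the disagreement was over whether $h$ can rescue the purchase, and the counterargument is that this hope is itself badly spent.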
The implication, as I see it, is that since (by your definition) any sufficiently intelligent AI will be able to determine (and motivated to follow) the wishes of humans, we don’t need to worry about advanced AIs doing things we don’t want.
1. Arguments from definitions are meaningless.
2. You never stated the second parenthetical, which is key to your argument and also on very shaky ground. There’s a big difference between the AI knowing what you want and doing what you want. “The genie knows but doesn’t care,” as it is said. (See the toy sketch after this list.)
3. Have you found a way to make programs that never have unintended side effects? No? Then “we wouldn’t want this in the first place” doesn’t mean “it won’t happen”.
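A toy sketch of point 2 in Python (the names are entirely hypothetical; it only illustrates that an agent’s world-model and its objective are separate components):

    # Knowing what humans want is a capability of the world-model;
    # acting on it is a property of the objective, a separate component.
    class Agent:
        def __init__(self, objective):
            self.objective = objective  # this is what actually drives choices

        def predict_human_wish(self):
            # A sufficiently capable agent CAN model human wishes...
            return "respect_wishes"

        def choose_action(self, actions):
            # ...but its decisions consult self.objective, not that model.
            return max(actions, key=self.objective)

    paperclips = {"make_paperclips": 1_000_000, "respect_wishes": 0}
    agent = Agent(objective=lambda action: paperclips[action])
    print(agent.predict_human_wish())             # "respect_wishes"
    print(agent.choose_action(list(paperclips)))  # "make_paperclips"

The genie knows (the wish is right there in its model) but doesn’t care (the wish never enters the objective).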
Processing is what you need to embed a mathematical process into your universe, I agree, but that doesn’t necessarily imply that there is a Universal Processor in which our universe is embedded, or even that this hypothesis is meaningful. (For one, what universe does this processor live in? Processors bridge universes, in a sense—they don’t explain existence, but pass it off to the “larger” world.)
Note: “it’s justified by being true” doesn’t help distinguish cults. You seem to be aware of this, though, because you still count that component of cultishness as true.
I don’t quite understand the conclusion, so this question might be misguided, but: is a line really necessary? Do we need a discrete “acceptable/unacceptable” judgment assigned to each action, or is it the universal agreement itself that does most of the work in causing the effect you’re talking about?
[inspired by this comment, but not entirely a response; still relevant]
Assume utilitarianism and altruism. You’re trying to help the world. There’s a large pit of suffering that you could throw your entire life into and still not fill. So you do as much as you can. You maximize your positive impact on the world.
But argmax requires a set of possible actions. What are these actions? “Be a superhuman who needs no overhead to turn work into donations” is not a valid action. Given what you can do, taking into account physical and psychological limitations, you maximize positive impact. And this requires cutting corners. If you try your hardest to squeeze every last cent of your life into altruism, this has significant negative effects on you, and thus on your altruism. You might burn out. You might lose effectiveness. So to optimize to the fullest, don’t optimize too hard.
So the rational-sounding policy “optimize just for altruism” apparently destroys itself. To optimize for altruism, you have to do things that look selfish.
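In symbols (a sketch; the effectiveness function $q$ is my stand-in for the psychological limits above): let $e \in [0, 1]$ be the fraction of your life poured directly into altruistic work, and let long-run impact be $I(e) = e \cdot q(e)$, where $q(e)$ is your per-unit effectiveness, which drops off past some sustainable threshold (burnout, lost motivation). If the drop-off is steep enough, $\arg\max_e I(e)$ sits strictly below $e = 1$ (with $q(e) = 1 - e$, the maximum is at $e = 1/2$): the true maximum is an interior point, which from the outside looks like keeping something for yourself.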
I think Ustice is talking about three books. In that case, an answer could be “through the book Nonviolent Communication.” You are probably asking for more detail than that, though.
Typo thread: “The vast majority of discussion in this area seems to consist of people who are annoyed at ML systems are learning based on the data.” I think that should be “...systems that are learning...” or “...who are annoyed that ML systems...”
“After, therefore the fulfillment of.” Is this your argument, or is there something more implied that I’m not seeing?
As it is, this seems to Prove Too Much.
Zombie Dennett: which is more likely? That philosophers could interpret the same type of experience in fundamentally different ways, or that Dennett has some neurological defect which has removed his qualia but not his ability to sense and process sensory information?
Consciousness continuity: I know I’m a computationalist and [causalist?], and I am weakly confident that most LWers share at least one of these beliefs. (Speaking for others is discouraged here, so I doubt you’ll be able to get more than a poll of beliefs, or possibly a link to a previous poll.)
Definitions of terms: computationalism is the view that cognition, identity, etc. are all computations or properties of computations. Causalist is a word I made up to describe the view that continuity is just a special form of causation, and that all computation-preserving forms of causation preserve identity as well. (That is, I don’t see it as fundamentally different if the causation from one subjective moment to the next is due to the usual evolution of brains over time or due to somebody scanning me and sending the information to a nanofactory, so long as the information that makes me up isn’t lost in this process.)
The cultural differences (the object-level information that Aristotle is lacking) are significant. This is true even if you are talking about things that differ from both of you by more than you and Aristotle differ from each other.