I think his issue is that there are multiple attractors.
I’m curious how you made the images/graphs for this post. (Clarification: not this one in particular; it’s a common style on LW. It’s entirely possible that this is very basic knowledge, in which case I apologize for being off-topic. It’s just that I’m interested in making a post of this type, and I don’t yet know how, or how to find out how, to make a graph that isn’t scanned from paper or produced in an Excel-style program. In particular, I can’t express this graph style in words, so I can’t google it.)
“...Shakespear, by some reasonable...”
There is a quantum mechanical property you may not be aware of: it is incredibly hard, if not impossible, to cause two worlds to merge. (Clarification: it may be possible when dealing with microscopic superposed systems, but I suspect it would take a god to merge two worlds that have diverged as much as you describe.) There is regular merging of “worlds”, but that occurs at the quantum level, between “worlds” that don’t differ macroscopically.
This is because any macroscopic difference between two worlds is sufficient for them to not be the same, and it’s not feasible to put everything back the way it was.
However, this does not apply to merging people. I suspect that most theories of personhood that allow this are not useful, though. (Why are the two near-copies not the same person, but you remain the same person after losing the small memory? That’s a strange definition of personhood.) That is, you can define personhood any way you like, and you can hack this—for example, to remove “yourself” from bad worlds. (Simply define any version of you in a bad world to not be a person, or to be a different person.) But that doesn’t mean that you can actually expect the world to suddenly become good. (Or you can, but at that point everyone starts ignoring you.)
You may already know this, but you can click “Save as Draft” to move this post to your drafts. You may then be able to delete it. (It’s counterintuitive, I know—save-as usually creates a copy.)
I don’t think you’ve understood this article if that’s your response. The point of the article is that real human beings can in fact set up GoFundMe pages, and many more things, but economic models rarely include all these options. It is only through restricting the options to be considered that we can model unboundedly rational agents. Stuart Armstrong is trying to raise awareness of the limitations of restricted-option models.
(I’m not saying that to be rude, but because I think people can benefit from considering the possibility “I have completely misunderstood what this person is trying to tell me”, and responses like yours are mostly made by people who have completely misunderstood. There’s always the possibility that I’m the one who has completely misunderstood; if so, I’d be glad to have the intended meaning of your post explained to me, since I’m evidently not seeing it.)
I previously tried to do something similar—define an objective complexity upper bound by adding the complexity within a UTM to the complexity of the UTM. I don’t recall how I defined the complexity of a UTM—I don’t think it was recursive, though. Perhaps I used an average over all UTMs, weighted by the complexity of each UTM? But that still requires us to already have a definition of complexity for UTMs! (Unless there’s a unique fixed point in the assignment of complexity to UTMs, but then I’d suspect that function to be non-computable. And uniqueness doesn’t seem very likely anyway.)
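To make the circularity explicit, here is a rough sketch of the kind of definition I was reaching for (the notation is mine, reconstructed after the fact, and not standard):

$$C(x) = \min_{U}\big[\,K_U(x) + C(U)\,\big]$$

where $K_U(x)$ is the length of the shortest program that outputs $x$ on the UTM $U$, and $C(U)$ is the complexity assigned to $U$ itself. The problem is that $C$ appears on both sides: to score a UTM you already need the very measure you are trying to define. The averaged variant replaces the minimum with an average over UTMs weighted by (some decreasing function of) $C(U)$, but the weights still depend on $C$, so the circularity is the same; hence the hope for a fixed point, and my doubt that one exists or is computable.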
(I originally posted this as a response to the wrong post.)
Some feedback on the new features:
It doesn’t seem possible to retract answers. This seems to be a bug, especially since attempting to retract them results in the listed number of comments decrementing (eventually to negative values). This decrement vanishes upon reloading.
This post describes certain guidelines for answers. The commenting guidelines are posted above the comment box. Could the answering guidelines be posted similarly?
In the code, “switch” is misspelled as “swich”.
I am suspicious of claims that ideological differences arise from fundamental neurological differences—that seems to uphold the bias toward a homogeneous enemy. (That doesn’t mean it’s impossible, but that it’s more likely to be falsely asserted than claims that we’re not biased toward.) Could you link to the studies that you say support your statement?
It seems that there are two questions here: what “humanity’s goals” means, and what “alignment with those goals” means. An example of an answer to the former is Yudkowsky’s Coherent Extrapolated Volition (in a nutshell, what we’d do if we knew more and thought faster).
Edit: Alternatively, in place of “humanity’s goals”, this might be asking what “goals” itself means.
Edit: This might be too simple (to be original and thus useful), but can’t you just define “alignment” to be the degree to which the utility functions match?
Perhaps this just shifts the problem to “utility function”—it’s not as if humans have an accessible and well-defined utility function in practice.
Would we want to build an AI with a similarly ill-defined utility function, or should we make it more well-defined at the expense of encoding human values worse? Is it practically possible to build an AI whose values perfectly match our current understanding of our values, or will any attempted slightly-incoherent goal system differ enough from our own that it’s better to just build a coherent system?
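To be concrete about what “match” could mean (this is only one possible formalization, sketched to make the proposal less vague, not something I’m committed to): utility functions are only defined up to positive affine transformation, so the degree of match would have to be something like

$$\operatorname{align}(U_H, U_{AI}) = -\min_{a>0,\;b}\ \mathbb{E}_{s}\Big[\big(U_H(s) - a\,U_{AI}(s) - b\big)^2\Big]$$

i.e. how closely the AI’s utility can be rescaled to agree with the human’s over the relevant states $s$. All of the worries above about humans not having an accessible, well-defined utility function then land on the $U_H$ term.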
Historical significance is a social reality just as much as short-term interest is.
And I think there’s a difference between art-markets-in-the-abstract and art-markets-as-currently-implemented. The former is certainly necessary for professional artists to function in a capitalist society, but I don’t think the latter is. And it’s the latter that the OP seems to be arguing against—not the idea of selling art in general, but the current cabal that rigs the prices.
Vaniver’s answer covers instrumental rationality. That is probably the easiest way to persuade the average person that rationality is worth considering.
However, for some people, improving their models of the world is its own reward. If that is true for you, then epistemic rationality may appeal as much as instrumental rationality, if not more.
This requires a particular viewpoint on curiosity. If you are only looking for a rush of understanding, then mysticism may be just as effective. This requires you to value not just the [simplicity? wideness?] of your model but also its truth. One possible motivation for this value could be that true models are both interesting and useful, while false models are only interesting. (That is, a complex goal system [one with many subgoals] is more likely to produce this kind of outward desire.)
You’ve still misunderstood. I’m worried about LW being associated with the alt-right because of the terminological overlap. I’m not concerned about being personally tarred with that.
I agree that this might be something that is counterproductive and itself harmful to discuss, though. And since it seems that people are aware and thinking about this, I don’t have much of a reason to ring the alarm bell anymore. I won’t continue this or bring it up elsewhere in public comments.
I think you are misunderstanding me, and I’d like to clarify some things.
I did not read the great-grandparent as a dog whistle. I did not and do not think that paperoli’s use of the term NPC indicates that they are alt-right. I think that it will indicate to other people that they are alt-right.
My politics-detector is not being used to directly indicate “do I like this?” to me. Rather, I am using it as a proxy for “will people in general dislike this?”, combined with the knowledge that certain political standpoints are very costly to seem to hold.
I only brought up the political aspect because I saw the alt-right as being relevant to LW in that we should be trying, with some nonzero amount of effort, to avoid being associated strongly with them. As a result, I considered “avoid comments like this” to be a net positive action. I am now less certain that it’s worth the cost, but I think it was worth stating that the alternative (ignoring terminological overlap) does have a cost.
Edit: and the Visceral Repulsion and the Terminological Overlap referred to different parts of the great-grandparent—“some people don’t have souls” and “NPC” respectively.
Are the downvotes for the weasel reasoning around “politicization”, or for incorrectly asserting that LW is at risk of being tarred with “alt-right”, or for overestimating the harmfulness of being tarred with “alt-right”, or for something I haven’t thought of? I am trying to see where the disagreement is, but I only see mutual misunderstanding.
In a nutshell, where’s the disagreement? What specific things do you think I am incorrect about? I would like to engage with your beliefs but I don’t know where they differ from mine.
This isn’t politicization. It’s already politicized. I’m responding to the existing politicization of the word “NPC”.
I will note that I made two distinct points in the grandparent, and failed to distinguish them enough. The content alone caused my visceral repulsion, while the similarity to the alt-right caused my instrumental desire to avoid seeming more like the alt-right. This is motivated by the current association between LW and the alt-right, which I worry could become as significant in shaping outsiders’ views as the Phyg term did. Perhaps this is unlikely, but I’d rather not risk it for the sake of a joke.
In particular, it’s not my politics detector I’m worried about. It’s that this could contribute to LW setting off outsiders’ politics detectors.
And we don’t have to worry about expunging every shibboleth because we only need to do this for relatively well-known shibboleths of movements that we are considered to be associated with but which we would rather not be. It’s not that steep of a slippery slope.
If we’re talking about politics, my standpoint is pragmatic, not moral. I’m not claiming it’s fair that we should have to avoid terms found in hated screeds; I’m claiming it’s instrumentally optimal to do so.
I know that. Most people don’t.
My post combined two points: I think that is Viscerally Repulsive because of the content, and I think it’s Harmful For LW because of the similarity to the alt-right. I don’t think I separated the two enough. I understand that LW didn’t borrow the term from the alt-right, and I don’t think it would be a bad thing even if it had. But using the same terminology as the alt-right is costly, and I don’t think we should pay those costs for the sake of bad jokes.
(If anyone thinks I’m backpedaling, notice that I said this in the grandparent, just not spelled out as much.)
I think that the skill needed to avoid Fake Reductions is similar to the skill needed to program a computer (although at a much higher level). Students who are learning to program often give their variables meaningful names, which let human readers understand what they are for, and then assume that the computer will understand as well. To get past this, they must learn that the entire algorithm has to be written into the program itself. An English explanation of an algorithm piggybacks on your internal understanding of it: when reading the English, you can draw on that understanding, but the computer has no access to it.
In a nutshell, they need to go past labels and understand what structure the label is referring to.
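A toy example of the failure mode (hypothetical code, written only to illustrate the point): a student might write something like the first function below and feel that the name has done the work, when in fact only the second function contains the algorithm.

```python
# The name promises a sorted list, but nothing in the body does any sorting.
# A human reader fills in the meaning from the label; the computer cannot.
def sort_numbers(numbers):
    sorted_numbers = numbers  # "sorted_numbers" is just a label, not an operation
    return sorted_numbers


# Here the structure the label refers to is actually spelled out
# (a simple insertion sort, written explicitly rather than assumed).
def sort_numbers_for_real(numbers):
    result = []
    for x in numbers:
        # Find where x belongs among the items already placed.
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result


print(sort_numbers([3, 1, 2]))           # [3, 1, 2]  (the label did nothing)
print(sort_numbers_for_real([3, 1, 2]))  # [1, 2, 3]
```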