Indeed, even knowing that in general I’m not a very jealous person, I was surprised at my own reaction to this thread: I upvoted a far greater proportion of the comments here than I usually do. I guess I’m more compersive than I thought!
There’s a specific failure-mode related to this that I’m sure a lot of LW has encountered: for some reason, most people lose 10 “agency points” around their computers. This chart could basically be summarized as “just try being an agent for a minute sheesh.”
I wonder if there’s something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an “NPC death spiral”? It doesn’t quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.
A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property.
I’d suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these “drives” or “instincts.”) These basic Agents sometimes work together, like people do, toward common goals, or override one another in pursuit of competing goals.
Agency, then, is a bit like magnetism—it’s a property that arises from your Agent-colony when you’ve got them all pointing the same way; when “enough of you” wants some particular outcome that there’s no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
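A toy rendering of that model (the drive names, the majority threshold, and the function itself are all invented here for illustration, not anything from the comment): each basic Agent pushes toward its own outcome, and the colony only abstracts into a single Agent when enough of them align.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class BasicAgent:
    """One drive/instinct, pushing toward a single concrete outcome."""
    drive: str
    preferred_outcome: str


def as_single_agent(colony, threshold=0.5):
    """If enough of the colony points the same way, the whole collection
    can be abstracted as one large Agent with that goal; otherwise the
    drives cancel out and no coherent 'agency' emerges."""
    votes = Counter(a.preferred_outcome for a in colony)
    outcome, count = votes.most_common(1)[0]
    alignment = count / len(colony)
    return (outcome, alignment) if alignment > threshold else (None, alignment)


colony = [
    BasicAgent("hunger", "cook dinner"),
    BasicAgent("status", "finish the project"),
    BasicAgent("curiosity", "finish the project"),
    BasicAgent("comfort", "finish the project"),
]
print(as_single_agent(colony))  # ('finish the project', 0.75)
```

The threshold is doing the work of the magnetism analogy here: scattered domains cancel out, aligned ones sum to a macroscopic pull.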
This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I’m sure anyone who has watched some schlock sci-fi is familiar with: you will only believe someone when they tell you you are caught in a time-loop if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course.
Since only PCs can save the world (nobody else bothers trying, after all), nobody will believe you are currently carrying the world on your shoulders if they think you’re an NPC. This seems dangerous somehow.
I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so). The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond the equivalent of an average human to be considered “competent” at its job.
Not wanting to give anything away, I would remind you that what we have seen of Harry so far in the story was intended to resemble the persona of an 18-year-old Eliezer. Whatever Harry has done so far that you would consider to be “Beyond The Impossible”, take measure of Eliezer’s own life before and after a particular critical event. I would suggest that everything Harry has wrought until this moment has been the work of a child with no greater goal—and that, whatever supporting beams of the setting you feel are currently impervious to being knocked down, well, they haven’t yet had a motivated rationalist give them even a moment of attention.
I mean, it’s not like Harry can’t extract a perfect copy of Hermione’s material information-theoretic mass (both body and mind) using a combination of a fully-dissected Time-Turner, a Pensieve containing complete braindumps of everyone else she’s ever interacted with, a computer cluster manipulating the Mirror of Erised into flipping through alternate timelines to explore Hermione’s reactions to various hypotheticals, or various other devices strewn about the HP continuum. He might end up with a new baby Hermione (who has Hermione’s utility function and memories) whom he has to raise into being Hermione again, but just because something doesn’t instantly restore her doesn’t mean it isn’t worth doing. Or he might end up with a “real” copy of Hermione running in his head, which he’ll then allow to manifest as a parallel-alter, using illusion charms along with the same mental hardware he uses for Occlumency.
In fact, he could probably have done either of those things before, while completely lacking the motivation he has now. With it? I have no idea what will happen. A narrative Singularity-event, one might say.
Would you want to give the reader closure for the arc of a character who is, as the protagonist states, going to be coming back to life?
Personally, this reminds me more than anything of Crono’s death in Chrono Trigger. Nobody mourns him—mourning is something to do when you don’t have control over space and time and the absolute resolve to harness that control. And so the audience, also, doesn’t get a break to stop and think about the death. They just hurl themselves, and their avatar, face-first into solving it.
Why not? Sure, you might start to recurse and distract yourself if you try to picture the process as a series of iterative steps, just as iteratively building any other kind of infinite data structure would, but that’s what declarative data structure definitions were made for. :)
Instead of actually trying to construct each new label as you experience it, simply picture the sum total of your current attention as a digraph. Then, when you experience something, you add a label to the graph (pointing to the “real” experience, which isn’t as easily visualized as the label—I picture objects in a scripting language’s object space holding references to raw C structs here). When you label the label itself, you simply attach a new label (‘labelling’) which points to the previous label, but also points to itself (a reflexive edge). This would be such a regular occurrence in the graph that it would be easier to just visualize such label nodes as being definitionally attached to root labels, and thus able to be left out of any mental diagram, in the same way hydrogen atoms are left out of diagrams of organic molecules.
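For the visually inclined, a minimal Python sketch of that digraph, with invented names (Experience, Label, LABELLING) standing in for the pieces described above; the standing ‘labelling’ node with its reflexive edge is what lets the structure be declared once instead of re-built at every step.

```python
class Experience:
    """Opaque stand-in for the raw experience: the thing a label points at
    but which isn't as easily visualized as the label itself (the 'raw C
    struct' behind a scripting-language object)."""
    def __init__(self, payload):
        self.payload = payload


class Label:
    """A node in the attention digraph."""
    def __init__(self, name, target=None):
        self.name = name
        self.edges = [] if target is None else [target]  # label -> experience


# Declared once, not re-constructed each time you notice yourself labelling:
# a meta-label with a reflexive edge (it points at itself).
LABELLING = Label("labelling")
LABELLING.edges.append(LABELLING)


def label(name, experience, graph):
    """Attach a label for an experience to the attention graph. The
    'labelling' meta-label is attached definitionally, so it can be left
    out of any mental diagram, the way hydrogen is left out of
    organic-chemistry diagrams."""
    node = Label(name, target=experience)
    LABELLING.edges.append(node)  # 'labelling' points at every label
    graph.append(node)
    return node


# Usage: noticing an itch; the noticing-of-the-noticing is already covered
# by the standing 'labelling' node.
attention = []
label("itching", Experience("left ankle"), attention)
```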
Actually, that brings up an interesting point—is the labelling process suggested here inherently subvocally-auditory? Can we visualize icons representing our experiences rather than subvocalizing words representing them, or does switching from Linear to Gestalt change the effect this practice has on executive function?
In the sociological “let’s all decide what norms to enforce” sense, sure, a lack of “morality” won’t kill anyone. But in the more speculative-fictional “let’s all decide how to self-modify our utility functions” sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one’s might.
What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function, so if they manage to argue away their morality, then, by gum, they’ll stop cringing! It seems you first have to guide people into realizing that they can’t just consciously change what they instinctively cringe about, before they’ll accept any argument about what they should be consciously scorning.
Er, yes, edited.
He’s quite prepared in a Hero’s Journey sense, though. In Harry’s own mind, he has lost his mentor. Thus, he is now free to be a mentor. And what better way to grow, as a Hero and über-rationalist, than to teach others to do what you do?
Of course, Harry would say that he’s already doing that with Draco—but in the same way that he usually holds back his near-mode instrumental-rationalist dark side, he’s holding back the kind of insights that Draco would need to think the way Harry thinks; Harry is training Draco to be a scientist, but not an instrumental rationalist, and therefore, in the context of the story, not a Hero. (To put it another way: Draco will never one-box. He’s a virtue-ethicist who is more concerned with “rationality” as just another virtue than with winning per se.)
Mentoring Hermione would be an entirely different matter: he would basically have to instill a dark side into her. Quirrell taught Harry how to lose—Harry would have to teach Hermione how to win.
If Eliezer has planned MoR as a five-act heroic fantasy, it will probably go like this: usually, in a five-act form, acts 4 and 5 mirror, in another character, the Hero’s development from acts 2 and 3, for the purposes of re-examining the (developed, and now mostly stagnant) Hero’s growth and revealing by juxtaposition what using that particular character as Hero brought to the journey.
It seems more likely to be a three-act form at this point, though, with Azkaban as the central, act-2 ordeal. That’s not to say the story is more than half-over already; Harry has just found his motivation for acting instead of reacting (to change the magical world such that Azkaban is no longer a part of it).
this kind of question-dissolving is not the standard, evolution-provided brain pathway.
Hawkins would agree.
Whatever substrate supports the computation inscribing your consciousness would be necessarily real, in whatever sense the word “real” could possibly have useful meaning. (“I think; thinking is an algorithm; therefore something is, in order to execute that algorithm.”)
Interestingly, proposing a Tegmark multiverse makes the deepest substrate of consciousness “mathematics.”
We’re built to play games. Until we hit the formal operational stage (at puberty), we basically have a bunch of individual, contextual constraint solvers operating mostly independently in our minds, one for each “game” we understand how to play—these can be real games, or things like status interactions or hunting. Each one is essentially a separately-trained decision-theoretic agent.
The formal operational psychological stage signals a shift where these agents become unified under a single, more general constraint-solving mechanism. We begin to see the meta-rules that apply across all games: things like mathematical laws, logical principles, etc. This generalized solver is expensive to build, and expensive to run (minds are almost never inside it if they can help it, rather staying inside the constraint-solving modes relevant to particular games), but rewards use, as anyone here can attest.
When we are operating using this general solver, and we process an assertion that would suggest that we must restructure the general solver itself, we react in two ways:
First, we dread the idea. This is a shade of the same feeling you’d get if your significant other said, very much out of the blue and in very much the sort of tone associated with such things, “we need to talk.” Your brain is negatively reinforcing, all at once, all the pathways that led you here, as far back as it can trace the causal chain. Your mind reels, thinking “oh crap, I should have studied [1 day ago], I shouldn’t have gone out partying [1 week ago], I should have asked friends to form a study group [at the beginning of the semester], I never should have come to this school in the first place… why did I choose this damn major?”
Second, we alienate ourselves from the source of the assertion. We don’t want to restructure; not only is it expensive, but our general solver was created as a product of the purified intersection of all experiments that led to success in all played games. That is to say, it is, without exception, the set of the most well-trusted algorithms and highly-useful abstractions in your brain. It’s basically read-only. So, like an animal lashing out when something tries to touch its wounds, our minds lash out to stop the assertion from pressing too hard against something that would be both expensive and fruitless to re-evaluate. We turn down the level of identification/trust we have with whoever or whatever made the assertion, until they no longer need to be taken seriously. Serious breaches can cause us to think of the speaker as having a completely alien mental process—this is what some people say of the experience of speaking with sociopathic serial killers, for example.
Of course, the mind can only implement the second “barrier” step when the assertion is associated with something that can vary on trust, like a person or a TV program. If it comes directly as evidence from the environment, only the first reaction remains, intensifying as you internalize the idea that you may just have to sit down and throw out your mind.
I would say that it is not that we want essences in our sexuality, but that gender and sexuality are essentialist by nature: the sexual drive is built on top of the parts of our brains that essentialize/abstract/encapsulate, and so reducing the concept would involve modifying the human utility function to desire the parts, rather than the pretended whole.
Or, to put it another way: a heterosexual blegg is not 50% attracted to something with 50% blegg features and 50% rube features; it is attracted only to pure rubes, and the closer something is to being a rube, without exactly being a rube, the less attractive it is. This is basically the Uncanny Valley at work: some of our drives want discrete gestalts, and the harder they have to work to construct them, the less favorably they evaluate the raw material they’re constructing them from.
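A toy curve, with entirely made-up constants, just to make that shape concrete: weak, sub-proportional interest in mixed cases, a valley for near-misses, and full attraction only at the pure gestalt.

```python
def attraction(rube_ness: float) -> float:
    """rube_ness in [0.0, 1.0]; 1.0 is a pure rube, 0.0 a pure blegg.
    The numbers are invented; only the shape of the curve matters."""
    if rube_ness == 1.0:
        return 1.0  # a clean, discrete gestalt: full attraction
    # Mixed cases earn only weak, sub-proportional interest (50% rube
    # features is nowhere near 50% attraction), and the penalty term
    # spikes for near-misses: the Uncanny Valley just short of a rube.
    return max(0.0, 0.6 * rube_ness - 0.7 * rube_ness ** 8)


for r in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"{r:.2f} -> {attraction(r):.2f}")
# 0.00 -> 0.00, 0.50 -> 0.30, 0.90 -> 0.24, 0.99 -> 0.00, 1.00 -> 1.00
```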
It’s pretty common, though. You wanted the other people reading to think of you as clever, and considered that to be “worth” making the author feel a bit bad. This is what the proxy-value of karma, as implemented by the Reddit-codebase discussion engine of this site, reflects: the author can only downvote once (and even then they are discouraged from doing so, unlike with, say, a Whuffie system), but the audience can upvote numerous times.
Thinking back, I’ve had many discussions on the Internet that devolved into arguments, where, although my interlocutor was trying to convince me of something, I had given up on convincing them of anything in particular, and was instead trying to convince any third parties reading the post that the other person was not to be trusted, and that their advice was dangerous—at the expense of making myself seem even less trustworthy to the person I was nominally supposed to be convincing. This is what public fora do.
The errors of others, or the errors of those of superior social ranking? Do Korean teachers refrain from correcting students?
This is an example of Conservation of Detail, which is just another way to say that the contrapositive of your statement is true: if you don’t need to take something in a game, then the designer won’t have bothered to make it take-able (or even to include it).
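Spelled out, with “take-able” and “needed” standing in for the parent claim’s terms, the two statements are logically the same statement:

\[
(\text{take-able} \rightarrow \text{needed}) \;\equiv\; (\lnot\,\text{needed} \rightarrow \lnot\,\text{take-able})
\]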
I always assume that there’s all sorts of stuff lying around in an RPG house that you can’t see, because your viewpoint character doesn’t bother to take notice of it. It might just be because it’s irrelevant, but it might also be for ethical reasons: your viewpoint character only “reports” things to you that his system of belief allows him to act upon.
This seems to track with Eliezer’s fictional “conspiracies of knowledge”: if we don’t want our politicians to get their hands on our nuclear weapons (or the theory for their operation), then why should they be allowed a say in what our FAI thinks?
In such cases, it more often than not seems to me that the arguer has arrived at their conclusion through intuition, and is now attempting to work backward to defensible arguments, even though those arguments would not convince them if they didn’t first have the intuition.