It’s interesting that those oh-so-advanced humans prefer saving children over adults, even though there no longer seem to be any limits on natural lifespan.
At our current tech level this preference can make sense, because adults have less lifespan left. But without limits on natural lifespan (or age-related neural degradation), older humans have on average had more resources invested in their development, and as such should on average be more knowledgeable, more productive, and more interesting people.
It appears to me that the decision to save human children in favor of adults is a result of executing obsolete adaptations rather than shutting up and multiplying. I’m surprised nobody seems to have mentioned this yet; am I missing something obvious?
List of allusions I managed to catch (part 1):
Alderson starlines—Alderson Drive
Giant Science Vessel—GSV—General Systems Vehicle
Lord Programmer—allusion to the archeologist programmers in Vernor Vinge’s A Fire Upon the Deep?
Greater Archive—allusion to Orion’s Arm’s Greater Archives?
Will Wilkinson said at 50:48:
People will shout at you in Germany if you jaywalk, I’m told.
I’d be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it’s because you worry humanity will run out of time in many of the other scenarios before FAI work is finished, reducing you to looking at the Black Swan possibilities within which the world might just be saved.
This use of the word ‘wants’ struck me as a distinction Eliezer would make, rather than this character.
Also, at the risk of being redundant: Great story.
To add to Abigail’s point:
Is there significant evidence that the critically low term in the Drake Equation isn’t f_i (i.e. P(intelligence|life))? If natural selection on Earth hadn’t happened to produce an intelligent species, I would assign a rather low probability to any locally evolved life surviving the death of its local sun.
I don’t see any reasonable way of even assigning a lower bound to f_i.
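For reference, the standard form of the Drake equation, with f_i as the intelligence term (the factors are, in order: the rate of star formation, the fraction of stars with planets, habitable planets per such system, the fractions of those developing life, intelligence, and detectable communication, and the lifetime of a communicating civilization):

$$N = R_\ast \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$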
The of helping someone, …
Okay, so no one gets their driver’s license until they’ve built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.
Up to now there never seemed to be a reason to say this, but now that there is:
Eliezer Yudkowsky, afaict you’re the most intelligent person I know. I don’t know John Conway.
It’s easier to say where someone else’s argument is wrong than to get the fact of the matter right.
You posted your raw email address needlessly. Yum.
How can you tell whether someone is an idiot not worth refuting, or a genius so far ahead of you that they sound crazy to you? Could we think an AI had gone mad, and reboot it, when it was really a genius?
In the case of you considering taking action against the entity (as in your example of deleting the AI), this is partly self-regulating: A sufficiently intelligent entity should see such an attack coming and have effective countermeasures in place (for instance, by communicating better to you so you don’t conclude it has gone mad). If you attack it and succeed, that by itself places limits on how intelligent the target really was. Note that this part doesn’t work if both sides are unmodified humans, because the relative differences in intelligence aren’t large enough.
Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?
Cooperation only makes sense in the iterated version of the PD. This isn’t the iterated case, and there’s no prior communication, hence no chance to negotiate for mutual cooperation (though even if there were, meaningful negotiation may well be impossible depending on the specific details of the situation).
Superrationality be damned, humanity’s choice doesn’t have any causal influence on the paperclip maximizer’s choice. Defection is the right move.
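To make the dominance argument concrete, here is a minimal sketch of the one-shot game in Python; the payoff numbers are my own illustration, not from the source:

```python
# One-shot Prisoner's Dilemma with illustrative payoffs (my own numbers, not
# from the source); each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, opponent defects
    ("D", "C"): (5, 0),  # I defect, opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move: str) -> str:
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Defection strictly dominates: it is the best response whatever the opponent
# does, so without iteration or communication there is nothing to condition
# cooperation on.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

Any payoffs with the standard ordering (temptation > reward > punishment > sucker) give the same dominance result.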
Nitpicking your poison category:
What is a poison? … Carrots, water, and oxygen are “not poison”. … (… You’re really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)
What character is ◻?
Larry, interpret the smiley face as saying:
PA + (◻C → C) |-

I’m still struggling to completely understand this. Are you also changing the meaning of ◻ from ‘derivable from PA’ to ‘derivable from PA + (◻C → C)’? If so, are you additionally changing L to use provability in PA + (◻C → C) instead of provability in PA?
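For context, a minimal statement of Löb’s theorem, which I take the L above to refer to:

$$\text{If } \mathrm{PA} \vdash (\Box C \rightarrow C)\text{, then } \mathrm{PA} \vdash C,$$

where $\Box C$ abbreviates the arithmetized claim “C is provable in PA.”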
s/abstract rational reasoning/abstract moral reasoning/
But my moral code does include such statements as “you have no fundamental obligation to help other people.” I help people because I like to.
In the modern world, people have to make moral choices using their general intelligence, because there aren’t enough “yuck” and “yum” factors around to give guidance on every question. As such, we shouldn’t expect much more moral agreement from humans than from rational (or approximately rational) AIs.
If I have a value judgment that would not be interpersonally compelling to a supermajority of humankind even if they were fully informed, then it is proper for me to personally fight for and advocate that value judgment, but not proper for me to preemptively build an AI that enforces that value judgment upon the rest of humanity.
I think my highest goal in life is to make myself happy. Because I’m not a sociopath making myself happy tends to involve having friends and making them happy. But the ultimate goal is me.
After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.
I read stuff like this and immediately my mind thinks, “comparative advantage.” The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill.
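A minimal numeric sketch of that point, with illustrative labor costs of my own invention:

```python
# Illustrative labor costs (my own numbers, not from the source): hours each
# person needs to produce one unit of each good. Bob is absolutely better at both.
hours = {
    "Bob":  {"food": 1, "tools": 2},
    "Bill": {"food": 4, "tools": 5},
}

def opportunity_cost(person: str, good: str, other: str) -> float:
    """Units of `other` forgone to produce one unit of `good`."""
    return hours[person][good] / hours[person][other]

# One tool costs Bob 2.0 units of food; it costs Bill only 1.25 units of food.
print(opportunity_cost("Bob", "tools", "food"))   # 2.0
print(opportunity_cost("Bill", "tools", "food"))  # 1.25
```

Bill’s opportunity cost of a tool (1.25 food) is below Bob’s (2 food), so both gain if Bill specializes in tools and Bob in food, even though Bob is better at everything in absolute terms.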
The FAI may be an unsolvable problem, if by FAI we mean an AI into which certain limits are baked.
Constant [sorry for getting the attribution wrong in my previous reply] wrote:
We do not know very well how the human mind does anything at all. But it cannot be doubted that the human mind comes to have preferences that it did not have initially.
Any two AIs are likely to differ far more in effective intelligence than any two humans ever could (for one thing, their hardware may differ far more than any two working human brains do). This likelihood increases further if at least some subset of them is capable of strong self-improvement. With enough difference in power, cooperation becomes a losing strategy for the more powerful party.
The AIs might agree that they’d all be better off if they took for themselves the matter currently in use by humans, dividing the spoils among themselves.
We’ve been told that a General AI will have power beyond any despot known to history.
If that is so, then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn’t bet on it. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.