That’s a bug of sorts in the script, and easily fixable. In fact, I’ve already done so; have an updated version.
Baughn
It’s possible to decide which axioms are in effect from the inside of a sufficiently complex mathematical system (such as this universe), however.
For that matter, it would be possible to deduce the existence of a god, too; you just have to die. Granted, there are some issues with this, but nobody said deducing the axiom had to be convenient.
It seems obvious that current patent laws are too strong and bad for business, but the concept of patents does serve the useful purpose of encouraging innovation, even as the laws in their current form prevent innovators from actually following through.
There has to be a spot where the laws reach an optimum between encouragement and prevention, and it would be very surprising if it’s at zero.
You’re assuming that his goal is not, in itself, an existential risk. Something that is almost, but not quite, unattainable.. we know of such a technology already, and Eliezer is aiming for it.
I’m coming in a bit late, and not reading the rest of the posts, but I felt I had to comment on the third horn of the trilemma, as it’s an option I’ve been giving a lot of thought.
I managed to independently invent it (with roughly the same reasoning) back in high school, though I haven’t managed to convince myself of it or, for that matter, to explain it to anyone else. Your explanation is better, and I’ll be borrowing it.
At any rate. One of your objections seems to be “...to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a ‘thread of subjective experience’.”
For that to make sense would require that, while you can anticipate subjective experiences from just about anywhere, you would only anticipate experiencing a limited subset of them; 1/N of the total, where N represents.. what? The total number of humans, and why? Of souls?
Things get simpler if you set N to 1. Then your anticipation would be to experience Eliezer+5, Britney+5 and Cliffdiver+5, as well as every other subjective experience available for experiencing; sidestepping the cliffdiver problem, and more importantly removing any need to explain the value of N.
There’s still the alternate option of it being infinity. I feel relatively certain that this is not the case, but I’m not sure this isn’t simply wishful thinking. Help?
If you implemented the laws of physics on a computer, using lazy evaluation, then whatever is “over the horizon” from the observer process(es) would not be computed.
However, this would not in the least be observable from inside the system. If the observer moved to observe you, your past would be “retroactively” computed.
I’m not claiming this is very likely to be the case, since at the very least it requires an additional agent—the observer process—to cause anything to happen at all, but lazy evaluation isn’t some weird ad-hoc concept; it’s a basic concept in computer science that also happens to make programs shorter, a lot of the time.
Hopefully not sufficiently shorter that a universe using lazy evaluation with one random point in space somewhere as the observer is less complex than one using strict evaluation. That.. would be impossible for us to detect, of course, but I believe it’d still have consequences.
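The lazy-evaluation idea above can be sketched in a few lines of Python (Haskell gets this behavior for free; Python makes the on-demand computation explicit). The `LazyUniverse` class, its cost counter, and the toy rule are all invented for illustration — stand-ins for "the laws of physics", not anything more:

```python
# A toy "lazily evaluated universe": cell states are only computed when
# an observer actually looks at them, and results are cached so later
# observations see a consistent history.
class LazyUniverse:
    def __init__(self, rule):
        self.rule = rule          # pure function (x, t) -> state
        self.cache = {}           # memoized observations
        self.evaluations = 0      # how much "physics" actually ran

    def observe(self, x, t):
        if (x, t) not in self.cache:
            self.evaluations += 1
            self.cache[(x, t)] = self.rule(x, t)
        return self.cache[(x, t)]

# Any deterministic rule works as a stand-in for physics.
def rule(x, t):
    return (x * 31 + t * 17) % 7

u = LazyUniverse(rule)

# An observer looks at a small neighbourhood at time 0...
nearby = [u.observe(x, 0) for x in range(-2, 3)]
print(u.evaluations)   # 5: only the observed cells were ever computed

# ...everything "over the horizon" (x = 1000, say) simply never ran.
# But if the observer later looks there, it is computed on demand and
# is indistinguishable, from inside, from having existed all along.
far = u.observe(1000, 0)
print(u.evaluations)   # 6
```

The memoization is essentially what a lazy runtime does with thunks: the unforced parts of a structure cost nothing until something demands them.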
How about a video game where you attempt to control a pre-singularity global civilization by directly playing a few thousand randomly selected humans simultaneously, while not letting this fact be noticed by the NPCs?
It’s interesting to wonder what sort of games post-humans might play, though I hope it won’t be anything quite that ethically objectionable.
Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.
I think it was originally meant for training, but yes. People play it. As a game.
There’s another possible belief, p-zombies-aren’t-possible-but-I’d-sure-like-to-know-why; that is, that while the existence of non-zombies proves the impossibility of any world with zombies, it is still possible to (counterfactually) conceive of an existence where it was the other way around. Though there would have been nobody to wonder about it.
I’ve talked to a number of apparent p-zombie believers who, under careful questioning, turn out to be asking this question instead. I’m pretty sure it’s not the same question.
I’m afraid you’ll find that the books are nowhere near as good as Eliezer’s writing. This should probably not surprise you, though.
..for the love of god, Eliezer. You cannot be serious. Stopping there?
The story is certainly more powerful because he stopped when he did, but it’s missing any sense of closure.
Your mileage will vary on what the correct tradeoff is, there. Personally, I find this kind of ending very unsatisfying. It makes me want to write fanfiction.
I’ll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that it does so in a nonrecoverable way.
I kinda hope you’re right, but I just don’t see how.
Videoconferencing what, exactly?
I’ve been using it for years. I’m not sure how to correctly expand your sentence, and it shouldn’t be subject to interpretation.
In other words, one of us did not specify the prediction correctly.
I don’t think it’s me. I deliberately didn’t say it’d destroy the world. Would it be correct to modify yours to say “..and not making the world a worse place”?
Worse, in such a situation I would simply delete the AI.
Then turn the computer to scrap, destroy any backups, and for good measure run it through the most destructive apparatus I can find.
In any case, I would not assign any significant probability to the AI getting a chance to follow through.
When this happens to me, I often try to explain how 100% certainty (or 0%) is a mathematical concept that’s incompatible with how evidence is actually gathered (which they’ll usually nod along to, unless they see where this is going), and then proceed to explain how this means that the word “certainty” does not, in fact, mean 100%.
This has yet to convince anyone. I should probably think of something else.
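The claim above — that finite evidence never yields literal 100% certainty — can be made concrete with Bayes' rule in odds form. The prior, the 10:1 evidence strength, and the count of a hundred updates are arbitrary numbers chosen for illustration:

```python
from fractions import Fraction

def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = Fraction(1, 2)                 # an agnostic prior
for _ in range(100):               # a hundred pieces of 10:1 evidence
    p = update(p, Fraction(10))

print(float(p))    # rounds to 1.0 at machine precision...
print(p < 1)       # ...but exact 1 is never reached: True
```

Exact rationals are used deliberately: in floating point the posterior would round to 1.0 after enough updates, which is precisely the confusion between "certain for all practical purposes" and the mathematical 100%.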
You could imagine a particle gun that shoots water molecules with the exact same speed distribution as hot water (carefully aligned so they don’t collide mid-beam), but all with the same direction—straight towards you.
The result of sticking your hand in such a beam would be roughly the same as putting it in hot water, ignoring the asymmetric momentum transfer. However, it is easy to see that you can extract useful energy from the beam.
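The thought experiment checks out numerically: per molecule, the beam carries exactly the thermal mean kinetic energy, yet unlike thermal water it has an ordered drift momentum to extract work from. The temperature, sample count, and seed below are arbitrary choices for illustration:

```python
import math
import random

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 363.0                        # ~90 C water, K
m = 2.99e-26                     # mass of one H2O molecule, kg
sigma = math.sqrt(k_B * T / m)   # std-dev of each velocity component

random.seed(0)
N = 100_000
# Maxwell-Boltzmann speeds: magnitude of a 3D Gaussian velocity.
# Thermal water points these in random directions; the beam uses the
# same speeds but aims them all one way.
speeds = [math.sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(3)))
          for _ in range(N)]

mean_ke = sum(0.5 * m * v * v for v in speeds) / N
print(mean_ke / (1.5 * k_B * T))   # ~1.0: same energy per molecule either way

# The beam's mean velocity along its axis is nonzero (thermal water's is ~0);
# that ordered momentum is what a turbine could extract as useful work.
drift = sum(speeds) / N
print(drift > 0)
```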
Would it be possible to keep the black hole charged (use an electron gun), then manipulate electric fields to keep it centered? I don’t know enough physics to tell.
Yes; you would be unable to talk to them for.. however long it’d take before you could join them.
Of course the rational solution then would be suicide or, failing that, good, ethical actions that certainly would get you into heaven but just happen to be incredibly dangerous. I’m sure we could find some.