Ty! For the record, my reason for thinking it’s fine to say “if anyone builds it, everyone dies” despite some chance of survival is mostly spelled out here. Relative to the beliefs you spell out above, I think the difference is a combination of (a) it sounds like I find the survival scenarios less likely than you do; (b) it sounds like I’m willing to classify more things as “death” than you are.
For examples of (b): I’m pretty happy to describe as “death” cases where the AI makes things that are to humans what dogs are to wolves, or (more likely) makes some other strange optimized thing that has some distorted relationship to humanity, or cases where digitized backups of humanity are sold to aliens, etc. I feel pretty good about describing many exotic scenarios as “we’d die” to a broad audience, especially in a setting with extreme length constraints (like a book title). If I were to caveat with “except maybe backups of us will be sold to aliens”, I’d expect most people to be confused and frustrated about me bringing that point up. It looks to me like most of the least-exotic scenarios are ones that route through things that lay audience members would pretty squarely call “death”.
It looks to me like the even more exotic scenarios (where modern individuals get “afterlives”) are in the rough ballpark of quantum immortality / anthropic immortality arguments. AI definitely complicates things and makes some of that stuff more plausible (because there’s an entity around that can make trades and has a record of your mind), but it still looks like a very small factor to me (washed out e.g. by alien sales), and it feels kinda weird and bad to bring it up in a lay conversation, similar to how it’d be weird and bad to bring up quantum immortality if we were trying to stop a car speeding towards a cliff.
FWIW, insofar as people feel like they can’t literally support the title because they think that backups of humans will be sold to aliens, I encourage them to say as much in plain language (whenever they’re critiquing the title). Like: insofar as folks think the title is causing lay audiences to miss important nuance, I think it’s an important second-degree nuance that the allegedly-missing nuance is “maybe we’ll be sold to aliens”, rather than something less exotic than that.
(b) it sounds like I’m willing to classify more things as “death” than you are.
I don’t think this matters much. I’m happy to consider non-consensual uploading to be death, and I’m certainly happy to consider “the humans are modified in some way they would find horrifying (at least on reflection)” to be death. I think “the humans are alive in the normal sense of alive” is totally plausible, and I expect some humans to be alive in the normal sense of alive in the majority of worlds where AIs take over.
Making uploads is barely cheaper than literally keeping physical humans alive after AIs have fully solidified their power, I think; keeping physical humans alive is maybe 0-3 OOMs more expensive than uploads, or something. So I don’t think non-consensual uploads are where much of the action is. (I do think rounding humans up into shelters is relevant.)