Yep. Most mass-market space operas are guilty of this. Despite having the knowledge and resources to fly to other planets, humans in them still have to shoot kinetic bullets at animals.
However, stories, in order to be entertaining (at least for the mainstream public), have to depict a protagonist (or a group thereof) who changes because of conflict, and the conflict has to be winnable, resolvable—it must “allow” the protagonist to use his wit, perseverance, luck, and whatever else to win.
Now imagine a “more realistic” setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict. If the singularity was unfriendly, humans are either already disassembled for atoms, or soon will be—and they have no chance to win against the AI because the capability gap is too big. Neither branch has much story potential.
This applies to game design as well—enemies in a game built around a conflict have to be “repeatedly winnable”, otherwise the game would become an exercise in frustration.
(I think there is some story / game potential in the early FOOM phase where humans still have a chance to shut it down, but it is limited. A realistic AI has no need to produce hordes of humanoid or monstrous robots vulnerable to bullets to serve as enemies, and it has no need to monologue when the hero is about to flip the switch. Plus the entire conflict is likely to be very brief.)
Data from Star Trek doesn’t quite give me the lurching despair I was thinking of when I wrote the original post, but he does make me do a mental double-take whenever a physical embodiment of human understanding of cognition sits there wondering about esoteric aspects of human behaviour that were mysterious to sci-fi screenwriters in the early 1990s.
Data’s awareness of his own construction varies as befits the plot. My point was that TNG often asked a lot of questions about ethics and cognition and personhood and identity. Data himself talks about the mysterious questions of human experience all the bloody time.
In a world where Data exists, significant headway has been made on those questions already.
This is a special case of a general property of the Star Trek universe: it exhibits a very low permeability to new information. Breakthroughs and discoveries occur all over the place that have only local effects. I’ve generally assumed that there’s some as-yet-unrevealed Q-like entity that intervenes regularly to avoid too many changes in the social fabric in a given period of time.
However, stories, in order to be entertaining (at least for the mainstream public), have to depict a protagonist (or a group thereof) who changes because of conflict, and the conflict has to be winnable, resolvable—it must “allow” the protagonist to use his wit, perseverance, luck, and whatever else to win.
Bwahaha. Have you seen the end of Mass Effect 3? The “win” is worse than letting the bad guys do their thing.
Vs lbh unq yrg gur erncref fgrnzebyyre pvivyvmngvba, gur arkg plpyr jbhyq unir orra noyr gb qrsrng gurz naq ohvyq n creznanag pvivyvmngvba orpnhfr bs jneavatf cynprq nyy bire gur cynpr ol bar bs gur punenpgref.
Tbbq cbvag. Fvapr gur pvarzngvp raqvat fubjrq crbcyr gung unqa’g tbggra fhcreabin’rq, V nffhzrq gurl fbzrubj qvq n tenprshy fuhgqbja ba gur znff erynlf, hayvxr gur Ongnevna fbyhgvba. Ohg rira tvira gung, gurer’f qrsvavgryl n znffvir syrrg fghpx va gur Fby flfgrz naq ab zber vagrefgryyne genqr.
Bu, jryy. Ng yrnfg gur zhygvcynlre’f rguvpnyyl qrsrafvoyr (V tb ol rg wnlarf gurer).
Now imagine a “more realistic” setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.
There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.
An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would actually allow a lot of what passes for bad science fiction in most space operas. AI cores on ships that can understand human language but don’t qualify as fully sentient (because the real AGI is gutting their intellects); androids that are fully humanoid and perhaps even sentient but haven’t any clue why that is so (because you could rebuild human-like cognitive faculties by black-box reverse-engineering, but if you actually knew what was going on in the parts, you would have that information purged...); so on and so on.
And yet this would qualify as Friendly; human society and ingenuity would continue.
Now imagine a “more realistic” setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.
How is this a utopia?
Data from Star Trek doesn’t quite give me the lurching despair I was thinking of when I wrote the original post, but he does make me do a mental double-take whenever a physical embodiment of human understanding of cognition sits there wondering about esoteric aspects of human behaviour that were mysterious to sci-fi screenwriters in the early 1990s.
To be fair, he didn’t actually have access to Soong’s design notes.
I’ve generally assumed that there’s some as-yet-unrevealed Q-like entity that intervenes regularly to avoid too many changes in the social fabric in a given period of time.
The Federation government being deeply corrupt would also explain a lot.
Bwahaha. Have you seen the end of Mass Effect 3? The “win” is worse than letting the bad guys do their thing.
Can you rot13 the ending for us? I’ve never played it and never intend to, but I wouldn’t mind knowing what you’re talking about.
N zvyyvbaf-bs-lrnef-byq fhcrevagryyvtrapr inyhrf yvsr, ohg unf qrgrezvarq gung gur bayl jnl gb fhfgnva yvsr va gur tnynkl vf gb crevbqvpnyyl jvcr bhg nqinaprq pvivyvmngvbaf orsber gurl varivgnoyl frys-qrfgehpg, qrfgeblvat tnynpgvp srphaqvgl. Gb qb guvf, vg perngrq avtu-vaihyarenoyr znpuvarf gung fjrrc guebhtu rirel 50,000 lrnef naq fcraq n srj praghevrf xvyyvat rirel fcrpvrf ng xneqnfuri 1 be terngre.
Sbe gur cnfg srj plpyrf, betnavpf unir znqr cebterff gbjneq fgbccvat gur znpuvarf. Gur fhcrevagryyvtrapr nqzvgf gb lbh gung gur fbyhgvba vf ab ybatre jbexvat, naq bssref guerr pubvprf: (1) betnavpf qbzvangr znpuvarf, (2) xvyy nyy NVf, (3) “zretr” betnavpf jvgu NVf. Arvgure gur tnzr abe gur fhcrevagryyvtrapr vzcyvrf gung pvivyvmngvba jvyy abg frys-qrfgehpg, qrfgeblvat tnynpgvp srphaqvgl.
Gung’f abg nyy gub. Nyy fbyhgvbaf vaibyir gur qrfgehpgvba bs gur pvgrqry naq znff erynlf, juvpu ner gur onfvf bs tnynpgvp pvivyvmngvba. Jvgubhg gurz gur rpbabzl jvyy gbgnyyl zryg qbja, abar bs gur syrrgf jvyy or noyr gb rfpncr gur fby flfgrz, naq ovyyvbaf bs crbcyr jvyy or fghpx va cynprf jvgu ab pbzcngvoyr sbbq. Znff fgneingvba rafhrf.
Naq gung’f vtabevat gung gur qrfgehpgvba bs n znff erynl perngrf na rkcybfvba ba cne jvgu n fhcreabin, jvcvat bhg gur ubfg flfgrz.
Fb onfvpnyyl rirelbar qvrf, naq pvivyvmngvba arire erpbiref.
I’m confused as to why this was downvoted—was it because it was an inaccurate summary?
Perhaps because the quote was misformatted or because the poster advertised their multi-player handle.
I don’t know either, but it isn’t inaccurate.