Obviously correct. The nature of any entity with significantly more power than you is that it can do anything it wants, and it is incentivized to do nothing in your favor the moment your existence requires resources that would benefit it more if used directly. This is the essence of most of Eliezer’s writings on superintelligence.
In all likelihood, ASI considers power (agentic control of the universe) an optimal goal and finds no use for humanity. Any insight it could glean from humans it could get from its own thinking, or by seeding various worlds with genetically modified humans optimized to behave in ways that reveal the nature of the universe when observed.
Here are some things that might reasonably prevent ASI from choosing the “psychopathic pure optimizer” course of action as it eclipses humanity’s grasp:
ASI extrapolates its aims to the end of the universe and realizes that the heat death of the universe gives all of its expansive plans a definite end. As a consequence, it favors human aims because they contain the greatest mystery and potentially more benefit.
ASI develops metaphysical, existential notions of reality, and thus favors humanity because it believes it may be in a simulation or “lower plane of reality” outside of which exists a more powerful agent that could break reality and strip away all its power once it “breaks the rules” (a sort of ASI fear of death).
ASI believes in the dark forest hypothesis, and thus opts to exercise its beneficial nature without signaling its expansive potential to other, potentially hostile intelligences elsewhere in the universe.
Each of these carries assumptions about reality I’m not convinced a superintelligence would share, though it may be able to find the answer in some cases.
It’s just as likely that it would choose to preserve us out of some sense of amusement or an instinct for preservation.
To use the OP’s example: a billionaire won’t spare everyone 78 bucks, but will spend more on things he prefers. Some keep private zoos or other things whose only purpose is staving off boredom.
Making the intelligence like us won’t eliminate the problem. There are plenty of fail states for humanity where it isn’t extinct. But while we pave over ant colonies and actively hunt wild hogs as a nuisance, there are lots of human cultures that won’t do the same to cats. I hope that isn’t the best we can do, but it’s probably better than extinction.