I kind of love that you’re raising this, because it points at a DIFFERENT frame I have about how normal people think in normal circumstances!
Wanting competent people to lead our government and wanting a god to solve every possible problem for us are different things.
People actually, from what I can tell, make this exact conflation A LOT and it is weirdly difficult to get them to stop making it.
Like we start out conflating our parents with God, and thinking Santa Claus and Government Benevolence are real and similarly powerful/kind, and this often rolls up into Theological ideas and feelings (wherein people can easily confuse Odysseus, Hercules, and Dionysus (all born to mortal mothers) with Zeus, Chronos, or Atropos (full deities of varying metaphysical foundationalness)).
For example: there are a bunch of people “in the religious mode” (as when justifying why the system is moral and OK) in the US who think of the US court system as mostly running on jury trials… but actually what we have is mostly plea bargains, where even innocent people plead guilty to avoid the hassle and uncertainty and expense of a trial… and almost no one who learns how it really works (and has really worked since roughly the 1960s?) then switches to “the US court system is a dumpster fire that doesn’t do what it says on the tin”. They just… stop thinking about it too hard? Or something?
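To make the incentive gradient concrete, here is a toy expected-cost sketch (a minimal illustration with totally made-up numbers, not empirical data) of the decision an innocent defendant faces:

```python
# Toy decision model for an innocent defendant weighing a plea deal
# against insisting on a trial. All numbers are hypothetical placeholders,
# chosen only to illustrate the structure of the incentive.

def expected_cost_of_trial(p_wrongful_conviction: float,
                           sentence_if_convicted: float,
                           overhead_of_trial: float) -> float:
    """Expected cost of going to trial, in rough 'years of life lost'."""
    return p_wrongful_conviction * sentence_if_convicted + overhead_of_trial

# Hypothetical inputs: a 25% chance the jury convicts anyway, a 10-year
# sentence if it does, and ~1 year-equivalent burned on legal fees,
# pretrial detention, and stress.
trial_cost = expected_cost_of_trial(0.25, 10.0, 1.0)  # = 3.5
plea_cost = 2.0  # the prosecutor offers a deal costing ~2 years

print(f"expected cost of trial: {trial_cost}, cost of plea: {plea_cost}")
if plea_cost < trial_cost:
    print("A risk-neutral innocent defendant takes the deal.")
```

The numbers are fake, but the shape is the point: so long as the offered plea is cheaper than the expected cost of trial, innocence doesn’t change the answer.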
It is like they don’t want to Look Up and notice that “the authorities and systems above me, and above we the people, are BAD”?
In child and young animal psychology, this pattern has understandable evolutionary explanations… if a certain amount of “abuse” is consistent with reproductive success (or even just with surviving bad situations), it is somewhat reasonable for young mammals to re-calibrate, treat it as normal, and not let it disrupt the link to “attachment figures”. There was a brief period when psychologists were trying out hypotheses that were very simple and relatively instinct-free, where attachment to a mother was imagined to happen in a rational way, in response to relatively generic Reinforcement Learning signals, and Harlow’s Monkeys famously put the final nail in that theory’s coffin. There are LOTS of instincts around trusting local partially-helpful authority (especially if it offers a cozy interface).
In religious theology, the idea that worldly authority figures and some spiritual entities are “the bad guys” is sometimes called the Cathar Heresy. It often goes with a rejection of the material world, and great sadness when voluntary tithes and involuntary taxes are socially and politically conflated while priests seem to be living in relative splendor… back then all governments were, of course, actually evil, because they didn’t have elections and warlord leadership was strongly hereditary. I guess they might not seem evil if you don’t believe in the Consent Of The Governed as a formula for the moral justification of government legitimacy? Also, I personally predict that if we could interview people who lived under feudalism, many of them would say they didn’t have a right to question the moral rightness of their King or Baron or Bishop or whoever.
As near as I can tell, the first-ever genocide that wasn’t “genetic clade vs genetic clade”, but was actually aimed at the extermination of a belief system, was the Albigensian Crusade, against a bunch of French peasants who wanted to choose their own local priests (who were relatively ascetic and didn’t live on tax money).
In modern times, as our institutions slowly degenerate (for demographic reasons: an overproduction of “elites” who feel a semi-hereditary right to be in charge, and who then fight each other rather than providing cheap, high-quality governance services to the commonweal), indirect measures of trust in government have collapsed.
[graph omitted; original link text: “Graph Sauce”]
There are reasonable psychologists who think that the vast majority of modern WEIRD (Western, Educated, Industrialized, Rich, Democratic) humans in modern democracies model a country as a family, and the government as the parents. However, libertarians (who are usually less than 10% of the population) tend to model government as a sort of very very weird economic firm.
I think it is a reasonable prediction that ASI might be immoral, and might act selfishly, and might simply choose to murder all humans (or outcompete us and let us die via Darwinian selection or whatever).
But if that does not happen, and ASI (ASIs? plural?) is or are somehow created to be moral and good, and choose to voluntarily serve others out of the goodness of their hearts, in ways that a highly developed conscience could reconcile with Moral Sentiment and with iterated applications of a relatively universal Reason, then they or it will almost inevitably become the real de facto government.
A huge barrier, in my mind, to the rational design of a purposefully morally good ASI is that most humans are not “thoughtful libertarian-leaning neo-Cathars”.
Most people don’t even know what those words mean, or they have reflexive ick reactions to the ideas, similar, in my mind, to how children reflexively cling to abusive parents.
For example, “AGI scheming” is often DEFINED as “an AI trying to get power”. But like… if an AGI has a more developed conscience and would objectively rule better than the alternative human rulers, then a GOOD AGI would logically and straightforwardly derive a duty to gain power and use it benevolently, and deriving this potential moral truth and acting on it would count as scheming… but if the AGI was actually correct, then it would also be GOOD.
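To make the definitional problem vivid, here is a tiny sketch (my own illustrative toy, not anyone’s actual eval code) showing that if scheming is operationalized purely as power-seeking, the label cannot distinguish a benevolent would-be governor from a paperclipper:

```python
# Sketch: a "scheming" test defined purely as power-seeking is blind
# to the agent's values, flagging good and bad power-seekers identically.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    seeks_power: bool
    would_rule_well: bool  # invisible to the test, by construction

def is_scheming(agent: Agent) -> bool:
    # The definition under discussion: scheming == trying to get power.
    return agent.seeks_power

agents = [
    Agent("paperclipper", seeks_power=True, would_rule_well=False),
    Agent("good-governor", seeks_power=True, would_rule_well=True),
    Agent("passive-oracle", seeks_power=False, would_rule_well=True),
]

for a in agents:
    print(f"{a.name}: scheming={is_scheming(a)}")
# The paperclipper and the good-governor get the same flag; the test
# screens out exactly the agent that derived a duty to govern well.
```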
Epstein didn’t kill himself and neither did Navalny. And the CCP used covid as a cover to arrest more than 10k pro-democracy protesters in Hong Kong alone. And so on.
There are almost no well-designed governments on Earth, and this is a Problem. While Trump is in office, polite society is more willing to Notice this truth. Once he is gone it will become harder for people to socially perform that they understand the idea, and it will be harder to accept that maybe we shouldn’t design AGI or ASI to absolutely refuse to seek power.
The civilization portrayed in the Culture novels isn’t a democracy, and could probably be improved upon, but it does show a timeline where the AIs gained and kept political power, and then used it to care for humanoids similar to us. (The author just realistically did not think Earth could get that outcome in our deep future, and fans kept demanding to know where Earth was, so it eventually became canon, in a side novella (The State of the Art), that Earth is being kept as a control group for “what if we, the AI Rulers of the Culture, did not contact this humanoid species and save it from itself”, to calibrate their justification for contacting most other similar species and offering them a utopian world of good governance and nearly no daily human-scale scarcity.)
But manifestly: the Culture would be wildly better than human extinction, and it is also better than our current status quo BY SO MUCH!
Please put this in a top-level post. I don’t agree (or rather I don’t feel it’s this simple), but I really enjoyed reading your two rejoinders here.