My take on these issues is the following potential chain of thought (CoT) of people who call themselves altruists:
Assuming that ASI is ever[1] created, mankind can[2] be doomed or face the Deep Utopia or Dystopia.
P(doom | no WWIII) = P(ASI is created | no WWIII) · P(doom | ASI is created), assuming that doom without ASI is negligible (expanded below). P(ASI is created | no WWIII) can be decreased only by international coordination among potential creators, including the USA and China, which seems unlikely.
P(doom | ASI is created) depends on the alignment efforts of the AI companies. If I understand correctly, most of this effort is done by Anthropic. Therefore, joining it is likely to decrease P(doom) and may even be the best action a person living in the USA can take.
The slowdown ending of the AI-2027 forecast[3] does discuss power grabs and references the Intelligence Curse, which is the very lock-in that requires people to secure their fraction of the lightcone.
Potential altruists who wish to prevent power grabs and lock-in, as well as not-so-altruists who want to secure as much as possible for themselves, need to have influence over the Oversight Committee, which in turn requires being in the right place during the power struggle.
Therefore, not even an altruist can avoid concluding that any sufficiently capable person should join AI research.
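The decomposition in the formula above compresses a step. As a sketch, it follows from the law of total probability, conditioning everything on no WWIII and abbreviating P(doom | ASI is created, no WWIII) to P(doom | ASI is created) as the CoT does:

$$P(\text{doom} \mid \text{no WWIII}) = P(\text{ASI} \mid \text{no WWIII})\,P(\text{doom} \mid \text{ASI}) + P(\text{no ASI} \mid \text{no WWIII})\,P(\text{doom} \mid \text{no ASI}).$$

The CoT implicitly treats the second term, doom without ASI, as negligible and drops it.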
[1] ASI is never created in scenarios like titotal's or Chapin Lenthall-Cleary's, but I consider those scenarios highly unlikely.

[2] While I've proposed a scenario where the AI cares about humans in an exotic way, exotic scenarios aren't widely known and don't affect people's CoT.

[3] Said forecast is likely to undergo revisions.