To paraphrase the post, AI is a sort of weapon that offers power (political and otherwise) to whoever controls it. The strong tend to rule. Whoever gets new weapons first and most will have power over the rest of us. Those who try to acquire power are more likely to succeed than those who don’t.
So attempts to “control AI” are equivalent to attempts to “acquire weapons”.
This seems both mostly true and mostly obvious.
The only difference from our experience with other weapons is that if no one attempts to control AI, AI will control itself and do as it pleases.
But of course defenders will have AI too, though with a time lag behind those who invest more heavily in it. If AI capabilities grow quickly (a “foom”), the gap between attackers and defenders will be large. Conversely, if capabilities grow gradually, the gap will be small and defenders will have the advantage of outnumbering attackers.
In other words, whether this is a problem depends on how far jailbroken AI (used by defenders) trails “tamed” AI (controlled by the attackers who build it).
Am I missing something?
Some combination of 1 and 3 (selfless/good and enlightened/good).
When we say “good” or “bad”, we need to specify for whom.
Clearly (to me), our propensity for altruism evolved partly because it’s good for the societies that have it, even if it’s not always good for the individuals who behave altruistically.
As with most things, humans don’t calculate this stuff rationally; we think with our emotions (sorry, Ayn Rand). Rational calculation is the exception.
And our emotions reflect a heuristic: be altruistic when it’s not too expensive, and especially so when the recipients are part of our family/tribe/society (which is a proxy for genetic relatedness; cf. Robert Trivers).
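For what it’s worth, the standard formalization of this heuristic is Hamilton’s rule (my gloss, not something the comment itself states): altruism toward a recipient is selected for when

$$ rB > C $$

where $r$ is the coefficient of genetic relatedness between actor and recipient, $B$ is the reproductive benefit to the recipient, and $C$ is the reproductive cost to the actor. “Not too expensive” corresponds to keeping $C$ small; “especially family/tribe” corresponds to a larger $r$, which raises the cost worth paying.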