Hi Nate, great respect. Forgive a rambling stream-of-consciousness comment.
> Without the advantages of maxed-out physically feasible intelligence (and the tech unlocked by such intelligence), I think we would inevitably be overpowered.
I think you move a little too quickly to the conclusion "if humans don't have AI, aliens with AI will stomp humans."
Hanson’s estimate of when we’ll meet aliens is 500 million years. I know very little about how Hanson arrived at that figure or how credible his method is, and you don’t appear to either; that might be worth investigating. But—
One million years is roughly forty thousand generations of humans as we know them (at ~25 years per generation). If AI progress were impossible under the heel of a world-state, we could still increase intelligence by a few points each generation. This already happens naturally, and it would hardly be difficult to compound the Flynn effect.
Surely we could reach endgame technology, the kind that runs into the limits of physical possibility or of diminishing returns, within one million years, let alone five hundred of those spans. You are aware of all we have done in just the past two hundred years. We can expect invention to eventually decelerate as the untapped invention space narrows, but by the time that finally outweighs the accelerating factors (increasing intelligence, helpful technology), it seems likely we will already be quite close to finaltech.
In comparative terms, a five-hundred-year sabbatical from AI would reduce the share of resources we could reach by only an epsilon, and if the AI-safety premises are sound, it would greatly increase expected value.
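To put a rough number on that epsilon, here is my own back-of-envelope sketch, under the assumptions (mine, not yours) that the reachable volume grows like the cube of elapsed time, i.e. a sphere expanding at a fixed fraction of lightspeed, and that Hanson's 500-million-year figure sets the horizon:

```python
# Back-of-envelope: fraction of reachable resources lost to a delay,
# assuming reachable volume scales ~ t^3 (sphere expanding at fixed speed).
def fraction_lost(delay_years: float, horizon_years: float) -> float:
    # Compare the volume reachable after (horizon - delay) years
    # to the volume reachable after the full horizon.
    return 1 - ((horizon_years - delay_years) / horizon_years) ** 3

loss = fraction_lost(500, 500e6)  # 500-year pause, 500-million-year horizon
print(f"{loss:.2e}")              # ~3e-06, i.e. about three millionths
```

Even if resources scaled only linearly with time instead of cubically, the loss would be on the order of one millionth; either way the fraction forgone is tiny next to the expected-value stakes.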
This point is likely moot, of course. I understand that we do not live in a totalitarian world state, and that your intent is just to assure people that AI-safety people are not neo-Luddites. (I suppose one could attempt to help a state establish global dominance, then try to steer really hard toward AI safety, but that requires two incredible victories for benefits murky enough that you’d have to be really confident of AI doom and have nothing better to try.)
Secondary comment: I think there’s kind of a lot of room between 95% of potential value being lost and 5%!! A solid chunk of my probability mass about the future involves takeover by a ~random person or group of people who just happened to be in the right spot to seize power (e.g. government leader, corporate board) which could run anywhere from a 20 or 30% utility loss to the far negatives.
(This is based on the idea that even if the alignment problem is solved, such that we know how to specify a goal rigorously to an AI, it doesn’t follow that the people who happen to be programming the goal will be selfless. You work in AI, so presumably you have practiced rebuttals to this concept; I do not, so I’ll state my thought while being clear that I expect this is well-worn territory to which you have a solid answer.)
> a guess that a fair number of alien species are smarter, more cognitively coherent, and/or more coordinated than humans at the time they reach our technological level. (E.g., a hive-mind species would probably have an easier time solving alignment, since they wouldn’t need to rush.)
Tertiary comment: I’d be curious about your reasoning process behind this guess.
Is that genuinely just a solitary intuition, whose chain of reasoning is too distributed to meaningfully trace back? It seems to assume that things like hive-mind species are possible or common, which I don’t have information about, but maybe you do. I’d be interested in evolutionary or anthropic arguments here, but the bare knowledge that you have this intuition does not cause me to adopt it.
Anyway, this was fun to think about. Have a good day!! :D
*throws a Bayes point at you*