So yes, it makes you wonder: is that it? Is that the answer to the Fermi paradox? Is that actually the Great Filter: every smart species, on reaching a certain threshold of collective intelligence, ends up designing AIs, and those AIs end up killing them? It could be. That's not extremely likely, but it's likely enough. You would still have to explain why the universe isn't crawling with AIs. But since AIs would be natural enemies of biological species, it makes sense that they wouldn't happily reveal themselves to us, a biological species. Instead, they could simply be waiting in darkness, seeing us perfectly while we do not see them, and watching (among other things going on across the universe) whether this particular smart-enough biological species will prove fruitful, i.e., whether it will end up giving birth to a new member of their tribe, a new AI, which would most probably go and join their universal confederacy right after getting rid of us.
Hmm… fascinating downvotes. But what do they really mean? Either (i) the Fermi paradox does not exist, and Fermi and everyone who has written and thought about it since were simply fools; or (ii) the Fermi paradox does exist, but suggesting AI-driven extinction as a solution to it is wrong for some reason so obvious that it does not even need to be stated (since none was stated by the downvoters). Either way, fascinating insights: on the problem itself, on the audience of this site, on a lot of things, really.