I think we need a clear definition of bad AI before we can know what is *not* that. These benchmarks seem to itemize AI as if it will have known, concrete components. But I think we need to first sketch, in the abstract, a runaway self-sustaining AI, and then work backwards to see which of its pieces are already in place.
I haven’t kept up with this community for many years, so I have some catching up to do, but I am currently on the hunt for the clearest, most concise places where the various runaway scenarios are laid out. I know there is a wealth of literature, and I have the Bostrom book from years ago as well, but I think simplicity is the key here. In other words: where is the AI red line?
I find the article well written, and it hits one nail on the head after another regarding the potential scope of what’s to come, but the overarching question of the black swan is a bit distracting. To greatly oversimplify, I would say a black swan is a category of massive event, on par with “catastrophe” and “miracle”; it just carries overtones of financial investors having (or not having) hedged their bets properly to prepare for it (that was the context of Taleb’s book, iirc).
Imho, the more profound point you started to address was our denial of these events: that we only really understand them retroactively. I think there is some inevitability to that, given that we can’t live perpetually in hypothetical futures.
I did read the book many years ago, but I forget Taleb’s prognosis: what are his strategies for preparing for unknown unknowns?