The point isn’t that I’m unaware of the orthogonality thesis; it’s that Yudkowsky doesn’t present it in his recent popular articles and podcast appearances[0]. He asserts that the creation of superhuman AGI will almost certainly lead to human extinction (unless massive amounts of alignment research have first been successfully carried out), but he doesn’t present an argument for why that is the case. Why not? Is it because he thinks normies cannot comprehend the argument? Is this not a black pill? IIRC, on the Bankless podcast he did assert that a superhuman AGI would likely decide to use our atoms, but he didn’t present a convincing argument in favour of that position.
[0] see, e.g., https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Nit: that’s not what “solved” means. Superhuman ability ≠ solved.