But this is already presupposing the existence of the superintelligence whose feasibility we are trying to explain.
Strictly speaking, I only presupposed that an AI could reach close to the limits of human intelligence in terms of thinking ability, but with the inherent speed, parallelizability, and memory advantages of a digital mind.
Do you have any examples handy of AI being successful at real-world goals?
In small ways (i.e., sized appropriately for current AI capabilities), this kind of thing shows up all the time in chains of thought in response to all kinds of prompts, to the point that no, I don’t have specific examples, because I wouldn’t know how to pick one. The first that comes to mind, I guess, was using AI to help me develop a personalized nutrition/supplement/weight loss/training regimen.
Stepping back, I should reiterate that I’m talking about “the current AI paradigm.”
That’s fair, and a reasonable thing to discuss. After all, the fundamental claim of the book’s title is about a conditional probability: IF it turns out that anything like our current methods scales to superintelligent agents, we’d all be screwed.
I think you’re overestimating the discourse on Frost.