In my opinion, an article like this is not worth the time to decipher. There are probably some good ideas here, but they’re buried under a low signal-to-noise ratio.
Intelligence is a resource, not an entity
This reads like Ryle-inspired pseudo-philosophy; I don’t understand what these terms mean or why I am being told not to confuse them. And it doesn’t connect with his next claim, that superintelligent AIs don’t need to be agents when structured workflows can steer them into having capabilities. I wish he’d dwell on this point more, but he never brings it up again.
The crucial question, then, is what we should do with AI, not what “it” will do with us.
Um, no it’s not. This is just a rhetorically empty antimetabole completely disconnected from the rest of the essay.
Nice truism there. Is that sentence even grammatically correct?
Rather than fragile vibe-coded software, AI will yield rock-solid systems
For both learning and inference, costs will fall, or performance will rise, or both
Can you substantiate your points instead of just saying things?
AI-enabled implementation capacity applied to expanding implementation capacity, including AI: this is what “transformative AI” will mean in practice.
Umm… What?
optimization means minimizing—not maximizing—resource consumption
This is just flatly false. Plenty of optimization problems involve finding a maximum: a salesman who wants to sell as many goods as possible is maximizing, not minimizing.
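To make the point concrete, here’s a toy 0/1 knapsack problem, a textbook optimization problem whose objective is explicitly *maximized*. (The goods, weights, and values below are invented for illustration; they aren’t from the essay.)

```python
from itertools import combinations

# Each good maps to (weight, value). The "salesman" wants the bundle
# with the highest total value that still fits under the weight limit.
goods = {"radio": (4, 30), "laptop": (3, 25), "camera": (2, 15)}
limit = 5

def best_bundle(goods, limit):
    """Brute-force 0/1 knapsack: optimization by MAXIMIZING total value."""
    best = ((), 0)
    for r in range(len(goods) + 1):
        for combo in combinations(goods, r):
            weight = sum(goods[g][0] for g in combo)
            value = sum(goods[g][1] for g in combo)
            if weight <= limit and value > best[1]:
                best = (combo, value)
    return best

print(best_bundle(goods, limit))  # → (('laptop', 'camera'), 40)
```

(Of course, any maximization can be recast as minimizing the negated objective, but that only underlines that optimization is not inherently about minimizing resource consumption.)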
The framework I’ve described is intellectual infrastructure for a transition that will demand clear thinking under pressure
The framework described here is nothing, and I don’t even understand the problem it was supposed to solve.
I could go further, but you get the point.
So yeah, this essay is badly written slop. It’s hard to read, and not just because it’s platitudinous: the ideas are all over the place, don’t logically connect, and the piece is riddled with irrelevant, unsubstantiated claims.
I take this to be pretty strong evidence that this is not a good article for people reading Drexler to start with! (FWIW I valued reading it, but I’m now realising that the value I got was largely in understanding a bit better how Eric’s sweep of ideas connect, and perhaps that wouldn’t have been available to me if I hadn’t had the background context.)
Edit: I edited the original post to change the recommendation there slightly.
Hmm, k. I do think it fails to produce a readable essay, as Claude’s writing often does.
Do you just dislike the writing style or do you think it’s seriously “unreadable” in some sense?
Yeah, I’m sure this is not a typical example of his writing style or exposition of the ideas he’s advocated for over the bulk of his career.
I think it’s pretty seriously unreadable. Like, most of it is vague big metaphors that fail to explain anything mechanistically.