“A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.”
I think I’d focus more on qualitative differences rather than quantitative ones. E.g., an AI system was able to solve protein folding when humans couldn’t despite great effort; this points at a future AI being able to design synthetic life and molecular nanotechnology, which is qualitatively different from anything humans can do.
(Though, disjunctively, there are also plenty of paths through which speed alone is sufficient to take over the world, i.e. things that a time-dilated human would be able to do.)
AFAICT this is the crux; Yarvin seems to think that superintelligence can’t exist, which he argues through the lens of a series of considerations that would matter for an AGI that was only as smart as a top-tier human, but which become minor speedbumps at most as soon as intelligence advances any further than that.
(Overall, I think the linked article reinforces my preexisting impression that Curtis Yarvin is a fool.)
>Overall, I think the linked article reinforces my preexisting impression that Curtis Yarvin is a fool.
Given that he was in the SMPY (Study of Mathematically Precocious Youth), I don’t think intelligence is what’s preventing him from understanding this issue; rather, he seems to have approached it uncritically and overconfidently. In effect, not distinguishable from a fool.