Agreed. Human-comparable AI can be transformative, if it's cheap and fast. Cheap, fast AI with an IQ of, say, 240 would clearly be very transformative. My speculation isn't about whether AI can be transformative; it's specifically about whether that transformation will keep accelerating along a J-shaped curve and hit a Singularity in a finite, possibly even short, amount of time. I'm suggesting that may be harder than people often assume, unless most of the solar system gets turned into computronium and solar panels.
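For concreteness (a standard textbook illustration, not anything specific to this discussion): the gap between "transformative" and "Singularity in finite time" is the gap between exponential and hyperbolic growth. Exponential growth $\dot{x} = kx$ gives $x(t) = x_0 e^{kt}$, which grows fast but stays finite at every finite $t$. Hyperbolic growth $\dot{x} = kx^2$ gives

$$x(t) = \frac{x_0}{1 - k x_0 t},$$

which actually diverges at the finite time $t^* = 1/(k x_0)$. The question is which regime the feedback loop lands in, not whether there is a feedback loop at all.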
An "IQ of 240" that can easily be scaled up to run in billions of parallel instances might be enough to produce a singularity. It could outcompete anything humans do by a large margin.
In principle, the exercise of sketching a theory and seeing whether something like it can be coaxed into making more sense is useful, and flaws don't automatically invalidate the exercise even when it's unclear how to fix them. But I don't see much hope here.
Human cognition demonstrates both sample efficiency and the ability to end up smart despite learning from low-quality data. With a bit of serial speed advantage, and a bit of going beyond average-human-researcher intelligence, it won't take long for AI to reproduce that. At that point the calculation needs new anchors; and in any case, the properties of pre-trained LLMs are only briefly relevant if the next few years of blind scaling spit out an AGI, and probably not relevant at all otherwise.