The ‘one big breakthrough’ idea is definitely a way that you could have easy marginal intelligence improvements at HLMI, but we didn’t call the node ‘one big breakthrough/few key insights needed’ because that’s not the only way it’s been characterised. E.g. some people talk about a ‘missing gear for intelligence’, where some minor change that isn’t really a breakthrough (like tweaking a hyperparameter in a model training procedure) produces massive jumps in capability. Like David said, there’s a subsequent post where we go through the different ways the jump to HLMI could play out, and One Big Breakthrough (we call it ‘few key breakthroughs for intelligence’) is just one of them.
I guess I’d just suggest that in “ML exhibits easy marginal intelligence improvements”, you should specify whether the “ML” is referring to “today’s ML algorithms” vs “Whatever ML algorithms we’re using in HLMI” vs “All ML algorithms” vs something else (or maybe you already did say which it is but I missed it).
Looking forward to the future posts :)