Firstly, a lot of aspects of a system would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't.
I agree, but trying to solve the problem without any hands-on knowledge is certainly more difficult.
Secondly, it's very, very hard to pinpoint the "intelligence" of a program without running it.
I agree, there is a risk that the first AGI we build will be intelligent enough to skillfully manipulate us. I think the chances are quite small. I find it difficult to imagine skipping dog-level and human-level intelligence and jumping straight to superhuman intelligence, but it is certainly possible.