Regarding “computational threshold”, my working assumption is that any given capability X is either (1) always and forever out of reach of a system by design, or (2) completely useless, or (3) very likely to be learned by a system, if the system has long-term real-world goals. Maybe it takes some computational time and effort to learn it, but AIs are not lazy (unless we program them to be). AIs are just systems that make good decisions in pursuit of a goal, and if “acquiring capability X” is instrumentally helpful towards achieving goals in the world, it will probably make that decision if it can (cf. “Instrumental convergence”).
If I have a life goal that is best accomplished by learning to use a forklift, I’ll learn to use a forklift, right? Maybe I won’t be very fluid at it, but fine, I’ll operate it more slowly and deliberately, or design a forklift autopilot subsystem, or whatever...