But people underestimate how much more science needs to be done.
The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today's RNN architectures are sufficient, and the only missing piece is how to interconnect multi-columnar networks with meta-cognition networks.
It's probably not going to be useful for building a product tomorrow.
Yes. If the architecture is right and capable enough, little further science is needed to train such an AGI. It will learn on its own.
The amount of safety-related research needed is surely underestimated. The evolution of biological brains never required extra constraints; society needed and created constraints, and it had time to do so. Even if science gets the architecture right, do the scientists really know what is going on inside their networks? How can developers integrate safety? There will not be a society of similarly capable AIs that can constrain its members. These are critical open questions, especially because we have little existing precedent to copy from.