Replace “unknown random team” with the US military and a garage with a “military base” and you would be correct. There is no incentive for militaries to stop building autonomous drones/AGI.
Militaries are certainly doing that, I agree.
However, I am not sure they are creative enough, or willing enough to give up control, to try to build seriously self-modifying systems. They also don’t mind spending tons of money and allocating large teams, so they might not be aiming for artificial AI researchers all that much. And they are afraid of losing control (they know how to control people, but artificial self-modifying systems are something else).
Whereas a team in a garage is creative, is short on resources, and is quite interested in creating a team of artificial co-workers to help them (success at that automatically leads to a serious recursive self-improvement situation). Such a team might also not hesitate to try other recursive self-improvement schemas (we are seeing more and more descriptions of novel recursive self-improvement schemas in recent publications), so they might end up with a foom even before they build more conventional artificial AI researchers (a sufficiently powerful self-referential metalearning schema might result in that). A typical experience is that all those recursive self-improvement schemas saturate disappointingly early, so the teams will be pushing harder at them, trying to prevent premature saturation, and someone might succeed too well.
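To make the “saturation” point concrete, here is a purely illustrative toy sketch (my own made-up dynamics and numbers, not anyone’s actual self-improvement schema): each round multiplies capability by a gain factor, and the gain itself is rescaled each round by a decay parameter. Below a critical value of that parameter the loop levels off quickly; at or above it, the same loop runs away.

```python
# Toy model only (made-up illustration, not any real system):
# capability is multiplied by (1 + gain) each round, and gain itself is
# rescaled by `decay`. decay < 1 models a schema that runs out of easy
# wins; decay >= 1 models one where each round helps at least as much
# as the previous one.

def run_self_improvement(initial_gain: float, decay: float, rounds: int = 100) -> float:
    capability, gain = 1.0, initial_gain
    for _ in range(rounds):
        capability *= 1.0 + gain   # this round's self-improvement
        gain *= decay              # how much the next round will help
    return capability

for decay in (0.7, 0.9, 1.05):
    print(f"decay={decay}: capability after 100 rounds ≈ {run_self_improvement(0.2, decay):,.1f}")
```

The only point of the toy is that the same loop can look disappointingly flat over a wide range of parameters and then run away once the feedback is strong enough; that is the “someone might succeed too well” scenario.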
Basically, having “true AGI” means being able to create competent artificial AI researchers, which are sufficient for very serious recursive self-improvement capabilities. But one might also obtain drastic recursive self-improvement capabilities well before achieving anything like “true AGI”. “True AGI” is sufficient to start a far-reaching recursive self-improvement, but there is no reason to think it is necessary for that (being more persistent at hacking the currently crippled self-improvement schemas, and at studying ways to improve them, might be enough).