It seems like “get people to implement stronger restrictions” or “explain misalignment risks” or “come up with better regulation” or “differentially improve alignment” are all better applications of an AGI than “do a pivotal act”.
Is there any justification for this? I mean, it’s easy to see how you could “explain misalignment risks”, but the usefulness of that depends on eventually stopping every single PC owner from destroying the world. Similarly with “differentially improve alignment”: what are you going to do with the new aligned superhuman AI? Give it to every factory owner? It doesn’t sound like a plan that’s actually optimized for survival.
Thankfully, the Landauer limit and current results in AI mean we probably don’t have to do a pivotal act, because for the foreseeable future only companies will be able to create AGI.
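(For reference, and as a standard physics figure rather than anything specific to current AI results: the Landauer bound puts the minimum energy to erase one bit of information at E = k_B · T · ln 2, which at room temperature is about 1.38×10⁻²³ J/K × 300 K × 0.693 ≈ 2.9×10⁻²¹ J per bit. This is the usual starting point for arguments that large-scale computation has a hard thermodynamic floor; whether that floor actually keeps AGI out of individuals’ hands is the commenter’s claim, not something the bound itself establishes.)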
“Companies develop AGI in the foreseeable future” means that soon afterwards they will be able to improve the technology to the point where others can build it too. If the plan is “keep AGI to ourselves”, then fine, but that still leaves the question: “until what?”
I would say that companies should stop racing to AGI. AGI, if it is ever to be built, should only come after we have solved at least inner alignment.