[Question] Why not tool AI?

An extremely basic question that, after months of engaging with AI safety literature, I'm surprised to realize I don't fully understand: why not tool AI?

AI safety scenarios seem to conceive of AI as an autonomous agent. Is that because of the current machine learning paradigm, where we're setting the AI's goals but not specifying the steps to get there? Is this paradigm the entire reason why AI safety is an issue?

If so, is there a reason why advanced AI would need an agent-like, utility-function-driven setup? Is it just too cumbersome to give step-by-step instructions for high-level tasks?

Thanks!