I haven’t read the entire post yet, but here are some thoughts I had after reading through roughly the first ten paragraphs of “Objection 2 …”. I think the problem with assuming, or judging, that tool-AI is safer than agent-AI is that a sufficiently powerful tool-AI would essentially be an agent-AI. Humans already hack other humans without directly manipulating each other’s physical persons or environments, and those hacks can drastically alter their own or other people’s persons and (physical) environments. Sometimes the safest course is not to listen to poisoned tongues.