A natural extension of the way AI interacts today via MCP makes it a kind of insider: one with a specific role, and specific access patterns that match that role.
Even an org that is not concerned with misaligned AI will still want to lock down exactly what updates each role can make within the org, just as orgs typically lock down access for different roles within a company today.
Most employees cannot access accounts receivable, and access to production databases in a tech company is very carefully guarded. Mostly this is not from fear of malevolence; it's a fear that a junior dev could easily bollix things horribly with one errant command. In much the same way, the org will want to specialize the AI into different roles, grant different access according to those roles, and test each specialized AI in its role.
All of this seems to follow quite naturally from existing corporate practice.
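To make the idea concrete, here is a minimal sketch of what per-role gating of MCP tool access might look like. Everything here is an assumption for illustration: the role names, tool names, and the idea of a gateway calling an `authorize` check before forwarding a tool invocation are hypothetical, not part of any real MCP server or SDK.

```python
# Hypothetical sketch: per-role allowlists for MCP tool access.
# Role names, tool names, and the enforcement hook are assumptions
# made for illustration, not a real MCP server API.

from dataclasses import dataclass, field


@dataclass
class Role:
    name: str
    allowed_tools: set[str] = field(default_factory=set)


ROLES = {
    "support-agent": Role("support-agent", {"crm.lookup_ticket", "crm.draft_reply"}),
    "code-reviewer": Role("code-reviewer", {"repo.read_file", "repo.post_comment"}),
    # Note: no role is granted "db.prod.execute"; production access stays human-gated.
}


def authorize(role_name: str, tool: str) -> bool:
    """Return True only if the role explicitly allows the requested tool."""
    role = ROLES.get(role_name)
    return role is not None and tool in role.allowed_tools


# A gateway sitting between the model and the MCP server would call this
# before forwarding any tool invocation the model requests.
assert authorize("code-reviewer", "repo.read_file")
assert not authorize("code-reviewer", "db.prod.execute")
```

The point of the sketch is just that the same allowlist discipline orgs already apply to human roles carries over directly to AI roles, and can be tested per role in the same way.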
But I expect this level of diligence will fall short of anything that could really stop a misaligned ASI, or even a strong AI.
So it seems the closest analogy is an insider threat, but I think the real remediation of that threat is far from clear.