First, when I talk to security staff at AI companies about computer security, they often seem to fail to anticipate what insider threat from AIs will look like.
Why do you think that is? Is it that they are not thinking about large numbers of automated agents running around doing a bunch of research?
Or is it that they are thinking about these kinds of scenarios, and yet they still don’t apply the insider threat frame for some reason?