[Question] Constraining narrow AI in a corporate setting

I have a question that is extremely practical.

My company is working with AI that is frankly not all that capable. A key example is text classification that humans currently do; automating it would let us avoid the expense of having a human do it, and off-the-shelf machine learning works fine for that. But we’re just starting the long process of learning and applying various AI techniques. Over time, that work should become much more sophisticated as we move beyond solving easy problems with very limited capabilities.
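To make concrete what I mean by “off-the-shelf,” here is a minimal sketch of that kind of text classifier, assuming a scikit-learn pipeline; the categories and example texts are hypothetical and just for illustration:

```python
# Minimal sketch of "off-the-shelf" text classification using scikit-learn.
# The labels and example documents below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short documents and the categories a human
# reviewer would have assigned them.
train_texts = [
    "Invoice for March data feed subscription",
    "Customer complaint about duplicate records",
    "Request to update billing address",
    "Report of malformed entries in the nightly import",
]
train_labels = ["billing", "data_quality", "billing", "data_quality"]

# TF-IDF features plus logistic regression: a standard, well-understood
# pipeline with fixed behavior after training -- no self-modification.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["The overnight import produced corrupted rows"]))
```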

I’m in a position to erect prohibitions now on any future AI work we do. These would be in the nature of “don’t do X without escalated approvals.” Should I draw up such a list and, if so, what should I put on it? For reference, our business is mainly acquiring and processing data, and we don’t have the resources of a Google to put toward research that doesn’t quickly pay off. So we won’t be doing anything cutting edge from the perspective of AI researchers; it will mainly be application of known techniques.

I can harmlessly include “don’t use or try to build an artificial general intelligence,” because we’ll never have the capability. But if that’s all I put on the list, there’s no point to it.

Should I put “don’t use or build any AI technique in which the AI engine recodes itself”? That’s probably too simple an expression of the idea, but I don’t want these prohibitions to be tied to particular versions of current technology.

Should I put “use additional security around AI engines to prevent external users from combining the capabilities of our AI engines with theirs” on the theory that a growing AI would imperialistically grab hold of lots of other AI engines to expand its capabilities?

I’m obviously out of my depth here, so I’d be grateful for any suggestions.
