I wouldn’t say it’s the same and completely familiar. It will require different means than bio and cyber (indeed there are also important differences between bio and cyber, one of which is precisely that it is harder to tell valid and malicious coding queries apart). I was just saying we can use the same general process and framework of evaluations, mitigations, etc. In this sense I am also glad that we are not dealing with the intelligence agencies for now, since the workflows there might be harder to tell apart.
I agree that there are qualitative similarities, so perhaps we should be quantitative about it. Assuming for the sake of argument that the DoW were acting in bad faith and planned to use OpenAI’s services to conduct domestic mass surveillance (legally), how likely do you think it is that OpenAI would be able to prevent this? Given the difficulties I mentioned (indistinguishable from innocuous use, problematic only in aggregate, novel setting, classified, ZDR, no meaningful contractual recourse), it would seem like a big stretch to reach ~50% confidence in my opinion, even with considerable effort on OpenAI’s part.
Perhaps you think it’s unlikely that the DoW is acting in bad faith, but if so, it’s good to be clear about whether this is a load-bearing assumption.