Yeah, I pretty much agree with what you're saying. But I think I misunderstood the comment you made before mine, and the thing you're talking about wasn't captured by the model I wrote in my last comment; so I have some more thinking to do.
I didn't mean "can be trusted to take AI risk seriously" as "indeterminate trustworthiness, but cares about x-risk"; I meant it as the conjunction of trustworthy and cares about x-risk.