I think I was imagining that the pivotal tool AI is developed by highly competent and safety-conscious humans who use it to perform a pivotal act (or series of pivotal acts) that effectively precludes the kind of issues mentioned in Wei’s quote there.
Even if you make this assumption, it seems like the reliance on human safety does not go down. I think you’re thinking about something more like “how likely it is that lack of human safety becomes a problem” rather than “reliance on human safety”.
I couldn’t say without knowing more what “human safety” means here.
But here’s what I imagine an example pivotal command looking like: “Give me the ability to shut down unsafe AI projects for the foreseeable future. Do this while minimizing disruption to the current world order / status quo. Interpret all of this in the way I intend.”