In a general discussion of ethics your replies are very sensible. When discussing AI safety, and P(doom) in particular, they are not. Your analogy does not work. It effectively says that trying to prevent an AI from killing us all by blocking its access to the internet with a password is better than not using a password at all. But an AI that is a genuine threat to us will not be stopped by a password, and it will not be stopped by an imperfect heuristic either. If we don't have 100% certainty, we should not build it.