How about weighting each future f by the inverse of the probability of nuclear war before the time of AI in f (and then re-normalising)?
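For concreteness, here is a minimal sketch of that re-weighting, assuming each future f comes with an estimate p_f of the probability of nuclear war occurring before the arrival of AI in f (all names and numbers below are illustrative, not part of the original proposal):

```python
def reweight(futures):
    """futures: list of (prior_weight, p_nuke_before_ai) pairs.

    Weights each future by prior_weight / p_nuke_before_ai, i.e. the inverse
    of its pre-AI nuclear-war probability, then re-normalises so the new
    weights sum to 1.
    """
    raw = [prior / p for prior, p in futures]   # inverse-probability weighting
    total = sum(raw)
    return [w / total for w in raw]             # re-normalise

# Example: three futures with equal prior weight but different pre-AI nuclear risk.
print(reweight([(1.0, 0.1), (1.0, 0.3), (1.0, 0.6)]))
# Futures with lower pre-AI nuclear risk receive proportionally more weight.
```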