That said, I agree with Sam that in the short term more of the harm comes from misuse than misalignment. I just think the "short term" could be quite short, and normal people are not so myopic that the costs of misuse are comparable to, say, a 3% risk of death within 10 years. I also think "misuse" vs. "misalignment" can be blurry in a way that makes both positions more defensible, e.g. a scenario where OpenAI trains a model which is stolen and then deployed recklessly can involve both. Misalignment is what makes that event catastrophic for humanity, but from OpenAI's perspective any event where someone steals their model and applies it recklessly might be described as misuse.