Scott Alexander and Daniel Kokotajlo’s article making the rational case for “why it’s OK to talk about misaligned AI”
aka
“painting dark scenarios may increase the chance of them coming true, but the benefits outweigh this possibility”
the original blog post:
https://blog.ai-futures.org/p/against-misalignment-as-self-fulfilling
the video I made about that article:
i super agree, and i also think the value is in debating the models of intelligence explosion.
which is why i made my website: ai-2028.com or intexp.xyz