Scott Alexander and Daniel Kokotajlo’s article rationally defending the claim that it’s OK to talk about misaligned AI,
aka
“painting dark scenarios may increase the chance of them coming true, but the benefits outweigh that risk”
the original blog post:
https://blog.ai-futures.org/p/against-misalignment-as-self-fulfilling
the video I made about that article: