I elaborated a bit more on what I meant by “crazy”: https://www.lesswrong.com/posts/PMc65HgRFvBimEpmJ/legible-vs-illegible-ai-safety-problems?commentId=x9yixb4zeGhJQKtHb.
And yeah, I do have a tendency to take weird ideas seriously, but what’s weird about the idea here? That some kinds of safety work could actually be harmful?
Nah, the weird idea is AI x-risk, something that almost nobody outside of the LW-sphere takes seriously, even if some labs pay lip service to it.