(I’m reviewing my own post, which LessWrong allows me to do and I am therefore assuming is OK under the doctrine of Code Is Law)
I’m still very pleased with this post. Having spent an additional year in AI risk comms, I stand by the points I made. I think the bar for AI risk comms is much higher now than it was when I wrote this post, though it could still be higher, and I don’t expect my Shapley value is particularly high on this front: lots of people have worked at this!
I’m not the best person to review this, given that it is me giving advice; ideally someone other than me would say whether or not they’d used my advice and it had helped them! But I do have the next-next-best thing, which is a comment on the EA forum crosspost of someone saying they were using my advice (though nobody has given me any information on whether it worked).
I think there are a couple more failure modes worth writing about, which I might turn into a new post:
One is forgetting that the audience are not rationalists. There are a couple of odd lines which stuck out to me in IABIED which are clearly references to previous Eliezer writings and/or quirks of Eliezer’s thinking, and which don’t need to be there. IABIED is in many ways an attempt at something better than a simple-English Yud essay.
Another is using politics speak when normal speak would do. Politics speak (something I did myself when I emailed my MP, writing “The UK must lead on AI regulation”) actively makes your argument weaker in some cases: it’s like showing up to a coding interview wearing a suit and tie; it signals you’re not confident in your argument on its own merits.
Self reviews are actively encouraged! Indeed, we will ask authors to do so explicitly in the review phase.