As someone who has really not been a fan of a lot of the recent conversations on LessWrong that you mentioned, I thought this was substantially better: actually productive, and with some really good analysis.
Also, if you or anyone else has a good concrete idea along these lines, feel free to reach out to me and I can help you get support, funding, etc. if I think the idea is a good one.
(Moderation note: added to the Alignment Forum from LessWrong.)
I’d be curious to hear your thoughts on the other conversations, or at least which conversations specifically you’re not a fan of?
My guess is that Evan dislikes the apocalyptic/panicky conversations that people have been having on LessWrong recently.
That’s my guess also, but I’m asking just in case that’s not it and he actually disagrees with (for example) the Pragmatic AI Safety sequence, in which case I’d like to know why.
I was referring to stuff like this, this, and this.
I haven’t finished it yet, but I’ve so far very much enjoyed the Pragmatic AI Safety sequence, though I certainly have disagreements with it.