Just to be clear, while I “vibe very hard” with what the author says on a conceptual level, I’m not directly calling for you to shut down those projects. I’m trying to explain what I think the author sees as a problem within the AI safety movement. Because I am talking to you specifically, I am using the immediate context of your work, but only as a frame, not as a target. I found AI 2027 engaging and a good representation of one model of how takeoff could happen, and I thought it was designed and written well (tbh my biggest quibble is “why isn’t it called AI 2028?”). The author is very, very light on positive “what we should do” policy recommendations, so if I talked about that I would be filling in with my own takes, which probably differ from the author’s in several places. I am happy to do that if you want, though probably not publicly in a LW thread.