OK, thanks! It sounds like you are saying that I shouldn’t be engaged in research projects like the AI Futures Model, AI 2027, etc.? On the grounds that they are deceptive, by implying that the situation is more under control, more normal, more OK than it is?
I agree that we should try to avoid giving that impression. But I feel like the way forward is to still do the research but then add prominent disclaimers, rather than abandon the research entirely.
Silicon Valley isn’t actually doing some kind of carefully calculated compute-optimal RSI takeoff launch sequence with a well understood theory of learning. The AGI “industry” is more like a group of people pulling the lever of a slot machine over and over and over again, egged on by a crowd of eager onlookers, spending down the world’s collective savings accounts until one of them wins big. By “win big”, of course, I mean “unleashes a fundamentally new kind of intelligence into the world”. And each of them may do it for different reasons, and some of them may in their heads actually have some kind of master plan, but all it looks like from the outside is ka-ching, ka-ching, ka-ching, ka-ching...
Just to be clear, while I “vibe very hard” with what the author says on a conceptual level, I’m not directly calling for you to shut down those projects. I’m trying to explain what I think the author sees as a problem within the AI safety movement. Because I am talking to you specifically, I am using the immediate context of your work, but only as a frame, not as a target. I found AI 2027 engaging, a good representation of a model of how takeoff will happen, and I thought it was designed and written well (tbh my biggest quibble is “why isn’t it called AI 2028?”). The author is very, very light on actual positive “what we should do” policy recommendations, so if I talked about that I would be filling in with my own takes, which probably differ from the author’s in several places. I am happy to do that if you want, though probably not publicly in a LW thread.
I agree with this fwiw.