Won’t the goal of getting humans to reason better necessarily turn political at a certain point? After all, if one side of an issue is decidedly better from some ethical perspective we have accepted, won’t the rationalist have to advocate that side? Wouldn’t refraining from taking political action then be unethical? This line of reasoning might need some reinforcement to be fully convincing, but the point is this: since political action is still action, a space that covers rationality and ethics but excludes politics would be stifling a (very consequential) part of the discussion.
I’m not here very frequently; I just really like political theory, and I’ve seen around the site that you guys try not to discuss it too much. It’s not common to find a good place to discuss it, as one would expect, but I’d love to find one!
So the idea is that if you get as many people in AI business/research as possible to read the sequences, that will change their ideas in a way that makes them work on AI more safely, and that will avoid doom?
I’m just trying to understand how exactly the mechanism that will lead to the desired change is supposed to work.
If that is the case, I would say the critique made by OP is really on point. I don’t believe the current approach is convincing many people to read the sequences, and I also think reading the sequences won’t necessarily change people’s actions when business/economic/social incentives push the other way. The latter is unavoidably a regulatory problem, and the former a problem of communications strategy.
Or are you telling me to read the sequences? I intend to at some point; I just have a lot to read already, and I’m not exactly good at reading consistently. That said, I don’t deny that having good material on the subject is essential.