Thank you, I can’t find anything to complain about in this response. I am even less sympathetic to the anti-TESCREAL crowd, for the record; I just also don’t consider them dangerous. LessWrong seems dangerous, even if sympathetic, and even if there’s very limited evidence of maliciousness. Effective Altruism seems directionally correct in most respects, except maybe at the conjunction of dogmatic utilitarianism and extreme longtermism, which I understand to be only a factional perspective within EA. If they keep moving in their overall direction, that is straightforwardly good. If it coalesces at a movement level into a doctrinal set of practices, that is bad, even if it gains them scale and coordination. I think Scott Alexander (not a huge fan, but whatever) once said that the difference between a rational expert and a political expert is that one of them could be replaced by a rock with a directive written on it saying to do whatever actions reflect the highest unconditioned probability of success. I’m somewhere between that anxiety, the anxiety that hostile epistemic processes exist which actively exploit dead players, and the anxiety that LessWrong in particular is on track, at best, to multiply the magnitude of the existing distribution of happinesses and woes by a very large number and then fix them in place forever, or, at worst, to arm the enemies of every general concept of moral principle with the means to permanently usurp it (leading to permanent misery or the end of consciousness).
I know you have a lot of political critics who do not really engage directly with ideas. I have tried, to an extent that I am not even sure is defensible, to always engage directly with ideas. My perspectives can probably each be found as minority perspectives among respected LessWrong members, but each individual one is already an extreme minority perspective, so the conjunction of even three of them probably doesn’t exist in anyone else. But if I could decelerate anything, it would be LessWrong right now. It’s the only group of people who would consensually actually do this, and I have presented a rough case for the esoteric arguments for doing so. It’s the only place where the desired behavior actually has real positive expectation. With everything else, you just have to hope it’s like the Nazi atomic bomb project at this point: that their bad philosophical commitments and opposition to “Jewish Science” also destroy their practical capacity. You cannot talk Heisenberg in 1943 into not being dangerous. If you really want him around academically, in friendly institutions, after the war, that’s fine; honestly, the scale of the issues is such that it just can’t be risked to care about that. But in the immediate moment, that can’t be understood as a sane relationship.
What would it mean to decelerate LessWrong?