Thank you, I can’t find anything to complain about in this response. I am even less sympathetic to the anti-TESCREAL crowd, for the record, I just also don’t consider them dangerous. LessWrong seems dangerous, even if sympathetic, and even if there’s very limited evidence of maliciousness. Effective Altruism seems directionally correct in most respects, except maybe at the conjunction of dogmatic utilitarianism and extreme longtermism, which I understand to be only a factional perspective within EA. If they keep moving in their overall direction, that is straightforwardly good. If it coalesces at the movement level into a doctrinal set of practices, that is bad, even if it gains them scale and coordination. I think Scott Alexander (not a huge fan, but whatever) once said that the difference between a rational expert and a political expert is that one of them could be replaced by a rock with a directive written on it to always take whatever action has the highest unconditional probability of success. I’m somewhere between that anxiety, the anxiety that hostile epistemic processes exist which actively exploit dead players, and the anxiety that LessWrong in particular is on track to, at best, multiply the magnitude of the existing distribution of happinesses and woes by a very large number and then fix them in place forever, or, at worst, arm the enemies of every general concept of moral principle with the means to permanently usurp it (leading to permanent misery or the end of consciousness).
I know you have a lot of political critics who do not really engage directly with ideas. I have tried, to an extent I am not even sure is defensible, to always engage directly with ideas. My perspectives can probably each be found as minority perspectives among respected LessWrong members, but each one is already an extreme minority perspective on its own, so the conjunction of even three of them probably doesn’t exist in anyone else. But if I could decelerate anything it would be LessWrong right now. It’s the only group of people who would consensually actually do this, and I have presented a rough case for the esoteric arguments for doing so. It’s the only place where the desired behavior actually has real positive expectation. With everything else you just have to hope it’s like the Nazi atomic bomb project at this point, and that their bad philosophical commitments and opposition to “Jewish Science” also destroy their practical capacity. You cannot talk Heisenberg in 1943 into not being dangerous. If you really want him around academically and in friendly institutions after the war, that’s fine, honestly; the scale of the issues is such that caring about that just can’t be risked. But in the immediate moment that can’t be understood as a sane relationship.
What would it mean to decelerate Less Wrong?
Trying to stay focused on things I’ve already said, I guess it would mean adopting some sort of security posture toward dual-use concepts, particularly with regard to the attack of surreptitiously replacing the semantics of an existing set of formalisms to produce a bad outcome, and also de-emphasizing cultural norms that favor coordination in order to focus more on safety. It’s really just like the lemon market for cars → captured mechanics → captured mechanic certification thing: there just needs to be thinking about how these dynamics can escalate. Obviously increased openness could still plausibly be a solution to this in some respects, and increased closedness a detriment. My thinking is just that, at some point, AI will have the capacity to drown out all other sources of information, and if your defense against this is “I’ve read the Sequences”, that’s not sufficient, because the AI has read the Sequences too. You need to think ahead to “what could AI, either autonomously or in conjunction with human bad actors, do to directly capture my own epistemic formulas, overload them with alternate meanings, and then deprecate the existing meanings?” And you can actually keep going in paranoia from here, because obviously there are also examples of doing this literal thing that are good, like all of science, for example, and therefore there are not just people who will subvert credulity here but people who will subvert paranoia.
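To make that escalation chain concrete, here is a minimal toy simulation, not from the original comment, with invented numbers: buyers in a lemon market, then buyers who trust honest certification, then buyers who keep trusting certification after it has been captured. The only point it illustrates is the direction of the effect: a captured trust mechanism can leave you worse off than having no trust mechanism at all, because the heuristic itself becomes the attack surface.

```python
# Toy sketch of "lemon market -> trusted mechanics -> captured certification".
# All parameters are made up for illustration; only the ordering of outcomes matters.
import random

random.seed(0)

GOOD_VALUE, BAD_VALUE = 100, 20   # value of a good car vs. a lemon to the buyer
PRICE = 60                        # fixed price paid whenever the buyer chooses to buy
N = 10_000                        # simulated offers per regime


def buyer_surplus(p_good: float, certifier_accuracy: float, trust_cert: bool) -> float:
    """Average buyer surplus per offered car.

    p_good             fraction of genuinely good cars on the market
    certifier_accuracy probability the certificate reflects the car's true quality
    trust_cert         whether buyers rely on the certificate to decide to buy
    """
    total = 0.0
    for _ in range(N):
        good = random.random() < p_good
        # The certificate is correct with probability `certifier_accuracy`,
        # otherwise it asserts the opposite of the truth (the "captured" case).
        cert_says_good = good if random.random() < certifier_accuracy else not good
        buy = cert_says_good if trust_cert else True  # with no certificates, buy blind
        if buy:
            total += (GOOD_VALUE if good else BAD_VALUE) - PRICE
    return total / N


# Regime 1: raw lemon market -- few good cars, buyers buy blind.
print("lemon market:        ", round(buyer_surplus(0.3, 0.5, trust_cert=False), 1))
# Regime 2: honest mechanics -- certificates mostly track quality, buyers trust them.
print("honest certification:", round(buyer_surplus(0.3, 0.95, trust_cert=True), 1))
# Regime 3: captured mechanics -- buyers still trust certificates, which now mislead.
print("captured certifiers: ", round(buyer_surplus(0.3, 0.2, trust_cert=True), 1))
```

With these made-up numbers the captured-certification regime comes out worse than the original lemon market, which is the escalation being pointed at: “I’ve read the Sequences” plays the role of the certificate once the certifier can be captured.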
I guess the ultra-concise warning would be: “please perpetually make sure you understand how scientific epistemics fundamentally, versus merely conditionally, differ from dark epistemics, so that your heuristics don’t end up having you doing Aztec blood magic in the name of induction.”
In another post made since this comment, someone did make specific claims and intermingle them with analysis, and it was pointed out that this can also reduce clarity because different readers bring heterogeneous background assumptions. I think the project of rendering language itself unexploitable is probably more complicated than anything I can usefully contribute to. It might not even be solvable at the level I’m focused on; I might literally be making the same mistake.