Stephen apparently found that the LLMs consistently suggest these people post on LessWrong, so insofar as you are extrapolating by normalizing against the size of the LessWrong userbase (suggested by “that’s just on LessWrong”), that extrapolation seems likely to be wrong.
Edit: I will say, though, that I still agree this is worrying, but my model of the situation is much more along the lines of crazies being made crazier by the agreement machine[1] than of something very mysterious going on.
Contrary to the hope many have had that LLMs would make crazies less crazy by being more patient & better at arguing than regular humans, ime they seem to have a memorized list of things it’s bad to believe, which in new chats they will argue against you on, but for beliefs not on that list…
Yeah, Stephen’s comment is indeed a mild update back in the happy direction.
I’m still digesting, but a tentative part of my model here is that it’s similar to what typically happens to people in charge of large organizations: they accidentally create selection pressures which surround them with flunkies who display what the person in charge wants to see, and they thereby lose the ability to see reality. And that’s not something which just happens to crazies. For instance, this is my central model of why Putin invaded Ukraine.