This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
+1
To add:
It is evidence of that, but it’s not super strong, and in particular it doesn’t much distinguish “the generators of why humanity was suicidally dismissive of information and reasoning have changed” from “some other, more surface thing has changed, e.g. some low-fidelity public Zeitgeist has shifted, which makes humans make a token obeisance to the Zeitgeist, but not in a way that implies that key decision makers will think clearly about the problem”. The above comment points out that we have other reason to think those generators haven’t changed much. (The latter hypothesis is a paranoid hypothesis, to be sure, in the sense that it claims there’s a process pretending to be a different process (matching, at a surface level, the predictions of an alternate hypothesis) but that these processes are crucially different from each other. But paranoid hypotheses in this sense are just often true.) I guess you could say the latter hypothesis is also “humanity changing, and adapting somewhat to the circumstances presented by AI development”, but it’s not the kind of “adaptation to the circumstances” that implies that now, reasoning will just work!
Not to say “don’t try talking with people.”
Yes, my experience of “nobody listened 20 years ago when the case for caring about AI risk was already overwhelmingly strong and urgent” doesn’t put strong bounds on how much I should anticipate that people will care about AI risk in the future, and this is important; but it puts stronger bounds on how much I should anticipate that people will care about counterintuitive aspects of AI risk that haven’t yet undergone a slow process of climbing in mainstream respectability, even if the case for caring about those aspects is overwhelmingly strong and urgent (except insofar as LessWrong culture has instilled a general appreciation for things that have overwhelmingly strong and urgent cases for caring about them), and this is also important.