I agree that critics engaging with arguments badly is an update towards the arguments being real, but I am essentially claiming that because this selection effect exists and is very difficult to eliminate or reduce to a useful level, you can only get a very limited amount of evidence from arguments.
One particular part of my model here is that selection effects are, unfortunately, usually very strong and difficult to eliminate by default, which is why one of the central problems of science in general is how to deal with this sort of effect.
But it’s nice to hear how you came to believe that AI risk is a big deal.
Edit: I wrote a linkpost on the main way people turn ideologically crazy, which explains why you can only get a very limited amount of evidence from arguments, and I have retracted the statement that “I agree that critics engaging with arguments badly is an update towards the arguments being real”.