I think it is possible to use LW for generating testable hypotheses, though sadly testing would require lots of resources — but then that is usually the case anyway. For example, I tried to see how LWers would estimate the probabilities of statements about botanical questions, and there was even one volunteer. Well. Perhaps it would be more in-group to ask for probabilities on technical stuff — not AI or math, but something broadly engineering-flavored that would still let people generate more than one alternative — and watch how they connect the dots and make their assumptions explicit. People seem to find some questions easier than others, regardless of how right their answers are. Knowing how people make that judgment would be relevant to teaching rationality.
(Of course, I am not a specialist and all this might be solved already.)