This might still be good for generating ideas (and possibly far more accurate than brainstorming or trying to generate models via ‘brute force’).
But the real trick is—how do we test these sorts of ideas?
Agreed this can be useful for generating ideas (and I do tons of it myself; I have hundreds of pages of docs filled with speculation on AI; I’d probably think most of it is garbage if I went back and looked at it now).
We can test the ideas in the usual ways: run RCTs, do observational studies, collect statistics, conduct literature reviews, make predictions and check them, and so on. The specific methods will depend on the question at hand (e.g. in my case, it was “read thousands of articles and papers on AI + AI safety”).
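One concrete way to “make predictions and check them” is to score probabilistic forecasts against outcomes. A minimal sketch using the Brier score (the specific forecasts below are hypothetical examples, not from the discussion):

```python
def brier_score(forecasts):
    """Mean squared difference between forecast probability and the 0/1 outcome.

    forecasts: list of (probability, outcome) pairs, outcome in {0, 1}.
    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical predictions paired with their eventual outcomes
forecasts = [
    (0.9, 1),  # predicted 90% likely, happened
    (0.7, 0),  # predicted 70% likely, didn't happen
    (0.2, 0),  # predicted 20% likely, didn't happen
]

print(round(brier_score(forecasts), 4))  # → 0.18
```

Tracking a score like this over many predictions is one way to tell whether speculation-heavy model-building is actually producing calibrated beliefs, rather than just plausible-sounding ones.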