Do the people ‘who know what’s going on’ (presumably) have better arguments?
Possibly, but if so, I haven’t seen them.
My current belief is “who knows if there’s a major problem with recommender systems or not”. I’m not willing to defer to them, i.e. to say “there probably is a problem, based on the fact that the people who’ve studied them think there’s a problem”, because as far as I can tell all of those people got interested in recommender systems because of the bad arguments, so it feels a bit suspicious / selection-effect-y that they still think there are problems. I would engage with arguments they provide and come to my own conclusions (whereas I probably would not engage with arguments from other sources).
Do you?
No. I just have anecdotal experience + armchair speculation, which I don’t expect to be much better at uncovering the truth than the arguments I’m critiquing.
This might still be good for generating ideas (even if it’s not far more accurate than brainstorming or trying to come up with a way to generate models via ‘brute force’).
But the real trick is: how do we test these sorts of ideas?
Agreed this can be useful for generating ideas (and I do tons of it myself; I have hundreds of pages of docs filled with speculation on AI; I’d probably think most of it is garbage if I went back and looked at it now).
We can test the ideas in the normal way? Run RCTs, do observational studies, collect statistics, conduct literature reviews, make predictions and check them, etc. The specific methods are going to depend on the question at hand (e.g. in my case, it was “read thousands of articles and papers on AI + AI safety”).
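(As a minimal, hypothetical sketch of the “make predictions and check them” step: the names and numbers below are invented, and the Brier score is just one standard way to check how well-calibrated a set of probabilistic predictions turned out to be.)

```python
# Hypothetical sketch: scoring probabilistic predictions with a Brier score.
# The data here is invented -- the point is only that "make predictions
# and check them" can be made quantitative.

def brier_score(predictions, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better; always guessing 0.5 scores 0.25.
    """
    assert len(predictions) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Invented example: stated probabilities vs. what actually happened.
predictions = [0.9, 0.7, 0.3, 0.8]   # forecast probabilities for each claim
outcomes    = [1,   1,   0,   0]     # 1 = the claim turned out true, 0 = false

print(f"Brier score: {brier_score(predictions, outcomes):.3f}")  # 0.208
```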