Logical positivism asserted that only statements verifiable through direct observation or logical proof should be considered meaningful. As a philosophical position, it’s self-refuting (if it’s true, it’s meaningless). As a rule of thumb about which questions are likely to reward investigation, it works pretty well.
For example, “AI risk” is incredibly vague: “AI” is a large class of possible devices, and there are many forms of “risk”. If a problem can’t be clearly stated, then logical proof is not a useful approach, and direct observation only works on things that actually exist. So I’d say that “AI risk” is not likely to be a tractable question, although “the effect of algorithmic trading on US agricultural commodities markets” or “the effect of social media ranking algorithms on the 2020 US elections” probably are.