If there are patterns in your thinking that consistently cause you to believe things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation.
And if there isn't that problem, there is no need for that solution. For your argument to go through, you need to show that people likely to have an impact on AI safety are likely to have cognitive problems that affect them when they are doing AI safety. (Saying something like "academics are irrational because some of them believe in God" isn't enough. Compartmentalised beliefs have no impact precisely because they are compartmentalised. Instrumental rationality is not epistemic rationality.)
To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance:
if they admitted that AI safety was an issue, they’d be admitting that what they’re working on is dangerous and maybe shouldn’t be worked on, which contradicts their desire to work on it.
I can easily imagine an AI safety researcher maintaining a false belief that AI safety is a huge deal, because if they didn’t they would be a nobody working on a non-problem. Funny how you can make logic run in more than one direction.
I dare say.