Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.
I get the impression that finding ‘new ways of improving thinking skill’ is a task that has mostly been saturated. If people don’t have great thinking skill, it might be because:
1) Reality provides extremely sparse feedback on ‘the quality of your/our thinking skills’ so people don’t see it as very important.
2) For any individual human, one of seven billion, thinking rationally is often a worse option than thinking irrationally in the same way as a particular group of humans, since shared opinions facilitate group membership. It’s very hard to ‘go it alone’.
3) (related to 2) Most decisions that a human has to make have already been faced by innumerable previous humans and do not require a lot of deep, fundamental-level thought.
These effects seem to present challenges to level-headed, rational thinking about the future of humanity. I see a lot of #2 in bad, broken thinking about AI risk, where the topic is treated as a proxy war for prosecuting various political/tribal conflicts.
In fact, the worst may be yet to come in terms of political/tribal conflict influencing thinking about AI risk.