On a tangential note, your two contrasting paragraphs:
This was in large part the original plan of the whole rationalist project. Raise the sanity waterline. Give people the abilities and habits necessary to think well, both individually and as a group. Get our civilization to be more adequate in a variety of ways. Then, perhaps, they will be able to understand the dangers posed by future AIs and do something net useful about it.
...
Those who see a world where getting ahead means connections and status and conspiracy and also spending all your time in zero-sum competitions, and who seek to play the games of moving up the ranks of corporate America by becoming the person who would succeed at that, are not going to be the change we want to see.
Made me reflect on this whole business. Zero-sum competitions will never disappear, so how exactly do we know “raising the sanity waterline” is even possible?
Raising the median, or the mean, might be possible, but the lower bound could consist of small groups or even a single individual, and it can always be pushed lower as part of the process of such competitions.
i.e. On a planet of 8 billion, how exactly could the entire “waterline” be monitored?