I think when LWers say “raise the sanity waterline,” two ideas are in play. One is to make everyone a little more sane. That’s nice, but overall probably not very beneficial to the FAI cause. The other is to make certain key people a bit more sane: hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.
There’s another possible scenario: The AI Singularity isn’t far off, but it is not very near, either. AGI is a generation or more beyond our current understanding of minds, and FAI is a generation or more beyond our current understanding of values. We’re making progress, and current efforts are on the critical path to success, but that success may not come during our lifetimes.
Since this is a possible scenario, it’s worth having insurance against it. That means making sure the next generation is competent to carry on the effort, and survives to do so.
Cultivating a culture of rationality, awareness of existential risks, etc. is surely valuable for that purpose, too.
Good point, thanks.