“Why should we have to recruit people? Or train them, for that matter? If they’re smart/high-executive-function enough, they’ll find their way here”.
Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.
Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
Funding the development of experimental “epistemology software”, like Arbital or Mathopedia
The classic one is Lightcone Infrastructure, the team that runs LessWrong and the Alignment Forum.
Note: CFAR has been a MIRI hiring pipeline for years, and it also seemed to function as a way of upskilling people in CFAR-style rationality, which CFAR thought were the load-bearing bits required to turn someone into a world-saver.
I don’t think anyone is saying this outright, so I suppose I will: pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second-order effects of pushing that particular frontier (both the capabilities and the Overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.
I absolutely agree. Since so much is going on in the brain, you can’t amplify intelligence without tearing down lots of Chesterton-Schelling fences. Likewise, making a community wealthy or powerful will push all the people, structures, and norms inside it out of distribution (OOD).
But at the same time, we need nuanced calculations comparing the expected costs and the expected benefits. We will need to do those calculations as we go along, so we can update based on which technologies and projects turn out to be low-hanging fruit. Staying the course also doesn’t seem to be a winning strategy.
You’re not going to just be able to stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won’t show up in your nuanced calculations of costs and benefits.
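To make that concern concrete, here is a toy expected-value sketch (my own illustration, with entirely made-up numbers): a calculation that includes only the modeled costs and benefits can look comfortably positive, while adding back a single low-probability, high-cost tail term is enough to flip the sign.

```python
# Toy illustration only (not from the original discussion; all numbers are made up):
# a cost-benefit estimate over the modeled outcomes can look clearly positive,
# while one unmodeled low-probability, high-cost "grey swan" term flips the sign.

def naive_ev(p_benefit: float, benefit: float, p_cost: float, cost: float) -> float:
    """Expected value over the modeled benefits and costs only."""
    return p_benefit * benefit - p_cost * cost


def ev_with_tail(p_benefit: float, benefit: float, p_cost: float, cost: float,
                 p_tail: float, tail_cost: float) -> float:
    """The same calculation with one tail risk added back in."""
    return naive_ev(p_benefit, benefit, p_cost, cost) - p_tail * tail_cost


print(naive_ev(0.6, 100, 0.4, 50))                   # 40.0  -- looks comfortably positive
print(ev_with_tail(0.6, 100, 0.4, 50, 0.02, 5_000))  # -60.0 -- the sign flips
```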
EY specifically means “intelligence in a broad sense”, including wisdom. One of his proposals was to “identify the structures that make people rationalize, and disable them”.
This is not what I mean by wisdom.
Double-click? I’m wondering what you mean by “cold sleep” here.