Personally, I care primarily about AI risk for a few reasons. One is that it is an extremely strong feedback loop. There are other dangerous feedback loops, including nanotech, and I am not confident which will be a problem first. But I think AI is the hardest risk to solve, and also has the most potential for negative utility. I also think that we are relatively close to being able to create AGI.
As far as I know, the SI is defined by its purpose of reducing AI risk. If other risks need long-term work, then each risk needs a dedicated group to work on it.
As for LW, I think it’s simply that people read EY’s writing on AI risk, and those that agree tend to stick around and discuss it here.
In my book there are two forms of AI, and either one carries risk: the AI that learns, or the AI that comes preloaded with complete knowledge. To involve an AI in risk assessment, you would need to set it loose in the wild with nothing held back. But would you really do that to an AI? It would be like shoving all the world's information into the brain of a 13-year-old girl; she would go berserk and end up defiant.
The best alternative, a safe AI that carries no such risk, would be the copied brain of a scientist.