“So how much light is that, exactly? Ah, now that’s the issue.
I’ll start with a simple and genuine question: Is what I’ve already said, enough?”
Enough for what purpose? There are two distinct purposes that I can think of. Firstly, there is the task of convincing some “elite” group of potential FAI coders that the task is worth doing. I think that enough has been said for this one. How likely is this strategy to work? Well,
Secondly, there is the task of convincing a nontrivial fraction of “ordinary” people in developed countries that the humanity+ movement is worth getting excited about, worth voting for, worth funding. This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics. For this task, abstract descriptions are not enough; people will need specifics. If you tell John and Jane Public that the AI will implement their CEV, they’ll look at you like you’re nuts. If you tell them that this will, as a special case, solve almost all of the problems that they currently worry about (their health, their stressed lifestyles, the problems that they have with their marriage, the dementia that grandpa is succumbing to, etc.), then you might be on to something.