Wouldn’t it be a more effective strategy to point out to China, the NSA, Goldman Sachs, etc., that if they actually succeed in building a Kludge AI they’ll paper-clip themselves and die?
We’ve been trying, and we’ll keep trying, but the response to this work so far is not encouraging.
Yeah, you kind of have to deal with the handicap of being the successor organization to the Singularity Institute, which was really noticeably bad at public relations. Note that I say “at public relations” rather than “at science”.
Hopefully you got those $3 I left on your desk in September to encourage PUBLISHING MOAR PAPERS ;-).
Actually, to be serious for a moment, there are some open scientific questions here.
Why should general intelligence in terms of potential actions correspond to general world optimization in terms of motivations? If values and intelligence are orthogonal, why can’t we build a “mind design” for a general AI that would run a kebab truck as well as a human and do nothing else whatsoever?
Why is general intelligence so apparently intractable when we are living examples that demonstrably manage to get up in the morning and act usefully each day without having to spend infinite or exponential time calculating possibilities?
Once we start getting into the realm of Friendliness research, how the hell do you specify an object-level ontology to a generally intelligent agent, to handle concepts like “humans are such-and-so agents and your purpose is to calculate their collective CEV”? You can’t even build Clippy without an ontology, though strangely enough, you may be able to build a Value Learner without one (a toy sketch of that asymmetry follows below).
All of these certainly make a difference to whether the probable outcome of a Kludge AI is Clippy, FAI, or Kebab AI.
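To make that asymmetry concrete, here is a deliberately toy sketch (all names and the linear “reward model” are hypothetical illustrations, not anything MIRI has actually proposed): a Clippy-style agent needs its utility function written over an ontology that already contains the concept “paperclip”, whereas a value learner can, at least in principle, fit a reward model over raw observations and approval labels without ever being told what objects those observations contain.

```python
# Purely illustrative sketch; every name here is hypothetical, not from the discussion above.

# Clippy-style agent: the utility function is hard-coded over an ontology that
# must already contain "paperclips" as a concept. If the agent's world model
# has no such symbol, the utility function has nothing to refer to.
def clippy_utility(world_state: dict) -> float:
    return world_state.get("paperclip_count", 0.0)


# Value-learner-style agent: no object-level ontology is assumed. It only sees
# raw feature vectors plus human approval labels, and fits a reward model over
# whatever representation it happens to have.
def fit_reward_model(observations, approvals, lr=0.01, epochs=1000):
    """Fit a toy linear reward model by stochastic gradient descent."""
    weights = [0.0] * len(observations[0])
    for _ in range(epochs):
        for obs, approval in zip(observations, approvals):
            error = approval - sum(w * x for w, x in zip(weights, obs))
            weights = [w + lr * error * x for w, x in zip(weights, obs)]
    return weights


def learned_reward(weights, obs):
    return sum(w * x for w, x in zip(weights, obs))


# Toy usage: the learner never needs to be told what its features "mean".
observations = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
approvals = [1.0, 0.0, 1.0]
weights = fit_reward_model(observations, approvals)
print(learned_reward(weights, [1.0, 0.5]))
```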
I did. :)