Notice that his probabilities are for “AI will be able to do 80% of the jobs a human can do.” That’s much more limited than “general intelligence” (whatever that is) and thus much more likely.
As well, if the research focus is on “how do we automate production?” rather than “how do we create mentally superior beings that will take over reality?” it seems that the chance of the machine intelligences making us go extinct is much lower. The Staples supply bots want to fulfill orders for paperclips, not maximize the number of paperclips that exist. From that perspective, estimating the chance of a FOOM by 2100 at one in ten thousand doesn’t sound unreasonably low to me.
Actually, the more I think about the “80% of the jobs a human can do” metric, the more I wonder about it.
I mean, a particularly uncharitable interpretation starts counting jobs like “hold this door open”, in which case it’s possible that existing computers can do 80% of the jobs a human can do. (Possibly even without being turned on.)
I mean, a particularly uncharitable interpretation starts counting jobs like “hold this door open”
Well, ‘charitable’ is hard to judge there. That interpretation makes it easier for computers to meet the standard, but is the threshold more meaningful when it’s easy or hard to clear? Hard to say.
Even if by jobs he means “things people get paid to do full-time,” you have the question of weighting jobs equally (if even one person gets paid to floss horse teeth, that goes on the list of things an AI has to be able to do) or by composition (only one person doing the job means it’s a tiny fraction of jobs). But the second is a fluid thing, especially as jobs are given to machines rather than people!
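To make the two weightings concrete, here’s a toy sketch. All job names, worker counts, and “can an AI do it?” flags are invented for illustration; the point is only that the two ways of counting can give very different numbers.

```python
# Hypothetical job data: (number of workers, can an AI do it?)
jobs = {
    "retail clerk":        (3_000_000, True),
    "truck driver":        (2_000_000, True),
    "software engineer":   (1_000_000, False),
    "surgeon":             (  100_000, False),
    "horse-teeth flosser": (        1, False),
}

# Weighting jobs equally: each distinct job counts once,
# so the one horse-teeth flosser matters as much as all retail clerks.
by_job = sum(ai for _, ai in jobs.values()) / len(jobs)

# Weighting by composition: jobs count by how many people do them.
total_workers = sum(n for n, _ in jobs.values())
by_worker = sum(n for n, ai in jobs.values() if ai) / total_workers

print(f"equal weighting:       {by_job:.0%} of jobs automatable")
print(f"composition weighting: {by_worker:.0%} of jobs automatable")
```

With these made-up numbers, equal weighting says 40% of jobs are automatable while composition weighting says about 82%, so whether the “80% of jobs” threshold has been crossed depends entirely on which count you pick. And as the text notes, the composition weights themselves shift as jobs move from people to machines.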