I don’t believe that a gray goo recipe can look like a cake recipe. I believe there are recipes for disastrously harmful things that look like recipes for desirable things; but is a goal-less Question Answerer producing a deceitful recipe more likely than a human working alone accidentally producing one?
The problem of making the average user as prudent as a Less Wrong user seems much easier than FAI. Average users already know to take the results of Wolfram Alpha and Google with a grain of salt. People working on synthetic organisms and nuclear radiation already know to take precautions when doing anything for the first time.
My point about assuming the entire world were Less Wrong users is that there are teams, made up of people who are not Less Wrong users, who will develop UFAI if we wait long enough. So a quick and slightly dirty plan (like building this sort of potentially dangerous Oracle AI) may beat a slow and perfect one.
Oh! I see. That makes sense.