Asking Precise Questions

Isaac Asimov once described a future in which all technical thought was automated, and the role of humans was reduced to finding appropriate questions to pose to thinking machines. I wouldn’t suggest planning for this eventuality, but it struck me as an interesting situation. What would we do, if we could get the answer to any question we could formulate precisely? (In the story, questions didn’t need to be formulated precisely, but never mind.) For concreteness, suppose that we have a box as smart as a million Einsteins, cooperating effectively for a century every time we ask a question, but which is capable only of solving precisely specified problems.

You can’t say “analyze the result of this experiment.” You can say, “find me the setting for these 10 parameters which best explains this data” or “write me a short program which predicts this data.” You can’t say “find me a program that plays Go well.” You can say, “find me a program that beats this particular Go AI, even with a 9-stone handicap.” And so on. More formally, let’s say you can specify any scoring program and ask the box to find an input that scores as well as possible.
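The query interface above can be made concrete with a toy sketch. Everything here is hypothetical and not from the original: `ask_box` stands in for the box, and we brute-force a tiny candidate space rather than searching the astronomical spaces the box could handle.

```python
# Illustrative sketch (all names hypothetical): a single query hands the
# box a scoring program; the box returns an input scoring as well as it can.

def ask_box(score, candidates):
    """Stand-in for the box: return the candidate with the best score.

    A real box would search an astronomically larger space; we brute-force
    a small finite one just to make the interface concrete.
    """
    return max(candidates, key=score)

# Example query in the spirit of "find the parameter setting which best
# explains this data": fit a slope to points lying on the line y = 2x.
data = [(1, 2), (2, 4), (3, 6)]

def score(slope):
    # Negative squared error, so that higher scores are better.
    return -sum((y - slope * x) ** 2 for x, y in data)

best = ask_box(score, [s / 10 for s in range(0, 50)])  # slopes 0.0 .. 4.9
print(best)  # -> 2.0
```

The key restriction the post describes is visible here: the human must supply `score` in full; the box never interprets a vague request like “explain this data.”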

What would you do, if you got exactly one question? I don’t think humanity is poised to get any earth-shattering insights. I don’t think we could find a theory of everything, or a friendly AI, or any sort of AI at all, or a solution to any real problem facing us, using just one question. But maybe that is just a failure of my creativity.

What would you plan to do, if you had unlimited access? An AGI or brain emulation arguably implicitly converts our vague real-world objectives into a precise form. Are there other ways to bridge the gap between what humans can formally describe and what humans want? Can you bootstrap your way there starting from current understanding? What would a reasonable first step be?