Is the intention of this project to approximate the list of questions which, if answered, would let someone build an FAI? (That seems incompatible with SIAI’s traditionally more secretive approach. Surely the Manhattan Project would never have published a list of questions which, if answered, would let someone build a nuclear bomb.)
If yes, is this a strategic (i.e., considered) change of plans? If so, what were the arguments that changed people’s minds?
Or is the intention just to define some open problems that are FAI-related, but whose answers, if made public, would pose no significant danger?
I hope not to publish problem definitions that increase existential risk. I plan to publish problem definitions that decrease existential risk.
Your response seems rather defensive. Perhaps it would help to know that I only recently started thinking that some seemingly innocuous advances in FAI research might turn out to be dangerous, and that Eliezer’s secretive approach might be a good idea after all. So I’m surprised to see SIAI seeming to turn away from that approach, and would like to know whether that is actually the case and, if so, what the reason for it is. With that in mind, perhaps you could consider giving more informative answers to my questions?
Sorry for my brevity; I just don’t have more substantive answers right now. These are discussions that need to be ongoing. I’m not aware of any strategy change.