[big quote, paragraph about SIAI] With this in mind [...]
Why? Why tell him all that, and then ask him to keep it in mind? For priming purposes? I’d prefer not to get primed estimates. For informative purposes? I don’t see what’s there that he wouldn’t already know and that would affect his probability estimates independently of priming. The big quote and following paragraph could be condensed into a sentence or two without losing much, and gaining clarity, particularly clarity of purpose.
I agree that you should drop question 1. It’s vague because of undefined values of “risks” and “seriously,” and leading because people don’t want a stranger to think they’re “not taking risks seriously.”
You should also expand question 2. What is the current level of awareness relative to the ideal level? What sort of exposure to the idea of risks do people get? Basically, I think there are similar questions with more interesting answers.
Questions 4, 5, 6, and 7 feel out of causal order. Maybe ask some questions like 6 or 7 before 4 and 5, e.g. “How likely is it that a self-modifying AGI will, sometime in the future, increase its intelligence from sub-human to superhuman?” and “What would you estimate the timescale to be?”
I’m glad someone is doing this. Upvoted.
Kudos for the LessWrong reference :P