For a great-if-imprecise response to #4, you can just read aloud the single-page story at the beginning of Bostrom’s book ‘Superintelligence’. For a more precise response, you can make explicit the analogy.

And if they come back with a snake egg instead? Surely we need to have some idea of the nature of AI, and thus of how exactly it is dangerous.

Can you summarize what you mean or link to the excerpt?

And more precisely: Imagine if Roentgen had tried to come up with safety protocols for nuclear energy. He would simply have been far too early to possibly do so. Similarly, we are far too early in the development of AI to meaningfully make it safer, and MIRI’s program as it exists doesn’t convince me otherwise.

From the Wikipedia article on Roentgen:

“It is not believed his carcinoma was a result of his work with ionizing radiation because of the brief time he spent on those investigations, and because he was one of the few pioneers in the field who used protective lead shields routinely.”

Sounds like he was doing something right.

My apologies for not being clear on two counts. Here is the relevant passage. And the analogy referred to in my previous comment was the one between Bostrom’s story and AI.