Here’s my super-short pitch. I think it could be delivered between the first and fourth floors on the elevator :-)
“Within a few decades, engineers will probably create an Artificial Intelligence at a roughly human level of ability. When that happens, it will want to improve itself as much as it can, since that will help it achieve its goals. It will self-improve to far-above-human levels. When it is that smart, it will almost certainly achieve its goals, and so we had better make sure, before we build it, that it has goals that are good for humans.”
Of course, there are some big inferential gaps, but usually you don’t have much time for your initial pitch. I think it really does summarize the point, and a few people have “got it” after hearing a short pitch, at least to the point where details can be further explained.
I think this sounds way too much like a technical argument; I expect people would either challenge you if they feel they know enough or tune you out if they feel they don’t.
My initial impression is that starting with something a little less formal-sounding would be better, but actually I’d love to see a half dozen pitches and have people collect at least anecdotal evidence about which are effective.
Yes, it is too technical-sounding for most people.
But it is intended for very smart and technical/scientific people. That’s the only audience that has a chance of getting it, anyway—unless rationality training does the trick, that is :-)
It is meant as an intro, perhaps after the person has heard a bit about the topic, but you want to give them a clear summary to grasp the concept. Of course, it is not enough and is usually followed by more detail.
If I’m going to reduce that far, I’d probably go one level further and drop the reference to human/superhuman level AI altogether… for example: “We’re building systems today that automatically implement their own goals. Often they are so complex, or operate so quickly, that no human can monitor them effectively. Over time those systems will get more complex and faster and even harder for humans to monitor. Therefore, if we want to ensure that their output is good for us, we need to ensure that their goals are good for us once implemented.”
Of course, this completely loses the upside half of SI’s argument, where superhuman FAIs create a utopian post-scarcity death-free ultra-awesome environment. This might be an advantage for an elevator pitch.