That’s wholly irrelevant. The important question is which can be constructed faster: a provably-safe-by-design friendly AGI, or a fail-safe not-proven-friendly tool AI? Lives hang in the balance: roughly 100,000 a day.
(There’s an aside about whether an all-powerful “friendly” AI outcome is even desirable—I don’t think it is. But that’s a separate issue.)
which can be constructed faster: a provably-safe-by-design friendly AGI, or a fail-safe not-proven-friendly tool AI?
Mark, I get that it’s terrible that people are dying. As noted in another thread, I support SENS. But there’s a disaster-response maxim, “Don’t just do something, stand there!”, which argues that taking the time to make sure you’re doing the right thing is worth it, especially in emergencies, when there is pressure to act too soon. Mistakes made in a hurry aren’t any less damaging because the hurry was for a good reason.
I don’t think anyone believes it’s both possible and desirable to slow down general tech development, and most tool AI is just software development. If I write software that helps engineers run their tools more effectively, or a colleague writes software that helps doctors target radiation at tumors more precisely, or another colleague writes software that helps planners decide which reservoirs to drain for electrical power, none of that makes a huge change to the trajectory of the future; each is just a small step toward more embedded intelligence and richer, longer lives.
So you don’t think the invention of AI is inevitable? If it is, shouldn’t we pool our resources to find a formula for friendliness before that happens?
How could you possibly prevent it from occurring? If you stop official AI research, that will just mean gangsters will find it first.
I mean, computing hardware isn’t that expensive, and we’re just talking about stumbling across patterns in logic here. (If an AI cannot be created that way, then we are safe regardless.) If you prevent AI research, maybe the formula won’t be discovered within 50 years, but are you okay with an unfriendly AI within 500?
(In any case, I think your definition of friendliness is too narrow. You may disagree with EY’s definition of friendliness, but you have one of your own: preventing people from creating an unfriendly AI will take nothing less than intervention from a friendly AI.)
(I should mention that, unlike many LWers, I don’t want you to feel at all pressured to help EY build his friendly AI. My disagreement with you is purely intellectual. Indeed, if your friendliness values differ from his, shouldn’t you set up a rival organization to MIRI? Just saying.)