So you don’t think the invention of AI is inevitable? If it is, shouldn’t we pool our resources to find a formula for friendliness before that happens?
How could you possibly prevent it from occurring? If you stop official AI research, that will just mean gangsters will find it first.
I mean, computing hardware isn’t that expensive, and we’re just talking about stumbling across patterns in logic here. (If an AI cannot be created that way, then we are safe regardless.) If you prevent AI research, maybe the formula won’t be discovered in 50 years, but are you okay with an unfriendly AI within 500?
(In any case, I think your definition of friendliness is too narrow. You may disagree with EY’s definition of friendliness, but you still hold one of your own, and by it: preventing people from creating an unfriendly AI will take nothing less than intervention from a friendly AI.)
(I should mention that unlike many LWers, I don’t want you to feel at all pressured to help EY build his friendly AI. My disagreement with you is purely intellectual. For instance, if your friendliness values differ from his, shouldn’t you set up your own rival organization in opposition to MIRI? Just saying.)