If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent that. There is already an AGI research community, and they're not especially safety-oriented, and there's an AI research community that isn't taking the risk seriously.
They could be dangerously deluded, for example, even if their aim is right. Currently, I don't believe they are, but I gave an example of how you could come to the conclusion that SIAI has negative expected value.
SIAI has a higher risk of producing uFAI than your average charity.
Maybe FAI is impossible, humanity's only hope is to avoid the emergence of any super-human AIs, fooming is difficult and slow enough for that to be a somewhat realistic prospect, and an almost-friendly AI is a lot more dangerous because it is less likely to be destroyed in time?
Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but building it might be more difficult than it would be for humanity to grow up by itself.)
Could you rephrase that? I have no idea what you are saying here.
FAI is a device for producing good outcomes. Humanity itself is such a device, to some extent. FAI, as an AI, is an attempt to make that process more efficient: to understand the nature of good and to design a process for producing more of it. If it is in practice impossible to develop such a device that is significantly more efficient than humanity, then we just let the future play out, guarding it against known failure modes, such as AGI with arbitrary goals.
Thank you; now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.
Maybe God will strike us down just for thinking about building a Friendly AI.
When you argue that the expected utility of action X is negative, you won't make much headway by proposing an unlikely and gerrymandered set of circumstances such that, conditional on them being true, the conditional expectation is negative.
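A minimal sketch of that point, with made-up numbers (nothing here is from the thread itself): by the law of total expectation, a scenario contributes to the overall expected utility only in proportion to its probability, so a negative conditional expectation under an unlikely scenario need not make the unconditional expectation negative.

```python
# Illustrative only: the probabilities and utilities below are invented for this example.
# Law of total expectation: E[U] = P(S) * E[U | S] + P(not S) * E[U | not S].

p_scenario = 0.01          # probability of the unlikely, gerrymandered scenario S
u_given_scenario = -100.0  # expected utility of action X conditional on S
u_otherwise = 5.0          # expected utility of action X conditional on not-S

expected_utility = p_scenario * u_given_scenario + (1 - p_scenario) * u_otherwise
print(expected_utility)    # 3.95 -- still positive despite the very bad conditional case
```

Unless the conditional loss is claimed to be astronomically large, the unlikely scenario itself has to be argued as reasonably probable before it can flip the sign of the overall expectation.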
Now what kind of civilized rational conversation is that?