I think differential technological development (prioritising some areas over others) is the current approach. It achieves the same result but has a higher chance of working.
Thanks for your response, and not to be argumentative, but an honest question: doesn’t that mean you want some forms of AI research to slow down, at least in relative terms?
I personally don’t see anything wrong with this stance, but you seem to be suggesting that this trade-off doesn’t exist, and that’s not at all what I took from reading Bostrom’s Superintelligence.
The trade-off exists. Some ways of resolving it are better than others, and some ways of phrasing it are better than others.
An important distinction jumps out at me: if we slowed down all technological progress equally, that wouldn’t actually “buy time” for anything in particular. I can’t think of anything we’d want to be doing with that time besides:
1. Researching other technologies that might help with avoiding AI risk. The one that comes to mind is technology that would allow downloading or simulating a human mind before we build an AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building an AI from scratch.
2. Thinking about AI value systems.
Option 2 is presumably why anyone would suggest slowing down AI research, but a notable obstacle to it at present is that large numbers of people aren’t concerned about AI risk because it seems so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.
I’m not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it’s probably not, but only because we’re forced to live in a third-best world:
First best: Do AI research until just before we’re ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.
Second best: Friendliness looks a lot harder than AGI, and we can’t expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.
Third best: Don’t try to stop or slow down AI research because we don’t know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.
There are some people who honestly think Friendliness researchers at MIRI and elsewhere actually discourage AI research. This sounds ridiculous to me; I’ve never seen such an attitude from Friendliness researchers, nor can I even imagine it.
Why is this so ridiculous as to be unimaginable? Isn’t the second-best world above actually better than the third-best, if only it were feasible?
I can only speak about those I’ve interacted with, and I haven’t seen blocking AI research discussed as a viable option.