Shulman has (and cites) some really good ideas, but the question remains: is anything being done about this? Is anyone actually working on developing a protocol for international regulation of artificial intelligence, or on selling the idea of Brin’s Transparent Society to decision-makers, or on programming a shield AI?
I think many in the field see intelligence as positive—and think that intelligent machines will reduce automobile accidents and liberate humans from much tedious drudgery.
Since things seem, at the moment, to be proceeding at a manageable pace, there is little demand for elaborate braking systems.
Also, those who feel the most need for slowing down are probably those least well placed to influence the rate of progress.
Attempts to prevent a “race to the bottom” are likely to prove ineffective—and seem largely misguided. There is bound to be such a race—so we should expend our resources where there’s a chance of making a difference.
Is anyone actually working on developing a protocol for international regulation of artificial intelligence,
At this stage I would avoid doing that. The more you try to convince those with power that they should make rules against AI, the more they will think that creating an AI is something they need to do!
I’m sure they do. The important thing is what additional promotion can achieve. Will it make a leader think he needs to cooperate with the enemy and make treaties? Or will it make him more inclined to add a trillion or two to the NSA budget?
Great! Thanks for the link.
I am pretty sure that the government knows the score—or at least that the NSA do.
Yes indeed, the government seems to explore all promising technologies, and even some that are not so promising.