people in AI Safety … especially don’t seem to be … offering profitable alternatives
The problem is that AI is a general-purpose tool, so there is no single profitable alternative to point to. It’s far simpler to just ban AI entirely.
One could imagine an approach similar to the UN Security Council’s handling of nuclear weapons. The nonproliferation treaty says that only the permanent members of the UNSC (the big five victors of World War 2) are allowed to have nuclear weapons, but they promise to help other states with civilian uses of nuclear energy. An “AI nonproliferation treaty” could likewise say that only the permanent members are allowed to have this technology, and that they need to nationalize it or otherwise keep it under tight control, but will make it available to other states in limited form.
I don’t think any of that will happen, either. Not unless something happens that deeply frightens the great powers. Just as AI is irresistibly tempting for those who seek profit, it is also irresistibly tempting for those who seek power.
So I am definitely in the camp of those who are trying to improve the odds that AI will be benevolent, rather than fighting to stop it from happening at all. I actually would like to see the “stop AI, or at least slow it down” body of opinion become better organized. But I think it would need to be based outside the AI research and AI safety communities.
The trouble I see with banning AI, as opposed to banning nuclear weapons, is that it’s a lot harder to catch and detect the people who are making AI. Banning AI is more like banning drugs or gambling: it could be done, but the effectiveness really varies. Creating a narrative against using it because it’s bad for your health, associating it with addicts, making it clear how it’s not profitable even if it seems that way on the surface, controlling the components used to make it, and so on, all seem much more effective.
I agree that AI is very tempting for those who seek profit, but I don’t agree with the irresistibility. I think a sufficiently tech-savvy businessman who’s looking for long-term profits, on the scale of decades rather than years, can see how unprofitable AI will be.
Something that is not fully understood and gets harder and harder to understand, that discourages people who wanted to study to become experts yet needs those experts to verify its results, and that is very energy- and computation-intensive on top of that, is not sustainable. And that’s not even considering that at some point it may have a will of its own, which is unlikely to be in line with yours.
Now, many businessmen focused on the short term will certainly be attracted to it, and perhaps some long-term ones will also think they can ride the wave for a bit and then cash out. Or some will be powerful enough to think they’ll be the ones left holding the AI that essentially becomes the economy.
Take this with a lot of salt, please; I’m very ignorant about a lot of this.
With what I know, even in the scenarios where we have well-aligned AGI (which seems very unlikely), it’s much more likely to be used to further cement the power of authoritarian governments or corporations than to help people. Any help will likely be a side effect, or a necessary step for said government or corporation to get more power.
If we say that empowering people, helping people be able to help themselves, helping people feel fulfilled and happy, etc., is a goal, it seems to me that we must focus on tech and laws that move us away from things like AI, and more towards fixing tax evasion, making solar panels more efficient and cheaper, urban planning that allows walkable cities, reducing the need for the Internet, etc.
Being a bit less ignorant now, I disagree with a lot of “I agree that AI is very tempting for those who seek profit, but I don’t agree with the irresistibility. I think a sufficiently tech-savvy businessman who’s looking for long-term profits, on the scale of decades rather than years, can see how unprofitable AI will be.”
One of the biggest things I think we can immediately do is not consume online entertainment. Have more in-person play and fun, and encourage it of others too. The more this is done, the less data is available for training AI.
I disagree with this now.