If you believe overall ‘misuse risk’ increases linearly with the number of people who have access, I guess that argument would hold.
The argument assumes that someone who is already wealthy and powerful can’t do any more harm with an uncensored AI that answers to them alone than any random person could.
It further assumes that someone wealthy and powerful is invested in the status quo, and will therefore have less reason to misuse the AI than someone without wealth or power.
I think that software solely in the hands of the powerful is far more dangerous than open sourcing it. I’m hopeful that Chinese teams like Deepseek, with reasonable, people-centric morals, will win tech races.
Westerners love their serfdom too much to be expected to make any demands at all of their oligarchs.