I agree, this is the obvious solution… as long as you put your fingers in your ears and shout “I can’t hear you, I can’t hear you” whenever the topic of misuse risks comes up...
Otherwise, there are some quite thorny problems. Maybe you’re ultimately correct about open source being the path forward, but it’s far from obvious.
I’m actually warming to the idea. You’re right that it doesn’t solve all problems. But if our choice is between the open-source path where many people can use (and train) models locally, and the closed-source path where only big actors get to do that, then let’s compare them.
One risk everyone is thinking about is that AI will be used to attack people and take away their property. Since big actors aren’t moral toward weak people, this risk is worse in the closed-source path. (My go-to example, as always, is the enclosures in England, where the elite happily impoverished their own population to get a little bit richer themselves.) The open-source path might help people keep at least a measure of power against the big actors, so on this dimension it wins.
The other risk is someone making a “basement AI” that will defeat the big actors and burn the world. But to me this doesn’t seem plausible. Big actors already have every advantage; why wouldn’t they be able to defend themselves? So on this dimension the open-source path doesn’t seem too bad.
Of course both paths are very dangerous, for reasons we know very well. AI could make things a lot worse for everyone, period. So you could say we should compare against a third path where everyone pauses AI development. But the world isn’t taking that path! We already know that. So maybe our real choice now is between the first two paths. At least that’s how things look to me now.
I’m worried that the offense-defense balance leans strongly towards the attacker. What are your thoughts here?
(Edited to make much shorter)
If the offense-defense balance leans strongly toward the attacker, that makes it even easier for big actors to attack and dispossess the weak, whose economic and military usefulness (the two pillars that held up democracy till now) will be gone due to AI. So it becomes even more important that the weak have AI of their own.
The powers that be have literal armies of human hackers pointed at the rest of us. Letting them use AI to turn server farms of GPUs into even larger armies isn’t destabilizing to the status quo.
I do not have the ability to reverse engineer every piece of software and weird-looking memory page on my computer, and am therefore vulnerable. It would be cool if I could have a GPU with a magic robot reverse engineer on it giving me reports on my own stuff.
That would actually change the balance of power in favor of the typical individual, and is exactly the sort of capability that the ‘safety community’ is preventing.
If you believe overall ‘misuse risk’ increases linearly with the number of people who have access, I guess that argument would hold.
The argument assumes that someone who is already wealthy and powerful can’t do any more harm with an uncensored AI that answers to them alone than any random person could.
It further assumes that someone wealthy and powerful is invested in the status quo, and will therefore have less reason to misuse it than someone without wealth or power.
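To spell out why I doubt the linear view, here is a toy model (my own framing; $p_i$ and $h_i$ are just placeholder symbols): overall misuse risk is roughly the sum over actors of how likely each actor is to misuse the AI times how much harm they could do if they did,

$$\text{Risk} \approx \sum_i p_i \, h_i .$$

The “linear in number of people” view implicitly treats every $h_i$ as about the same. But if $h_i$ for a state or a large corporation is orders of magnitude larger than for a random individual, and their $p_i$ isn’t correspondingly smaller, then concentrating access in their hands doesn’t obviously make the sum any smaller.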
I think that software solely in the hands of the powerful is far more dangerous than open-sourcing it. I’m hopeful that Chinese teams with reasonable, people-centric morals, like DeepSeek, will win tech races.
Westerners love their serfdom too much for us to expect them to make any demands at all of their oligarchs.