This community is intensely hostile to the obvious solution: open source uncensored models as fast as you build them, and make GPUs to run them as cheap as possible.
I agree, this is the obvious solution… as long as you put your hands over your ears and shout “I can’t hear you, I can’t hear you” whenever the topic of misuse risks comes up…
Otherwise, there are some quite thorny problems. Maybe you’re ultimately correct about open source being the path forward, but it’s far from obvious.
I’m actually warming to the idea. You’re right that it doesn’t solve all problems. But if our choice is between (1) the open-source path, where many people can use (and train) models locally, and (2) the closed-source path, where only big actors get to do that, then let’s compare them.
One risk everyone is thinking about is that AI will be used to attack people and take away their property. Since big actors aren’t moral toward the weak, this risk is worse on the closed-source path. (My go-to example, as always, is the enclosures in England, where the elite happily impoverished their own population to make themselves a little richer.) The open-source path might help people keep at least a measure of power against the big actors, so on this dimension it wins.
The other risk is someone making a “basement AI” that will defeat the big actors and burn the world. But to me this doesn’t seem plausible. Big actors already have every advantage; why wouldn’t they be able to defend themselves? So on this dimension the open-source path doesn’t seem too bad.
Of course both paths are very dangerous, for reasons we know very well. AI could make things a lot worse for everyone, period. So you could say we should compare against a third path where everyone pauses AI development. But the world isn’t taking that path! We already know that. So maybe our real choice now is between (1) and (2). At least that’s how things look to me now.
I’m worried that the offense-defense balance leans strongly towards the attacker. What are your thoughts here?
(Edited to make much shorter)
If offense-defense balance leans strongly to the attacker, that makes it even easier for big actors to attack & dispossess the weak, whose economic and military usefulness (the two pillars that held up democracy till now) will be gone due to AI. So it becomes even more important that the weak have AI of their own.
The powers that be have literal armies of human hackers pointed at the rest of us. Letting them use AI to turn server farms of GPUs into even larger armies isn’t destabilizing to the status quo.
I do not have the ability to reverse engineer every piece of software and weird-looking memory page on my computer, and am therefore vulnerable. It would be cool if I could have a GPU with a magic robot reverse engineer on it giving me reports on my own stuff.
That would actually change the balance of power in favor of the typical individual, and is exactly the sort of capability that the ‘safety community’ is preventing.
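To make that wish concrete, here is a minimal sketch of the kind of report I have in mind, assuming a locally hosted open-weight model behind an OpenAI-compatible endpoint. The server address and model name are placeholders, and dumping a binary’s printable strings is only a crude stand-in for real reverse engineering:

```python
# Minimal sketch, not a finished tool: ask a locally hosted open-weight
# model what an unfamiliar binary on my own machine appears to do, based
# on its printable strings. Assumes an OpenAI-compatible server (e.g.
# llama.cpp or vLLM) is listening on localhost:8000; "local-model" is a
# placeholder for whatever model is actually loaded.
import subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def report_on_binary(path: str) -> str:
    # Printable strings are a cheap proxy for real reverse engineering.
    strings = subprocess.run(
        ["strings", "-n", "8", path], capture_output=True, text=True
    ).stdout[:20000]  # keep the prompt a manageable size
    reply = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system",
             "content": "You analyse software artifacts for their owner. "
                        "Flag anything that looks like networking, "
                        "persistence, or data exfiltration."},
            {"role": "user",
             "content": f"Printable strings from {path}:\n{strings}"},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(report_on_binary("/usr/bin/ssh"))
```

The point of the design is that everything runs on hardware the owner controls; nothing about the machine being inspected has to leave it.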
If you believe overall ‘misuse risk’ increases in a linear way with the number of people who have access, I guess that argument would hold.
The argument assumes that someone who is already wealthy and powerful can’t do any more harm with an uncensored AI that answers to them alone than any random person could.
It further assumes that someone wealthy and powerful is invested in the status quo, and will therefore have less reason to misuse it than someone without wealth or power.
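One rough way to put numbers on those two assumptions (my framing, not anything written above): give each actor $i$ a probability $p_i$ of attempting misuse and an expected harm $h_i$ if they do.

$$R_{\text{linear}} \approx \sum_{i=1}^{N} p_i h_i \approx N\,\bar{p}\,\bar{h}, \qquad R_{\text{concentrated}} \approx \max_i\, p_i h_i.$$

The “linear” picture says total risk grows with the headcount $N$ of people who have access; the objection is that a single wealthy, well-resourced actor can have an $h_i$ so much larger than everyone else’s that the max term dominates, and widening access barely moves the total.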
I think that keeping this software solely in the hands of the powerful is far more dangerous than open sourcing it. I’m hopeful that Chinese teams with reasonable, people-centric morals, like DeepSeek, will win tech races.
Westerners love their serfdom too much to be expected to make any demands at all of their oligarchs.
My current hunch is that this would in fact be the obvious solution if you solved strong alignment. If you figure out how to solve strong alignment, the kind where starkly superintelligent AIs are in fact trying to do good, then you do want them to be available to everyone. My disagree vote is because I think it doesn’t matter who runs a model or what prompt it’s given, if it’s starkly superintelligent and even a little bit not doing what you actually meant. Shove enough oomph through an approximator and the flaws in the approximation are all that’s noticeable.
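A toy version of the “flaws in the approximation” point, as a sketch of my own rather than anything the commenter wrote: score options with a proxy equal to the true value plus a small error, pick the proxy-best option, and watch how much the proxy overstates the winner’s true value as the search gets wider.

```python
# Toy illustration (my own example): the harder you select on a slightly
# wrong proxy, the more of what you "win" is the proxy's error rather
# than real value.
import random

random.seed(0)

def winners_overestimate(n_options: int, noise_sd: float = 0.3) -> float:
    true_vals = [random.gauss(0.0, 1.0) for _ in range(n_options)]
    proxy_vals = [t + random.gauss(0.0, noise_sd) for t in true_vals]
    best = max(range(n_options), key=proxy_vals.__getitem__)
    return proxy_vals[best] - true_vals[best]

for n in (10, 1_000, 100_000):
    trials = [winners_overestimate(n) for _ in range(100)]
    print(f"{n:>7} options searched -> proxy overshoots truth "
          f"by {sum(trials) / len(trials):.2f} on average")
```

This only shows the statistical skeleton of the claim; the worry above is the same shape with vastly more optimization pressure behind it.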
The problem is that this helps solve the democratization issue (only partially, since it still vastly favours technically literate first-worlders), while simultaneously making the proliferation issue infinitely worse.
There is really no way out of this other than “just stop building this shit”. Everyone likes to point out the glaring flaws in their ideological opponents’ plans but that only keeps happening because both sides’ plans are hugely flawed.
This is not an obvious solution, since (as you are probably aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree with this being an issue, but it would at least need to be argued.
A “feudal” system is at least as disempowering for nearly all humans, and would probably be felt as far more disempowering. I really don’t care at all how empowered Sam Altman is.
I’d say that the “open source uncensored models” path carries a greater danger of rapid human extinction, endless torture, and the like… except that I give very, very little credence to the idea that any of the “safety” or alignment directions anybody’s been pursuing will do anything to prevent those. I guess I hope it might carry a greater danger.
That post has already gotten a disagree, and I really, really wanna know which paragraph it’s meant to apply to, or if it’s meant to apply to both of them.
Sure. Here’s the argument. Concern about future hypothetical harms (described with magic insider jargon words like disempowerment, fast takeoff, ASI, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present-day harms.
Here’s the stupid Hegelian dialectic that this community has promoted:
Thesis: AI could kill us all!!!!1111
Antithesis: Drop bombs on datacenters, we have to stop now.
Synthesis: Let’s just trust wealthy and powerful people to build AI responsibly. Let’s make sure they work in secret, so nobody else does something irresponsible.
This doesn’t seem to actually respond to said concern in any way, though.
Like… do you think that concerns about “everyone dies” are… not plausible? That such outcomes just can’t happen? If you do think that, then that’s the argument, and whether something is being “used as an excuse” is completely irrelevant. If you don’t think that, then… what?
Those concerns are not plausible for the tools that exist today. Maybe they’re plausible for things that will be released tomorrow.
The ‘anti-TESCREAL’ community is pretty united in the thesis that ‘AI safety’ people concerned about the words I’m quoting are pulling air away from their ‘mundane’ concerns about tech that is actually in use today.