A thing I have found increasingly distressing about the rationalist/EA community is the extent to which most of us willfully ignore the obvious condition of most (importantly, not all!) humans in a post-strong-AGI world where “alignment” is in fact achieved.
The default outcome of where we think we are going is to turn (almost) everyone into serfs, completely incapable of improving their position through their own efforts, and dependent on the whims of the few who own the strong AI systems. Such a state of affairs would plainly be evil, regardless of how “benevolent” the people in charge are. Sufficient inequality of power is a harm—a severe harm, even—absent any considerations over how the power is used. You can see it is a harm by how it terrifies people like your friend—who sounds at least reasonably morally sensitive—into pursuing employment at Anthropic for the sake of avoiding serfhood. I don’t fault her, really, except to fault her for not being a saint. I do fault the people, systems, and culture that created this dichotomy.
I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain. OpenAI and Anthropic should both be public companies so ordinary people can own a stake in the future they are building and not be dependent on charity forever if that future comes. They choose not to do this.
I think a lot of people in the community are so econ-101 pilled that they are incapable of conceptualizing how miserable and dehumanizing the boot of “abundance” techno-feudalism could be.
“You will be nothing and you will be happy.”
EDIT: To be clear, I do not think public ownership of AI labs is sufficient to make the existence of AI labs—or the power concentration they will engender—a good or moral idea. I just think that publicly owned AI labs are less evil than privately owned AI labs because they concentrate power less.
I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain. OpenAI and Anthropic should both be public companies so ordinary people can own a stake in the future they are building and not be dependent on charity forever if that future comes. They choose not to do this.
Not that that would solve much. Maybe it gives some US citizens a chance to own a tiny amount of OpenAI stock? What chance does anyone from a third-world country have, for example? Generally speaking, the trajectory toward “someone will rule the world as its AI master, so it might as well be us” leads to nothing but cyberpunk dystopias at best.
I think that public ownership is helpful but insufficient to make building strong AGI ethical. Still, at the margin, I expect better outcomes with more decentralized power and ownership. As you disperse power, it is more likely to be wielded in ways representative of broader human values—but I still prefer not building it at all.
I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain
Maybe the problem is with the idea that something like that should have owners to begin with? In the “standard discussion model” we tend to use for these things, you’re talking about eternal control of the entire future. Giving that to a few thousand, or a few hundred thousand, people who happened to be stockholders at some critical time isn’t all that much better than giving it to a handful.
I don’t buy the idea that being the ones who built or funded a machine that took over the world should give you the right to run the world forever… not even if it took over through “non-force” means.
OpenAI seemed to be kind of going in the right direction at the beginning: “We’ll let you share in mundane profits, but if this thing FOOMs and remakes the world, then all bets are off. We are doing this for All Mankind(TM)”.
I think even they, like most people on Less Wrong, probably would have been unwilling to take what I think is the correct step after that: humans in general shouldn’t control such a thing, beyond setting its initial goals. But at least it seemed as though they were willing to explore the idea that a concept of ownership based on human effort becomes ridiculous in an economy that doesn’t run on human effort.
Maybe the problem is with the idea that something like that should have owners to begin with?
I would argue the problem is it being created at all. Suppose a new group called SocialAI builds an AGI that it intends to make entirely autonomous and independent once bootstrapped. The AGI then FOOMs and is aligned. This is a vastly better future than many other possibilities, but does that make it ethically OK to create an intelligence, imbue it with your values, your choices, and your ideas, and then send it off to rule the world in a way that will make those values and choices and ideas live forever, more important than anything else?
It’s like going back in time to write the Bible, if the Bible was also actively able to go and force people to abide by its tenets.
Strongly agree that no human is fit to own an AI which has “eternal control of the future”. If there is going to be ownership, though, better for it to be a broader group of people (which would represent a greater plurality of values, if nothing else).
I also agree that in an economy which does not run on human effort, no one should own anything. But it seems hard to make that a reality, particularly in a way which applies to the most powerful people.
This community is intensely hostile to the obvious solution: open source uncensored models as fast as you build them, and make GPUs to run them as cheap as possible.
I agree, this is the obvious solution… as long as you put your hands over your ears and shout “I can’t hear you, I can’t hear you” whenever the topic of misuse risks comes up...
Otherwise, there are some quite thorny problems. Maybe you’re ultimately correct about open source being the path forward, but it’s far from obvious.
I’m actually warming to the idea. You’re right that it doesn’t solve all problems. But if our choice is between the open-source path where many people can use (and train) models locally, and the closed-source path where only big actors get to do that, then let’s compare them.
One risk everyone is thinking about is that AI will be used to attack people and take away their property. Since big actors aren’t moral toward weak people, this risk is worse in the closed-source path. (My go-to example, as always, is enclosures in England, where the elite happily impoverished their own population to get a little bit richer themselves.) The open-source path might help people keep at least a measure of power against the big actors, so on this dimension it wins.
The other risk is someone making a “basement AI” that will defeat the big actors and burn the world. But to me this doesn’t seem plausible. Big actors already have every advantage, why wouldn’t they be able to defend themselves? So on this dimension the open source path doesn’t seem too bad.
Of course both paths are very dangerous, for reasons we know very well. AI could make things a lot worse for everyone, period. So you could say we should compare against a third path where everyone pauses AI development. But the world isn’t taking that path! We already know that. So maybe our real choice now is between the open-source and closed-source paths. At least that’s how things look to me now.
If offense-defense balance leans strongly to the attacker, that makes it even easier for big actors to attack & dispossess the weak, whose economic and military usefulness (the two pillars that held up democracy till now) will be gone due to AI. So it becomes even more important that the weak have AI of their own.
The powers that be have literal armies of human hackers pointed at the rest of us. Letting them use AI to turn server farms of GPUs into even larger armies wouldn’t destabilize the status quo.
I do not have the ability to reverse engineer every piece of software and weird-looking memory page on my computer, and am therefore vulnerable. It would be cool if I could have a GPU with a magic robot reverse engineer on it giving me reports on my own stuff.
That would actually change the balance of power in favor of the typical individual, and is exactly the sort of capability that the ‘safety community’ is preventing.
If you believe overall ‘misuse risk’ increases in a linear way with the number of people who have access, I guess that argument would hold.
The argument assumes that someone who is already wealthy and powerful can’t do any more harm with an uncensored AI that answers to them alone than any random person.
It further assumes that someone wealthy and powerful is invested in the status quo, and will therefore have less reason to misuse than someone without wealth or power.
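The “linear in the number of people with access” assumption can be made concrete with a deliberately crude toy model. Every number and the harm function below are invented purely for illustration, not empirical claims: the point is only that if the harm an actor can do scales with the power they already hold, the few powerful actors dominate the total, and granting access to many weak actors changes it only marginally.

```python
# Toy model (all numbers invented): how total misuse risk depends on who you
# assume can do harm, rather than on raw headcount.

def total_risk(power_levels, harm_per_unit_power=1e-6):
    """Expected harm summed over actors, assuming each actor's potential
    harm scales with the power they already hold (the contested assumption)."""
    return sum(p * harm_per_unit_power for p in power_levels)

# A few very powerful actors vs. many ordinary people (made-up power units).
powerful = [1e7] * 5          # e.g. closed labs, states
ordinary = [1.0] * 1_000_000  # e.g. individuals running local models

closed_world = total_risk(powerful)           # access concentrated at the top
open_world = total_risk(powerful + ordinary)  # everyone has access

# With these made-up numbers, broad access raises total risk by only ~2%,
# because the powerful actors account for almost all of it either way.
increase = open_world / closed_world - 1
```

Flip the assumed power distribution (make the weak actors numerous enough, or the harm function headcount-linear) and the conclusion flips with it, which is exactly why the two sides of this thread keep talking past each other.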
I think that software solely in the hands of the powerful is far more dangerous than open sourcing it. I’m hopeful that Chinese teams with reasonable, people-centric morals, like DeepSeek, will win tech races.
Westerners love their serfdom too much to expect them to make any demands at all of their oligarchs.
My current hunch is that this would in fact be the obvious solution if you solved strong alignment. If you figure out how to solve strong alignment, the kind where starkly superintelligent AIs are in fact trying to do good, then you do want them to be available to everyone. My disagree vote is because I think it doesn’t matter who runs a model or what prompt it’s given, if it’s starkly superintelligent and even a little bit not doing what you actually meant. Shove enough oomph through an approximator and the flaws in the approximation are all that’s noticeable.
The problem is that this helps solve the democratization issue (only partially so, it still vastly favours technically literate first worlders), while simultaneously making the proliferation issue infinitely worse.
There is really no way out of this other than “just stop building this shit”. Everyone likes to point out the glaring flaws in their ideological opponents’ plans but that only keeps happening because both sides’ plans are hugely flawed.
This is not an obvious solution, since (as you probably are aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree with this being an issue, but it would at least need to be argued.
A “feudal” system is at least as disempowering for nearly all humans, and would probably be felt as far more disempowering. I really don’t care at all how empowered Sam Altman is.
I’d say that the “open source uncensored models” path has a greater danger of rapid human extinction, endless torture, and the like… except that I give very, very little credence to the idea that any of the “safety” or alignment directions anybody’s been pursuing will do anything to prevent those. So at most, I’d guess it might pose a greater danger.
That post has already gotten a disagree, and I really, really wanna know which paragraph it’s meant to apply to, or if it’s meant to apply to both of them.
Sure. Here’s the argument. Concern about future hypothetical harms (described with magic insider jargon words like disempowerment, fast takeoff, asi, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present day harms.
Here’s the stupid Hegelian dialectic that this community has promoted:
Thesis: AI could kill us all!!!!1111
Antithesis: Drop bombs on datacenters, we have to stop now.
Synthesis: let’s just trust wealthy and powerful people to build AI responsibly. Let’s make sure they work in secret, so nobody else does something irresponsible.
Sure. Here’s the argument. Concern about future hypothetical harms (described with magic insider jargon words like disempowerment, fast takeoff, asi, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present day harms.
This doesn’t seem to actually respond to said concern in any way, though.
Like… do you think that concerns about “everyone dies” are… not plausible? That such outcomes just can’t happen? If you do think that, then that’s the argument, and whether something is being “used as an excuse” is completely irrelevant. If you don’t think that, then… what?
Those concerns are not plausible for the tools that exist today. Maybe they’re plausible for things that will be released tomorrow.
The ‘anti-TESCREAL’ community is pretty united in the thesis that ‘AI safety’ people concerned about the words I’m quoting are pulling air away from their ‘mundane’ concerns about tech that is actually in use today.
I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain.
The only way I could see doing this that would make sense is an IPO.
If you try to ‘decentralize ownership’ through charity, you’re just making your lab uninvestable. Like it or not, you are in a highly competitive world where, if your competitor can out-fundraise you by 5X, that’s probably just it. Then what has your moral stance achieved?
Race dynamics suck, but the moral thing to do is not to sabotage your own chance at winning; it’s to complain loudly and push for change while continuing to race with maximum efficacy.
I also agree that in an economy which does not run on human effort, no one should own anything. But it seems hard to make that a reality, particularly in a way which applies to the most powerful people.
Disempower ’em?
Yes, I am very obviously talking about an IPO, instead of just taking endless Middle Eastern oligarch money.