My sense is that most of the people with lots of power are not taking heroic responsibility for the world. I think that Amodei and Altman intend to achieve global power and influence but this is not the same as taking global responsibility. I think, especially for Altman, the desire for power comes first relative to responsibility. My (weak) impression is that Hassabis has less will-to-power than the others, and that Musk has historically been much closer to having responsibility be primary.
I don’t really understand this post as doing something other than asking “on the margin are we happy or sad about present large-scale action” and then saying that the background culture should correspondingly praise or punish large-scale action. Which is maybe reasonable, but alternatively may be too high-level a gloss. As per the usual idea of rationality, I think whether you are capable of taking large-scale action in a healthy way is true in some worlds and not in others, and you should try to figure out which world you’re in.
The financial incentives around AI development are blatantly insanity-inducing on this topic, and anyone should’ve been able to guess that going in; I don’t think this was a difficult question. Though I guess someone already exceedingly wealthy (i.e. already having $1B or $10B) could have unusually strong reason not to be concerned about that particular incentive (and I think it is the case that Musk has seemed differently insane than the others taking action in this area, and lacking in some of the insanities).
However, I think most moves around wielding this level of industry should be construed as building an egregore more powerful than you. The founders/CEOs of the AI big-tech companies are not able to simply turn their companies off, nor their industry. If they grow to believe their companies are bad for the world, either they’ll need to spend many years dismantling/redirecting them, or else they’ll simply quit/move on and some other person will take their place. So it’s still default-irresponsible even if you believe you can maintain personal sanity.
Overall I think taking responsibility for things is awesome, and I wish people were doing more of it and trying harder. And I wish people took ultimate responsibility for as big a thing as they can muster. This is not the same as “trying to pull the biggest lever you can” or “reaching for power on a global level”; those are quite different heuristics. Grabbing power can obviously just cost you sanity, and often those pulling the biggest lever they can are doing so foolishly.
As a background model, I think if someone wants to take responsibility for some part of the world going well, by default this does not look like “situating themselves in the center of legible power”. Lonely scientist/inventor James Watt spent his early years fighting poverty before successfully inventing better steam engines, and had far more influence by helping cause the Industrial Revolution than most anyone in government did during his era. I think confusing “moving toward legible power” for “having influence over the world” is one of the easiest kinds of insanity.
> My sense is that most of the people with lots of power are not taking heroic responsibility for the world. I think that Amodei and Altman intend to achieve global power and influence but this is not the same as taking global responsibility. I think, especially for Altman, the desire for power comes first relative to responsibility. My (weak) impression is that Hassabis has less will-to-power than the others, and that Musk has historically been much closer to having responsibility be primary.
Can you expand on this? How can you tell the difference, and does it make much of a difference in the end (e.g., if most people get corrupted by power regardless of initial intentions)?
> As a background model, I think if someone wants to take responsibility for some part of the world going well, by default this does not look like “situating themselves in the center of legible power”.
And yet Eliezer, the writer of “heroic responsibility”, is also the original proponent of “build a Friendly AI to take over the world and make it safe”. If your position is that “heroic responsibility” is itself right, but Eliezer and others just misapplied it, that seems to imply we need some kind of post-mortem on what went wrong with trying to apply the concept, and how future people can avoid making the same mistake. My guess is that, like other human biases, this mistake is hard to avoid even if you point it out to people or try other ways to teach them to avoid it, because the drive for status and power is deep-seated and has a strong evolutionary logic.
(My position is, let’s not spread ideas/approaches that will predictably be “misused”, e.g., as justification for grabbing power, similar to how we shouldn’t develop AI that will predictably be “misused”, even if nominally “aligned” in some sense.)
> Can you expand on this? How can you tell the difference, and does it make much of a difference in the end (e.g., if most people get corrupted by power regardless of initial intentions)?
But I don’t believe most people get corrupted by power regardless of initial intentions? I don’t think Francis Bacon was corrupted by power, I don’t think James Watt was corrupted by power, I don’t think Stanislav Petrov was corrupted by power, and all of these people had far greater influence over the world than most people who are “corrupted by power”.
I’m hearing that you’d be interested in me saying more words about the difference between what it looks like to be motivated by responsibility versus power-seeking. I’ll say some words, and you can see if they help.
I think someone motivated by responsibility will often end up looking more aligned with their earlier self over time even as they grow and change, will often not accept opportunities for a lot of power/prestige/money because those are uninteresting to them, will often make sacrifices of power/prestige for ethical reasons, and will pursue a problem they care about long after most would give up or think it unlikely to be solved.
I think someone primarily seeking power will be much more willing to do things that pollute the commons or break credit-allocation mechanisms to get credit, and generally game a lot of systems that other people are earnestly rising through. They will more readily pivot on what issue they say they care about or are working on, because they’re attached not to the problem but to the reward for solving the problem, and many rewards can be gotten from lots of different problems. They’ll be more guided by what’s fashionable right now, and more attuned to it. They’ll maneuver themselves to be able to work politically with whoever holds the power they want, regardless of the ethics/competence/corruption of those people.
> As a background model, I think if someone wants to take responsibility for some part of the world going well, by default this does not look like “situating themselves in the center of legible power”.
> And yet Eliezer, the writer of “heroic responsibility”, is also the original proponent of “build a Friendly AI to take over the world and make it safe”.
Building an AGI doesn’t seem to me like a very legible mechanism of power, or at least it didn’t in the era Eliezer pursued it (where it wasn’t also credibly “a path to making billions of dollars and getting incredible prestige”). The word ‘legible’ was doing a lot of work in the sentence I wrote.
Another framing I sometimes look through (H/T Habryka) is constrained vs unconstrained power. Having a billion dollars is unconstrained power, because you can use it to do a lot of different things – buy loads of different companies or resources. Being an engineer overseeing missile-defense systems in the USSR is very constrained: you have an extremely well-specified set of things you can control. This changes the adversarial forces on you, because in the former case a lot of people stand to gain a lot of different possible things they want if they can get leverage over you, and they have to be concerned about a lot of different ways you could be playing them. So the pressures for insanity are higher.

Paths that give you the ability to influence very specific things through very constrained powers are less insanity-inducing, I think, and I think most routes that look like “build a novel invention in a way that isn’t getting you lots of money/status along the way” are less insanity-inducing; I rarely find such a person to have become as insane as some of the tech-company CEOs have. I also think people motivated by taking responsibility for fixing a particular problem in the world are more likely to take constrained power, because… they aren’t particularly motivated by all the other power they might be able to get.
I suspect I haven’t addressed your cruxes here so far about whether this idea of heroic responsibility is/isn’t predictably misused. I’m willing to try again if you wish, or you can try pointing again to what you’d guess I’m missing.
Well said. Bravo.