What about the “Task AGI” and “pivotal act” stuff? That was, at the very least, advising others to think seriously about using aligned AI to take over the world, on the basis that the world was otherwise doomed without a pivotal act. Then there was the matter of how much leverage MIRI thought they had as an organization, which is complicated by the confidentiality.
> What about the “Task AGI” and “pivotal act” stuff?
Plausible! Do you have a link handy? Seems better for the conversation to be grounded in an example, and I am not sure exactly which things you are referencing here.
On Arbital: Task directed AGI and Pivotal act.
Offline, at MIRI there were discussions of possible pivotal acts, such as melting all GPUs. I suggested “what about using AI to make billions of dollars” and the response was “no it has to be much bigger than that to fix the game board”. There was some gaming of e.g. AI for uploading or nanotech. (Again, unclear how much leverage MIRI thought they had as an organization)
Hmm, maybe I am misunderstanding this.
The “Task AGI” article is about an approach to building AGI that is safer than building a sovereign, and it is published on the open internet. I do not disagree that MIRI was working on trying to solve the alignment problem (as I say above, that is what two of the bullet points of my summary of their strategy are about), and this seems to be an attempt at making progress on it. It doesn’t seem to me to be much evidence for “MIRI was planning to build FAI in their basement”. Yes, my understanding is that MIRI is expecting that at some point someone will build very powerful AI systems. It would be good for them to know how to do that in a way that has good consequences instead of bad. This article tries to help with that.
The “Pivotal Act” article seems similar? I mean, MIRI is still working on a pivotal act in the form of an international AI ban (maybe subsequently followed by an intelligence augmentation program). I am working on pivotal acts all day! It seems like a useful handle to have. I use it all the time. It does seem to frequently be misunderstood by people to mean “take over the world”, but there is no example in the linked article of anything like that. The most that the article talks about is:
> - upload humans and run them at speeds more comparable to those of an AI
> - prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)
> - design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)
Which really doesn’t sound much like a “take-over-the-world” strategy. I mean, the above still seems to me like a good plan, in that insofar as a leading lab has no choice but to pursue AGI as a result of an intense race, I would like them to give it a try. It seems terribly reckless, and we are not remotely on track to doing this with any confidence, but I am in favor of people openly publishing things that other people should do if they find themselves building ASI. And again, the above bullet lists really don’t sound like “taking over the world”, so I still have trouble connecting this to the paragraph in the OP I take issue with.
> I suggested “what about using AI to make billions of dollars” and the response was “no it has to be much bigger than that to fix the game board”. There was some gaming of e.g. AI for uploading or nanotech. (Again, unclear how much leverage MIRI thought they had as an organization)
None of these sound much like “taking over the world”? Like, yes, if you were to write a paper or blogpost with a plan that allowed someone to make billions of dollars with AI, that seems like it would basically do nothing, and if anything make things worse. It does seem like helpful contributions need to be both of a different type signature and much bigger than that.
> It doesn’t seem to me to be much evidence for “MIRI was planning to build FAI in their basement”.
I didn’t say that.
> The “Pivotal Act” article seems similar? I mean, MIRI is still working on a pivotal act in the form of an international AI ban (maybe subsequently followed by an intelligence augmentation program). I am working on pivotal acts all day!
At the time it was clear MIRI thought AGI was necessary for pivotal acts, e.g. to melt all GPUs or to run an upload. I remember discussing “weak nanotech” and so on, and they didn’t buy it; they thought they needed aligned task AGI to do a pivotal act.
> Which really doesn’t sound much like a “take-over-the-world” strategy.
Quoting the Task AGI article:
> The obvious disadvantage of a Task AGI is moral hazard—it may tempt the users in ways that a Sovereign would not. A Sovereign has moral hazard chiefly during the development phase, when the programmers and users are perhaps not yet in a position of special relative power. A Task AGI has ongoing moral hazard as it is used.
So this is acknowledging massive power concentration.
Furthermore, in the context of the disagreement with Paul Christiano, it was clear that MIRI people thought there would be a much bigger capability overhang / FOOM, such that the system did not have to be “competitive”: it could be a “limited AGI” that was WAY less efficient than it could be, because of a pre-existing capability overhang versus the competition. Which, naturally, goes along with massive power concentration.
Wait, you didn’t? I agree you didn’t say “basement”, but the section of the OP I am responding to is saying:
> MIRI’s plan, to build a Friendly AI to take over the world
And then you said:
> What about the “Task AGI” and “pivotal act” stuff? [Which is an example of MIRI’s plan to build a Friendly AI to take over the world]
The part in square brackets seems like the very clear Gricean implicature here? Am I wrong? If not, what did you mean to say in that sentence?
All the other stuff you say seems fine. I definitely agree MIRI talked about building AIs that would be very powerful, and also considered whether power concentration would be a good thing, as it would reduce race dynamics. But again, I am just talking about the part of the OP that says it was MIRI’s plan to build such a system and take over the world, themselves, “in service of reducing x-risk”. None of the above seems like much evidence for that? If you agree that this was not MIRI’s plan, then sure, we are on the same page.
> The part in square brackets seems like the very clear Gricean implicature here? Am I wrong? If not, what did you mean to say in that sentence?
See the two sentences right after.
> That was, at the very least, advising others to think seriously about using aligned AI to take over the world, on the basis that the world was otherwise doomed without a pivotal act. Then there was the matter of how much leverage MIRI thought they had as an organization, which is complicated by the confidentiality.
The Gricean implicature of this is that I, at least, don’t think it’s clear that MIRI wanted to build an AI to take over the world themselves. Rather, they were encouraging pivotal acts generally, and there’s ambiguity about how much they were individually trying to do so.
The literal implication of this is that it’s hard for people to know how much leverage MIRI has as an organization, which implies it’s hard for them to know whether MIRI wanted to take over the world themselves.
Cool, yeah. I mean, I can’t rule this out confidently, but I do pretty strongly object to summarizing this state of affairs as:
> Of course the most central old debate was over whether MIRI’s plan, to build a Friendly AI to take over the world in service of reducing x-risks, was a good one.
Like, at least in my ethics there is an enormous gulf between trying to take over the world, and saying that it would be a good idea for someone, ideally someone with as much legitimacy as possible, who is going to build extremely powerful AI systems anyways, to do this:
> - upload humans and run them at speeds more comparable to those of an AI
> - prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)
> - design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)
I go around and do the latter all the time, and think more people should do so! I agree I can’t rule out from the above that MIRI was maybe also planning to build such systems themselves, but I don’t currently find it that likely, and object to people referring to it as a fact of common knowledge.