Well, I meant to address them in a sweeping / not very detailed way. Basically I’m saying that they don’t seem like the sort of thing that should necessarily in real life prevent one from doing a Task-ish pivotal act. In other words, yes, {governance, the world not trusting MIRI, extreme power concentration} are very serious concerns, but in real life I would pretty plausibly—depending on the specific situation—say “yeah ok you should go ahead anyway”. I take your point about takeover-FAI; FWIW I had the impression that takeover-FAI was more like a hypothetical for purposes of design-thinking, like “please notice that your design would be really bad if it were doing a takeover; therefore it’s also bad for pivotal-task, because pivotal-task is quite difficult and relies on many of the same things as a hypothetical safe-takeover-FAI”.
> Basically I’m saying that they don’t seem like the sort of thing that should necessarily in real life prevent one from doing a Task-ish pivotal act. In other words, yes, {governance, the world not trusting MIRI, extreme power concentration} are very serious concerns, but in real life I would pretty plausibly—depending on the specific situation—say “yeah ok you should go ahead anyway”.
That’s kind of surprising as a response, given that you signed the Superintelligence Statement, which seems to contradict it. But I can see some ways you could reconcile the two, so let me not press this for now and come back to it.
> I take your point about takeover-FAI; FWIW I had the impression that takeover-FAI was more like a hypothetical for purposes of design-thinking, like “please notice that your design would be really bad if it were doing a takeover; therefore it’s also bad for pivotal-task, because pivotal-task is quite difficult and relies on many of the same things as a hypothetical safe-takeover-FAI”.
Since you write this in the past tense (“had the impression”), let me first clarify: are you now convinced that sovereign-FAI (I’m avoiding “takeover” due to objection from Habryka and this) was a real and serious plan, or do you want more evidence?
Assuming you’re convinced, I think you should (if you haven’t already) update further toward my view of Eliezer: that he is often quite seriously wrong and/or overconfident, including about very important and consequential things like high-level AI strategy. I applaud him for being able to eventually change his mind, which probably puts him in at least the 99th percentile of humanity, but by an absolute standard the years it sometimes takes are quite costly, and the new position is then often still seriously wrong. Case in point: sovereign-FAI was his second idea, adopted after he changed his mind about his first, “accelerate AGI as fast as possible (the AGI will have good values/goals by default)”.
Maybe after making this update, it becomes more plausible that his third idea (Task AGI, which I guess is the first one you personally came into contact with, and then spent years working toward) was also seriously wrong (or more seriously wrong than you think)?