Just want to mention that this post caused me to update towards a continuous take-off. The humans-and-chimps section moved me the most, as I hadn't noticed how little chimps were being optimised for general intelligence. The point that slow take-off means more change, sooner, was also key. I had been rounding slow take-off down to a position that didn't really appreciate the true potential for capability gain, but your post showed me how to square that circle.
Fast take-off seems to me at the very least like a simplifying assumption, because I feel pretty unequipped to predict what will happen in the period after 'radical change due to AGI' but before 'strongly superintelligent AGI'. I would be interested to hear anyone's suggestions for reasoning about what that world will look like.
As an additional data point: I am grateful this post was written and found it to contain many good thoughts, but I updated away from continuous take-off, due to conservation of expected evidence. I already knew Paul's basic view, and I expected a post by him with this level of thought and effort behind it to be more convincing than it was. Instead, my system-1 repeatedly felt that the arguments for fast takeoff were not convincingly answered, or often not even understood as I understand them. Paul's model and mine, of what can accomplish what and of the object-level steps that would produce a takeoff, seem very far apart, and my instinctive reading of Paul's model doesn't seem plausible to me (which means I likely have it wrong). I even saw pointers to some good arguments for fast takeoff I hadn't properly considered.
I found it especially interesting that Paul doesn't intuitively grok the concept of things clicking. I am very much a person of the click. I am giving myself a task to add a post called Click to my drafts folder.
Note that this comment is not an attempt to respond to the content, or to convince anyone, including Paul; I hope at some future date to write up my thoughts carefully in a way that might be convincing, or that would let Paul or others point out my mistakes (in addition to the Click post).
If people already know my views, they should update away from them about half the time when they hear my arguments, so I guess I shouldn't be bummed (though hopefully that usually involves their independent impressions moving towards mine while their peer-disagreement adjustment shrinks).
I'd love to read an articulation of the fast-takeoff argument in a way that makes sense to me, and I wrote this post in part to elicit one. (One problem is that different people seem to have wildly different reasons for believing in fast takeoff.)
I would be very interested to see this!