I think I agree with Thane’s point 1: because it seems like building intelligence requires a series of conceptual insights, there may be limits to how far in advance I can know it’s about to happen (without, like, already knowing how to build it out of math myself). But I don’t view this as a position of total epistemic helplessness—it’s clear that there has been a lot of progress over the last 40 years, enough that we should be more than halfway there.
And yeah, I don’t view AGI as equivalent to other technologies—it’s not even clear yet what all the technical problems that need to be solved are! I think it’s more like inventing a tiny mechanical bird than inventing a plane. Birds have probably solved a lot of subproblems that we don’t know exist yet, and I’m really not sure how far we are from building an entire bird.
Those are not incompatible. Suppose that you vaguely feel that a whole set of independent conceptual insights is missing, and that some of them will only be reachable after some previous ones have been discovered; e.g. you need to go A→B→C. Then the expected time until the problem is solved is the sum of the expected wait-times T_A + T_B + T_C, and if you observe A and B being solved, it shortens to T_C.
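The additivity here can be sketched with a quick simulation. (The mean wait-times below are made-up numbers purely for illustration, and the exponential distribution is just one convenient assumption, not anything claimed above.)

```python
import random

# Hypothetical illustration: three conceptual insights A, B, C must be
# discovered in order, each after an independent exponentially
# distributed wait. The mean waits (in "years") are invented numbers.
MEANS = {"A": 10.0, "B": 5.0, "C": 15.0}

def expected_remaining(n=200_000, remaining=("A", "B", "C")):
    """Monte Carlo estimate of the mean time until all remaining
    insights are discovered (sequential, so waits simply add)."""
    total = 0.0
    for _ in range(n):
        total += sum(random.expovariate(1.0 / MEANS[k]) for k in remaining)
    return total / n

# Before any insight: expectation is T_A + T_B + T_C = 10 + 5 + 15 = 30.
# After observing A and B solved, only T_C remains, i.e. about 15.
print(expected_remaining())                  # ≈ 30
print(expected_remaining(remaining=("C",)))  # ≈ 15
```

So observing intermediate insights being solved shortens the expected remaining time, without contradicting the claim that the endpoint itself is hard to foresee.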
I think that checks out intuitively. We can very roughly gauge how “mature” a field is, and therefore how much ground there likely is left to cover.
Yes, I agree