I agree with many particular points in this post and the apparent thesis[1], but also think most people[2] should focus on short timelines (contrary to the apparent implication of the post). The reasons why are:
Short timelines have more leverage. This isn’t just because of more neglectedness now, but also because: (1) it’s easier to target approaches towards shorter timelines where less has changed, (2) short timelines are riskier (and I think riskier worlds are more leveraged for most interventions, though this is sensitive to my views on risk and the most leveraged interventions), and (3) it’s easier to operate in near mode when targeting short timelines, and I expect this has a bunch of benefits (mostly from a psychological / cognitive-bias perspective).
I put sufficiently high probability on short timelines: maybe 25% in <2.5 years to full AI R&D automation and 50% in <5. I don’t think deference to other experts shifts me towards longer timelines by much.[3] I think there are good arguments for this view, though I certainly agree there isn’t consensus and the arguments aren’t that clear cut or legible.
I expect work explicitly focused on short timelines (across most areas) to transfer pretty well and generally not cause that much downside in longer timelines. I think the transfer in the other direction tends to look less good in practice. (To be clear, I think work focused on short timelines shouldn’t neglect thinking about downsides in longer timelines, I just think this is usually not that big of a deal.)
The counterargument I’m most sympathetic to is that (1) a high fraction of the work should be focused on “better futures” and (2) for better futures work, the leverage is higher in longer timelines. (I don’t currently agree with either of (1) or (2), but I’m very uncertain.)
[1] Assuming the thesis is “our probability distribution should span a wide range (including Daniel’s distribution as an example of a wide range) and we should take this into account in our decision making.”
[2] Or at least most of the quality-weighted labor supply.
[3] I might have a small difference between these stated probabilities and my all-things-considered view, which includes deferring to others. To avoid deference cascades, I usually state probabilities somewhat closer to my non-deference view. (It’s hard to fully disentangle deference because my views are based on talking to a wide range of different people.) Post-deference, my distribution is a bit wider with a correspondingly longer median. But I don’t think this makes much difference either way, and deference also pulls up my probability on very short timelines.
I expect work explicitly focused on short timelines … generally not cause that much downside in longer timelines
Hm. Skeptical of this. From my relatively lay perspective, it sure seems like Anthropic and others use justifications like “This could be coming soon. On that assumption, we can get to the forefront and do our best to work out safety and do the right thing.” and then they push the forefront forward. Which is bad to do.
I am confused about how what you said is related to what you quoted. I don’t take any of this to be speaking to employees of the AGI companies, since none of them have any agency with respect to making AI go well.
AGI companies exist to create AGI, and their structures don’t allow for voluntary courses of action where they intentionally do that more slowly or not at all. Whether they have access to reasonable-sounding justifications for doing so is not a causal factor, and whether individual employees care about making a good human future is not a causal factor in whether that occurs.
Even assuming this is true (it’s of course not 100% true): (1) they may not view it that way, so Greenblatt’s comment and/or the OP could still be addressing them and their decision-making; and (2) they decided to work there and continue to decide to work there, which is itself a causal path through which they exercise agency.
This also applies to e.g. people supporting / funding these companies for supposed X-risk reduction reasons.