Thanks, those links are interesting. Still, improving algorithms and compute don't seem like sufficient conditions: I was also wondering where the training data would come from (and how long gathering it and doing the required training would take). Going from being able to do AI development work to being able to do most economically useful tasks seems like a big step. It seems speculative* to me, since I don't think there are demonstrations of anything like this working yet. It would be interesting to know whether there is a path to getting suitable data (enough input-output pairs, or tasks with well-specified, automatically verifiable goals) that doesn't require any speculative leaps in data efficiency, or whether a concrete method has been proposed that wouldn't need this data at all. The takeoff forecast page seems to use a framework where the AI could speed up any task a human could do, but I'm not aware of any demonstration that this holds when generalising to new tasks. Even setting aside the takeover scenario, it would be useful to see an analysis of how automation of job tasks that make up a sizeable fraction of human labour time could be done, with reasonable projections of the resources required given current trends, just to get a better sense of how job automation would proceed.
Similarly, it's not clear to me where the data would come from to train agents for long-range strategic planning (particularly about novel scenarios that humans haven't navigated before) or for building efficient robot factories (which seems like it would require quite a bit of trial and error in the physical world, unless some high-quality simulated environment is being presumed).
*By speculative, I don't mean that it's unlikely, just that it doesn't seem to be constrained by good evidence we have at present, so different reasonable people may come to very different conclusions. It seems helpful to me to identify where there could be very wide differences in people's estimates.