which can produce numbers like 30% yearly economic growth. Epoch feels the AGI.
Ironic. My understanding is that Epoch’s model substantially downplays the effects of AI over the next decade or two. I’m too busy to elaborate right now, but here’s a quote from their FAQ:
The main focus of GATE is on the dynamics in the leadup towards full automation, and it is likely to make poor predictions about what happens close to and after full automation. For example, in the model the primary value of training compute is in increasing the fraction of automated tasks, so once full automation is reached the compute dedicated to training falls to zero. However, in reality there may be economically valuable tasks that go beyond those that humans are able to perform, and for which training compute may continue to be useful.
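To make that mechanism concrete, here is a minimal toy sketch (not Epoch’s actual GATE code; the functional forms, the saturation threshold, and the 30% training share are all made-up illustrative assumptions) of a dynamic where training compute only matters for raising the automated-task fraction, so the training allocation collapses to zero once full automation is reached:

```python
# Toy illustration only: a hypothetical planner that values training compute
# solely for increasing the fraction of automated tasks, as described in the
# quoted FAQ. All numbers and functional forms here are assumptions.

def automated_fraction(cumulative_training_compute: float) -> float:
    """Hypothetical mapping from cumulative training compute to the share of
    tasks automated, saturating at 1.0 (full automation)."""
    return min(1.0, cumulative_training_compute / 100.0)

def training_allocation(frac_automated: float, total_compute: float) -> float:
    """If training compute's only value is raising the automated fraction,
    the planner allocates nothing to training once that fraction hits 1.0."""
    return 0.0 if frac_automated >= 1.0 else 0.3 * total_compute

cumulative = 0.0
for year in range(15):
    frac = automated_fraction(cumulative)
    train = training_allocation(frac, total_compute=30.0)
    cumulative += train
    print(f"year {year}: automated={frac:.2f}, training compute={train:.1f}")
```

In this sketch the training allocation drops to zero the year full automation is reached, which is exactly the behavior the FAQ flags as unrealistic if tasks beyond human capability remain valuable to train for.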
(I love Epoch, I think their work is great, I’m glad they are doing it.)
I don’t really get the point of releasing a report that explicitly assumes x-risk doesn’t happen. Seems to me that x-risk is the only outcome worth thinking about given the current state of the AI safety field (i.e., given how little funding goes toward x-risk). Extinction is so catastrophically worse than any other outcome* that more “normal” problems aren’t worth spending time on.
I don’t mean this as a strong criticism of Epoch; it’s more that I just don’t understand their worldview at all.
*except S-risks, but Epoch isn’t doing anything related to those AFAIK
Working through a model of the future within a better-understood hypothetical refines gears that apply outside the hypothetical. Exploratory engineering, for example, is about designing machines that can’t currently be built in practice and often will never be worthwhile to build as designed. It still gives a sense of what’s possible.
(Attributing value to individual steps of a useful activity is not always practical. Research is like that: it’s very useful that it happens overall, but individual efforts are hard to judge, so acting on attempts to judge them risks the Goodhart curse.)
I mean, one could say they don’t feel the ASI.
Fair enough