Mostly abstract arguments that don’t actually depend on DL in particular (or at least not to a strong degree). E.g., stupid evolution was able to do it with human brains. This spreadsheet is nice for playing with the implications for different models (I couldn’t find Ajeya’s report it belongs to). Though I haven’t taken the time to think this through thoroughly, because playing through reasonable values gave distributions that seemed too broad to bother.
The point I wanted to make is that you can believe things are slowing down (I am more sympathetic to the view that AI will not have a big/galactic impact until things are too late) and still be worried.
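To illustrate why “playing through reasonable values” can give distributions too broad to be useful, here is a minimal Monte Carlo sketch in the spirit of a bio-anchors-style model. Every number in it is an illustrative assumption of mine (the FLOP prior, the growth rate, the 2024 baseline), not a value from Ajeya’s actual report or spreadsheet:

```python
import random

def sample_arrival_years(n=100_000, seed=0):
    """Sketch: sample arrival years for transformative AI under an
    assumed lognormal prior over required training compute.
    All parameters below are illustrative placeholders."""
    rng = random.Random(seed)
    years = []
    for _ in range(n):
        # Assumed broad prior: log10(FLOP needed) ~ Normal(35, 3),
        # i.e. a median of 1e35 FLOP with ~3 OOM of uncertainty.
        log10_flop_needed = rng.gauss(35, 3)
        # Assumed baseline: largest training run ~1e25 FLOP in 2024,
        # with effective compute growing ~10x every 2 years (0.5 OOM/yr).
        oom_gap = log10_flop_needed - 25
        years.append(2024 + oom_gap / 0.5)
    years.sort()
    p10 = years[n // 10]
    p50 = years[n // 2]
    p90 = years[9 * n // 10]
    return p10, p50, p90

p10, p50, p90 = sample_arrival_years()
# Even with these fairly tame assumptions, the 10th-90th percentile
# range spans well over a decade -- small tweaks to the prior widen
# it to many decades, which is the "too broad to bother" problem.
```

With only 3 OOM of uncertainty on required compute, the implied 10th–90th percentile window is already ~15 years wide; pushing the prior toward evolution-anchor scales stretches it across centuries, which is roughly why the exercise felt uninformative.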