I think there are multiple ways of interpreting “alignment is as difficult as X”. There’s “the safety issues in building AGI are similar to the safety issues in building X”, but there’s also “solving the safety issues in building AGI takes the same level of total effort as building X”.
I interpreted Chris Olah’s graph as the latter – that the ‘steam engine world’ is a world where solving AI safety takes as much total effort as building the steam engine, agnostic of how that effort is spent. NOT that in those worlds, you solve AI safety issues in the same way that you solve steam engine safety issues.
Put another way, I was imagining the graph as primarily quantitative – you could crudely replace the x-axis with “# person-hours”.