It seems like the growth models already take much of that into account, the same way that they do crime or war: if new technologies create new crime (which of course they often do), then that simply slightly offsets the benefits of those technologies, and it is the net benefit which shows up in the long-term growth, rather than some ‘pure’ benefit free of any drawbacks. And likewise for technologies as a whole: if you’re inventing some unknown grab-bag of technologies each time-period, then it’s the net of all the good ideas, offset slightly by the bad ones, that is getting measured and driving the growth in the next time-period, etc. It would be like measuring the growth of an actual tumor: whatever growth you observe, well, that must be the net growth after the defectors inside the tumor have done their worst, by definition.
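To make the accounting point concrete, here is a minimal simulation sketch (every number is illustrative, not taken from any actual growth model): each period the economy draws a grab-bag of mostly-good, partly-bad inventions, and the only statistic an outside observer can record is the net.

```python
# Minimal sketch: measured growth is always net of the bad ideas.
# All parameters (p_bad, the effect sizes) are assumptions for illustration.
import random

random.seed(0)

def simulate(periods=50, ideas_per_period=20, p_bad=0.2):
    gdp = 1.0
    for _ in range(periods):
        net_effect = 0.0
        for _ in range(ideas_per_period):
            if random.random() < p_bad:
                net_effect -= random.uniform(0.0, 0.002)  # crime, war, misuse
            else:
                net_effect += random.uniform(0.0, 0.004)  # ordinary benefits
        gdp *= 1.0 + net_effect  # only the net ever reaches the growth ledger
    return gdp

print(f"final output (net of all bad ideas): {simulate():.2f}")
```

Nothing in the observed `gdp` series lets you decompose it back into the ‘pure’ benefits and the harms; the harms are already priced in.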
So you’d have to invoke some sort of non-constancy or non-proportionality: “yes, the bad ideas are only an offset, up until some threshold value like ‘inventing nuclear bombs’” (like Bostrom’s ‘black balls’). But then your results seem dangerously circular: if you assume some fat-tailed payoff from the bad ideas after a certain threshold, or increasingly with time, you are building in your conclusion, like “we should halt all technological progress forever”.
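To see the circularity concretely, extend the same toy model with an assumed ‘black ball’: past an arbitrary technology threshold, each further period carries a small chance of a catastrophic loss. The threshold, the per-period probability, and the size of the loss here are all assumptions invented for illustration, and it is those assumptions, not anything the model discovers, that decide whether halting wins.

```python
# Sketch of the circularity: add an assumed fat tail past a threshold,
# and 'halt forever' wins by construction. threshold, p_cat, and the
# 99% loss are pure assumptions, not empirical inputs.
import random

random.seed(0)

def simulate(halt=False, periods=200, threshold=100, p_cat=0.02):
    gdp = 1.0
    for t in range(periods):
        if halt and t >= threshold:
            break  # freeze progress: no further growth, but no tail risk
        gdp *= 1.003  # steady net-positive growth, as before
        if t >= threshold and random.random() < p_cat:
            gdp *= 0.01  # the assumed catastrophe wipes out ~99%
    return gdp

def mean(f, n=5000):
    return sum(f() for _ in range(n)) / n

# The assumed tail, not the model, decides that halting comes out ahead:
print(f"keep inventing:    {mean(simulate):.3f}")
print(f"halt at threshold: {mean(lambda: simulate(halt=True)):.3f}")
```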