On the other hand, there’s another concern I’ve been wary of in the context of AI safety startups (which is what I’m currently exploring) and research in general: following the short-term success gradient. In startups, you can start with a noble vision and then get pressured further and further away from it simply because you are pursuing the customer gradient and “building what people want.” If your goal is large-scale (venture) success, this only makes sense: you need customers and traction for your Series A, after all. Even in research, there’s only so much fucking around you can do before people want something legible from you.
This is my biggest concern with d/acc-style techno-optimism: it seems to assume that genuinely defensive technologies can compete economically with offensive ones (all it takes is the right founders, seed funding, etc.).
Whereas my impression is that any kind of ethical or ideological commitment immediately puts a startup at a massive structural disadvantage against those who choose simply to give the market what it wants (acceleration).
This is my biggest concern with d/acc-style techno-optimism: it seems to assume that genuinely defensive technologies can compete economically with offensive ones (all it takes is the right founders, seed funding, etc.).
Does it assume that? There are many ways for governments to adjust for d/acc tech being less innately appealing by intervening on market incentives: for example, through subsidies, tax credits, or benefits for those who adopt these products. Doing that may, for various reasons, be more tractable than command-and-control regulation. Either way, both approaches (incentivising or mandating) seem easier once the tech actually exists and is somewhat proven, so you may want founders to start d/acc projects even if you think they would not become profitable in the free market, and even if you want to mandate that tech eventually.
(That is not to say that there is a lot of useful d/acc tech waiting to be created that would make a major difference if implemented. I just think that, if there is, then that tech not being able to compete economically isn’t necessarily a huge problem.)
You are right that I am being a bit reductive. Maybe it would be better to say that it assumes some kind of ideal combination of innovation, markets, and technocratic governance would be enough to prevent catastrophe?
And to be clear, I do think it’s much better for people to be working on defensive technologies than not to. And it’s not impossible that the right combination of defensive entrepreneurs and technocratic government incentives could genuinely solve a problem.
But I think this kind of faith in “business as usual, but a bit better” can lead to a kind of complacency where you conflate working on good things with actually making a difference.