Most AI companies are paying at least lip-service to AI safety and alignment. Several of them are actually expending capital on AI alignment research. Safe AI is obviously more marketable and profitable than unsafe AI — shareholder value is zero if the shareholders are all extinct. If we can just solve AI alignment, without too high an alignment tax, then the frontier labs will adopt the solution and use it. This seems a more plausible bet to many people than winning a lobbying competition with an industry which will (before AGI is achieved and AI alignment becomes an x-risk) be raising hundreds of billions of dollars in capital.
I don’t see the argument above as conclusive, but I don’t see yours as conclusive either. If we have multiple chances to save humanity, surely we should be funding all of them? So we should be expending time, effort, and money on both approaches. While money is fungible, the people with the skills and talents to do technical AI research and those with the skills and talents to do AI governance advocacy overlap very little, so their time and effort is mostly not fungible. It then comes down to a cost-benefit analysis, taking into account things like diminishing returns.
I have no complaints at all about technical AI research! Indeed, I agree with you that industry will spontaneously adopt most of the good technical AI safety ideas shortly after they’re invented.
I’m arguing about the relative merits of AI governance research vs. AI governance advocacy. This probably wasn’t clear if you’re just jumping in at this point of the sequence, for which I apologize.
My apologies — I was indeed jumping in without reading your entire sequence. Having gone and read the earlier post you linked to, I am now in hearty agreement with you.