[Question] What is to be done? (About the profit motive)

I’ve recently started reading posts and comments here on LessWrong, and I’ve found it a great place for accessible, productive, and often nuanced discussions of AI risks and their mitigation. One thing that’s been on my mind is that seemingly everyone takes for granted that the world as it exists will eventually produce AI, and likely sooner than we have the knowledge and tools to ensure it is friendly. Many seem convinced this outcome is inevitable, and that we can do little or nothing to alter its course. The contributors most often cited are current incentive structures: profit, power, and the nature of present-day economic competition.

I’m therefore curious why I see so little discussion of the possibility of changing these incentive structures. Mitigating the profit motive in favor of incentives more aligned with human well-being seems to me an obvious first step. In other words, to maximize the chance of aligned AI, we must first build an aligned society. Is this idea not discussed here because it is viewed as impossible? Undesirable? Ineffective? I’d love to hear what you think.