I think “market failure” describes an interesting phenomenon, and we do need some term for situations where negative externalities are being generated, a public good is under-produced, and so on. At the same time, it is easy to see how “market failure” gives laypeople unintended extra connotations.
Specifically, I agree that this phenomenon generalizes beyond what most people think of as “markets” (i.e. private firms doing business). I can see how this would bias most people’s hasty analysis away from potential free-market solutions and towards the status quo or cognitively simple solutions (“We just ought to pass a law! Let’s form a new agency to enforce stricter regulations!”) without also taking the time to weigh the costs of those government interventions.
In some spaces there are private self-regulatory organizations, consumer watchdogs, civil liability, and licensing bodies that can push firms closer to socially optimal outcomes, while having a greater incentive than the government to pay attention to the costs of those “regulations.” But otherwise there’s not really a market for law and regulation itself within the borders of any one country.
In short, I fear many people perceive the words “market failure” as a local condemnation of capitalism and free markets, when perhaps the better solution to these “market failures” is more market: a more responsive and accountable ecosystem of firms performing the regulatory function that government currently monopolizes.
Thank you for a very thorough post. Your writing has given me a more organized account of some of my own reservations about longtermism.
I agree with CrimsonChin that there’s a lot in your post many longtermists would agree with, including the practicality of focusing on short-term sub-goals. I also personally believe that initiatives like global health and poverty reduction probably improve the prospects of the far future, even if their expected value seems lower than that of X-risk mitigation.
Nonetheless, I still think we should be motivated by the immensity of the future, even when it is offset by tiny probabilities and huge margins of error, because the lower bounds of these estimates strike me as high enough to be very compelling. The post How Many Lives Does X-Risk Work Save From Nonexistance On Average demonstrates my thinking: its estimates of future lives vary by dozens of orders of magnitude(!), yet it still arrives at very high expected values for X-risk work even on the lower bounds.
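To make the arithmetic concrete, here is a toy version with illustrative numbers of my own (not taken from that post): suppose a deliberately pessimistic lower bound of $10^{16}$ future lives, and suppose a given unit of X-risk work reduces extinction probability by only $10^{-10}$. Then

$$\mathbb{E}[\text{lives saved}] = 10^{16} \times 10^{-10} = 10^{6},$$

which already dwarfs what almost any present-day intervention can plausibly claim, before even touching the less pessimistic ends of those intervals.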
Even I don’t really feel anything when I read such massive numbers, and I acknowledge how large the intervals of these estimates are, but I wouldn’t say they “make no sense to me” or that “to the extent we can quantify existential risks in the far future, we can only say something like ‘extremely likely,’ ‘possible,’ or ‘can’t be ruled out.’”
For what it’s worth, I used to be essentially an egoist, unmoved by every charity I had encountered. It seemed to me that humanity was on a good trajectory and my personal impact would be negligible. It was only after I started thinking about really large numbers, like the duration of the universe, the age of humanity, and the number of potential minds in the universe (credit to SFIA), along with how neglected these figures were, and moral uncertainty, that I started to feel like I could and should act for others.
There are definitely many, possibly most, contexts where incredibly large or small numbers can be safely disregarded. I wouldn’t be moved by them in adversarial situations like Pascal’s Mugging, or in my day-to-day moral decision-making. But for questions like “What should I really care deeply about?” I think they should be considered.
As for Pascal’s Wager, it calls for picking one very specific God to worship out of a space of infinitely many contradictory gods, and that infinitely small probability of success cancels out the infinite reward of heaven over hell or nonexistence. Longtermism, by contrast, isn’t committed to any specific action regarding the far future, just to the well-being of future entities generally. I expect that most longtermists would gladly pivot away from a specific cause area (like AI alignment) if they were shown that some other cause (e.g., a planet-killing asteroid certain to collide with Earth in 100 years) was likely to affect the far future just as adversely.
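To put that disanalogy in rough expected-value terms (my own loose formalization, not anything from the post): the Wager asks you to evaluate

$$\mathbb{E}[\text{worship god } i] = \underbrace{\infty}_{\text{reward}} \times \underbrace{p_i}_{\to\,0},$$

where infinitely many mutually contradictory gods drive each $p_i$ toward zero, so the products can’t be meaningfully ranked against one another. Longtermist bets instead multiply finite (if astronomically large) payoffs by small probabilities with non-trivial lower bounds, so the expected values stay finite and can actually be compared across causes.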