Can we model almost all money choices in our life as ethical offsetting problems?
Example 1: You do not give money to a homeless person on the street, or to a friend who’s struggling financially and maybe doesn’t show the best sense when it comes to money management. You give the money you save to a homeless shelter or to politicians promoting basic income or housing programs.
Example 2: You buy cheaper clothes from a company that probably treats its workers worse than other companies. You give the money you save to some organization that promotes ethical global supply chains or gives direct money aid to people in poverty.
(Note: In all these examples, you might choose to give the money to some organization that you believe has a larger net positive than the direct offset organization. So you might not give money to homeless people, and instead give it to the Against Malaria Foundation, etc. This is a modification of the offsetting problem that ignores questions of the fungibility of well-being among possible beneficiaries.)
The argument for: In the long term, you might promote systems that prevent these problems from happening in the first place.
The argument against: For example 1, social cohesion. You might suck as a friend, get a reputation for sucking as a friend, and feel less safe in your community knowing that if everyone acted the way you do, you wouldn't get support. For example 2, the market mechanism might just be better: maybe you should vote directly with your money? It's fuzzy, though, since withholding money from companies that pay horribly may just drive pay down further. Some studies on this would be helpful.
Critical caveat: Are you actually shuttling the money you're saving by doing the probably-negative thing into the probably-positive thing? It's very easy to do the bad thing, say you're going to do the good thing, and then forget to do the good thing or otherwise rationalize it away.
I’d hesitate to make predictions based on the slowdown of GPT-3 to Megatron-Turing, for two reasons.
First, GPT-3 represents the fastest, largest increase in model size in this whole chart. If you only look at the models before GPT-3, the drawn trend line tracks well. Note how far off the trend GPT-3 itself is.
Second, GPT-3 was released almost exactly when COVID became a serious concern in the world beyond China. I have to imagine that this slowed model development, but it should be less of a factor going forward.
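To make the "slowdown" concrete, here's a quick back-of-envelope sketch using the public parameter counts (175B for GPT-3, 530B for Megatron-Turing NLG, released roughly 17 months apart); the exact month gap is an approximation on my part, and this is just the implied growth rate between those two points, not a fit to the whole chart:

```python
import math

# Public parameter counts (the 17-month gap is approximate):
# GPT-3 (May 2020): ~175e9 parameters
# Megatron-Turing NLG (Oct 2021): ~530e9 parameters
gpt3_params = 175e9
mtnlg_params = 530e9
months_elapsed = 17

# Implied doubling time under exponential growth:
# doubling_time = elapsed * ln(2) / ln(size_ratio)
doubling_months = months_elapsed * math.log(2) / math.log(mtnlg_params / gpt3_params)
print(f"Implied doubling time: {doubling_months:.1f} months")  # roughly 10-11 months
```

Even in the "slow" GPT-3 to Megatron-Turing interval, parameter count was still doubling on the order of every year, which is why I'd be careful reading too much into the apparent flattening.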