The public goods idea _does_ help explain things if we think there’s a threshold issue (not valuable unless a certain amount is redistributed) _AND_ a coordination problem such that many people would like to donate, but only if they know the total is over the threshold.
It may also explain things if an altruist’s motivation for not donating more (and thereby forgoing utility) is some form of punishment of free riders (those who don’t donate, but still get value).
I agree that the more likely explanation is that poverty altruism isn’t linear (in utility with money donated) for any individual, and most people are, in fact, giving at the level they want to give. They would like to get some “free” utility by encouraging/forcing others to give more.
This isn’t at odds with a public goods model—there are lots of public goods that go un-provided because they’re not worth it to enough people to provide them privately or mandate them publicly. “This is a public good; therefore government must do it” is not a valid argument.
I suspect you’re well into the measurement-error range for the things you’re talking about. It would be silly to expect a measurable change in QALYs from something you spend a tiny fraction of your waking year, or a tiny fraction of your annual income, on—let alone to settle the debate about how to adjust the quality measure for such things.
Fortunately, at these scales, you can use anecdotal evidence of improvement in YOUR experience, for many things. Your friend’s smile or your co-founder’s continued stream of horrible ideas are plenty of reward for the low cost of the kindness you’re considering.
“helping to decide where to find the interesting bits” is exactly where this technique shines, and I don’t think it’s overrated (at least in my circles).
Note that even in the “yes, this is the right level, for X reasons” case, there’s still a bunch of value in identifying the forces at equilibrium that make this the right level. You can then ask, “Do you want to change some of THOSE values?”
I suspect it all comes down to modeling of outcome distributions. If there’s a narrow path to success, then both biases are harmful. If there are a lot of ways to win, and a few disasters, then optimism bias is very harmful, as it makes the agent not loss-averse enough. If there are a lot of ways to win a little, and few ways to win a lot, then pessimism bias is likely to miss the big wins, as it’s trying to avoid minor losses.
I’d really enjoy an analysis focused on your conditions (maximize vs satisfice, world symmetry) - especially what kinds of worlds and biased predictors lead satisficing to get better outcomes than optimizing.
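Here’s the kind of harness I have in mind, as a minimal Monte Carlo sketch (every parameter is my own assumption for illustration: additive bias, Gaussian estimation noise, a fixed aspiration level, and worlds skewed by rare ±10 outcomes):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_world(skew, n_options=10):
    """True option values: mostly modest draws, plus rare extremes whose
    sign sets the world's skew ('few_disasters' or 'few_jackpots')."""
    values = rng.normal(0.0, 1.0, n_options)
    rare = rng.random(n_options) < 0.1
    values[rare] = -10.0 if skew == "few_disasters" else 10.0
    return values

def choose(values, bias, mode, noise=2.0, aspiration=0.5):
    """Realized value of the chosen option; 0.0 means 'do nothing'."""
    predicted = values + bias + rng.normal(0.0, noise, len(values))
    if mode == "optimize":
        best = int(np.argmax(predicted))
        return values[best] if predicted[best] > 0 else 0.0
    for i, p in enumerate(predicted):   # satisfice: first "good enough"
        if p > aspiration:
            return values[i]
    return 0.0

for skew in ("few_disasters", "few_jackpots"):
    for mode in ("optimize", "satisfice"):
        for bias in (-2.0, 0.0, 2.0):   # pessimist / calibrated / optimist
            mean = np.mean([choose(make_world(skew), bias, mode)
                            for _ in range(20000)])
            print(f"{skew:13s} {mode:9s} bias {bias:+.0f}: mean payoff {mean:+.3f}")
```

The interesting work is in picking outcome distributions and bias models you actually believe; this just makes the optimize-vs-satisfice comparison cheap to rerun under each.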
[upvoted for talking about something that’s difficult to model and communicate about]
Hmm. I believe (with fairly high confidence—it would take a big surprise to shift me) a combination of empty and closed. Moments of self-observed experience are standalone, and woven into a fabric of memories in a closed, un-sharable system that will (sooner than I prefer) physically degrade into non-experiencing components.
I haven’t found anyone who claims to be open AND is rational enough to convince me they’re not just misstating what they actually experience. In fact, I’d love to hear someone talk about what it means to “want” something if you’re experiencing all things simultaneously.
I’m quite sympathetic to the argument that it is what it is, and there’s no reason to be sad. But I’m also unsure whether or why my acceptance of closed-empty existence makes you sad. Presumably, if your consciousness includes me, you know I’m not particularly sad overall (I certainly experience pain and frustration, but also joy and optimistic anticipation, in a balance that seems acceptable).
It’s not clear that positive-sum innovation is linear (or even monotonically positive) in total population. There almost certainly exist levels at which marginal mouths to feed drive unpleasant, non-productive behaviors more than they drive the growth-producing shared innovations.
Whether we’re in a downward-sloping portion of the curve, and whether it slopes up again in the next few generations, are both debatable. And they should be debated.
Wait—did someone actually show that Roko’s Basilisk is stupid and dumb? I believe that—it’s roughly a bad Pascal’s wager, but with a causal hook. But I think the reason discussion was banned was that it was causing severe discomfort, not that it was agreed to be harmless and meaningless.
In any case, your proposal does not have the causal hook that makes the Basilisk so tempting and unpleasant. It doesn’t blackmail you into creating the blackmailer.
Sure, it’s consistent to prefer that non-you poor people get food over non-you rich people getting iPhones. But most actual people prefer that THEY get an iPhone over feeding any specific poor person. People aren’t fungible, and no actual humans are fully indifferent to which humans are helped or harmed.
Hedonic adaptation (feeling reward/penalty for the relative change, much more than the absolute situation) may be a key strategy for this. It adjusts both upward and downward, to avoid either mistake for very long.
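In agent-design terms, that’s reward computed against an adaptive baseline. A toy sketch of the mechanism (my own formulation, nothing from the post): felt reward is the gap between the current situation and a set point that drifts toward recent experience.

```python
# Toy model of hedonic adaptation: felt reward tracks relative change,
# while the set point adapts toward whatever the situation has become.
def felt_rewards(situations, adapt_rate=0.3):
    set_point = situations[0]
    feelings = []
    for s in situations:
        feelings.append(s - set_point)             # react to the change
        set_point += adapt_rate * (s - set_point)  # then adapt toward it
    return feelings

# A windfall feels great briefly, then fades; a setback stings, then fades too.
print(felt_rewards([0, 0, 10, 10, 10, 2, 2, 2]))
```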
Maybe sometimes things suck because there are more people, but sometimes things suck only because mazes have the power to change the law to make things suck.
We’re in complete agreement. I’m looking for the model that tells me how to know which (or what proportion) of these is true for actual mazes today.
The first part of this seemed like mostly politics—an oversimplified, flat-out unreal example being used to justify a policy without any nuance or sense. Point 1 is just unsupported, and hard to argue for or against other than by saying your example is wrong and doesn’t justify any specific type or level of redistribution, and that you haven’t even specified what “redistribution” means, especially in a dynamic equilibrium where wealth and income are related but distinct.
Point 2 completely misses the fundamental question of what people want—Friedman’s point that if people actually cared about feeding specific poor people more than about getting a new iPhone, they’d just feed them. Instead, they want abstract poor people to get fed, and only if they can force others to do so (along with themselves, in many cases, but rarely unilaterally). You don’t address this disparity.
Point 3 is actually a reasonable start to laying out the fundamental puzzle of large-group behaviors. I’ll say that I am a consequentialist, and that I consider myself somewhat altruistic, but not an Altruist with a capital A. And I’m in the first group: I consider myself many orders of magnitude more important (to me) than very distant strangers. Not zero, but for some of ’em, more than tens of millions of times more important. For others, the discount is much smaller: very close friends and relatives may be half as important as I am.
Other people have declining marginal utility to me, and for a given level of resources, it CAN go negative. There are almost 8 billion of them currently, and I think that’s probably more than I prefer while we’re still at today’s tech level and limited to one planet. I don’t know whether concentrations of resources are necessary for large-scale endeavors, but I suspect they are, and I don’t worry too much about it.
What? In this example, the problem is not Carl—he’s harmless, and Dave carries on with the cycle (of improving the design) as he should. Showing a situation where Carl’s sensationalist misstatement actually stops progress would likely also show that the problem isn’t Carl—it’s EITHER the people who listen to Carl and interfere with Alice, Bob, and Dave, OR it’s Alice and Dave for letting Carl discourage them rather than understanding Bob’s objection directly.
Your description implies that the problem is something else—that Carl is somehow preventing Dave from taking Bob’s analysis into consideration, but your example doesn’t show that, and I’m not sure how it’s intended to.
In the actual world, there’s LOTS of sensationalist bad reporting of failures (and of extremely minor successes, for that matter). And those people who are actually trying to build things mostly ignore it, in favor of more reasonable publication and discussion of the underlying experiments/failures/calculations.
Beware the unstated alternative: is there reason to believe that un-coordinated individuals grow in power linearly (or at any faster or slower rate than corporate/government aggregations do)?
If this power calculation holds, then things suck because there are more people, not because of how they’re organized. I’d call that conclusion fairly repugnant.
Not sure there’s a general term for it, but “psychoacoustic model” is the term for the component that estimates the perceptual importance of information in lossy audio encoding such as MP3.
I’d argue that “personhood” is rarely what these things actually care about—it’s just a cheap-to-measure proxy for “likelihood of conversion to sale” or “amount I’d get paid for an ad” or the like. A bot that can enter into contracts and is more likely than a real person to make a purchase would be welcomed, but there are few of them and there’s no good test of it.
For actually valuable things, a bot could just pay humans to pass the captcha and all would be well. Shadier bots could man-in-the-middle pretty easily by passing the captcha through to visitors of their own cat-picture site.
For implementation, it’s worth looking at the OAuth specs and the common federated-authentication systems that Google, Facebook, and a number of other sites provide—those do NOT assert human-ness; they assert authenticated account identity, but for most uses that’s a better proxy anyway. In cases where it’s not, you could build a provider that uses OAuth to assert humanity, using whatever verification it likes.
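To make “a provider that asserts humanity” concrete, here’s a minimal sketch of the relying-party side, assuming a hypothetical provider that exposes standard OAuth 2.0 endpoints plus an invented `human_verified` claim (the URLs and the claim name are mine, not any real service’s):

```python
# Relying-party side of a hypothetical OAuth 2.0 "humanity provider".
# The endpoints and the 'human_verified' claim are invented for
# illustration; the flow itself is the vanilla authorization-code flow.
import requests

TOKEN_URL = "https://humanity-provider.example/oauth/token"  # hypothetical
USERINFO_URL = "https://humanity-provider.example/userinfo"  # hypothetical

def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
    """Standard OAuth 2.0 authorization-code exchange."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def is_verified_human(access_token):
    """A plain identity provider stops at 'this is account X'; a humanity
    provider would add a vetted claim on top of the same plumbing."""
    resp = requests.get(USERINFO_URL,
                        headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    return resp.json().get("human_verified", False)
```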
Interesting take. When I see “agenty” used on this site and related blogs, it usually seems to map to something like self-actualization or perceived locus of control, which are more psychological framings. I’d not thought much about how different (or similar) it is to “agent” in decision-theoretic and game-theoretic usage, which is not about the feeling of control but about behavior selection according to legible reasoning.
I think there’s a contradiction here. Idealized game-theoretic agents are the OPPOSITE of what we call “agenty behavior”. Executing a knowable strategy is the trivial part. Agenty-ness in the human rationalist sense is about cases where the model is too complicated to reflectively know or analyze formally.
This is a new assertion—mazes only occur in monopolies? And I guess the answer for why people would participate in the maze is that they only happen in labor monopsony conditions? It’s possible, in which case the solution is simpler (to state; not always to do): break up the monopoly. I don’t think that’s what Zvi and others are claiming, though (except maybe in the finance industry, which may be an effective monopoly on employment: there are no options which aren’t mazes), and it doesn’t match my experiences or second-hand stories of acquaintances close enough that I’ve gotten details. Even in cases where it _is_ currently a monopoly, you have to answer WHY there are no competing options to do it better and more pleasantly at the same time. (note: if pressed, I will admit that this paragraph was written mostly for me to introduce the phrase “cultural monopsony”).
Oh, wait—you said “if mazes are inevitable”. They’re not universal today. I don’t know about eventual inevitability, but there are large organizations that are not entirely maze-like, at least not to the degree described in this series. I have indirect experience (not myself, but relatively close friends and/or relatives) with GM, IBM, and the US Navy, and none are all that bad for middle managers—there’s politics, but there’s also actual production and rewarding work impact.
I don’t think I’d claim that “good Moloch” exists or is possible. I make the much weaker claim that Moloch hasn’t actually optimized very far, so you CAN beat ‘em and don’t have to join ’em. For some time, at least—perhaps decades or generations. I really have no prediction about the long-term beyond “today isn’t a stable equilibrium”, but I don’t see anything that overall beats competition as a motive for optimizing on legible dimensions over illegible ones, in a finite universe with infinite potential desires.
How do you deal with the knowledge problem? Typically, the actual, experienced pain in steps 2 and 3 is critical to the safety measures implemented in 3 and enjoyed in 4. The progress is not delayed for all possible problems, but the worst of them get addressed—the incentive to be safe (reduce pain) aligns with the incentive to use the technology at all.
This works for pain (risk that’s short-term enough to measure the cost and incidence of). It’s not clear that it works for rarer but more severe risks (x-risk or just giant economic risk).
In other words, the regulators are part of the technology in the first place—what’s the guarantee (or even the mechanism, to start) that the regulators address only the critical risks?
I think this is the central puzzle on the topic: where is the money coming from to pay the rats who are in (and creating) the mazes? Why wouldn’t customers prefer a more efficient provider?
My current speculation is that there’s a ton of slack at the scale we’re talking about. Mazes aren’t actually less efficient than non-mazes; they just spend the slack on unpleasant things rather than pleasant ones. To the extent this is true, my advice will actually reduce overall slack—the winners will still have to work harder and longer than they’d like. But they’ll enjoy it (both the work and the remaining slack) more. So, less overall non-work energy, but better ability to use it for non-work purposes.
Moloch still wins in the end, as eventually you have to compete with other hard-working non-maze-waste orgs. But that can take a long time, and the ramp is far more pleasant.