Most proposals I’ve heard of use a graduated income tax to pay for the UBI. This essentially means that people making more than X don’t actually get a UBI. (Or rather, they receive $1000, but they also pay $1000 in taxes for it, so it’s a wash).
How expensive this is depends on what value of X you pick.
The advantage of this over the status quo is avoidance of welfare cliffs and generally reduced accounting by not making people prove that they’re poor.
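To make the arithmetic concrete, here’s a quick sketch of how that works out (my own illustration, not from any particular proposal: the $1000 figure is from above, but the threshold value and the linear phase-in are made-up assumptions just to show the shape):

```python
# Illustrative sketch: a flat UBI funded by an extra graduated tax that
# claws the benefit back as income rises. All numbers are made up.

UBI = 1000          # monthly UBI, in dollars (from the example above)
THRESHOLD_X = 5000  # monthly income at which the UBI is fully clawed back (assumed)

def extra_tax(income):
    """Extra tax introduced to fund the UBI: phases in linearly,
    hitting the full $1000 at THRESHOLD_X and staying flat above it."""
    return min(UBI, UBI * income / THRESHOLD_X)

def net_transfer(income):
    """UBI received minus the extra tax paid for it."""
    return UBI - extra_tax(income)

for income in [0, 2500, 5000, 10000]:
    print(income, net_transfer(income))
# 0     -> 1000  (gets the full benefit)
# 2500  ->  500
# 5000  ->    0  (receives $1000, pays $1000: a wash)
# 10000 ->    0
```

How expensive the scheme is then mostly comes down to where you set THRESHOLD_X and how steeply the tax phases in.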
FYI, I think Robby’s “Politics is Hard Mode” works better as a more up-to-date “politics is the mindkiller” referent. (edit: although it turns out Scott Alexander raised some interesting points that I somehow missed the last time I read the post)
Ooh, I like this (while being aware that there’s a decent chance I’d be the sort of person who’d unreflectively do it)
Not 100% sure I grok what philh meant in the first place, but I also want to note that I didn’t mean for my example-from-fiction to precisely match what I interpreted philh to mean. It was just an easily-accessible example from thinking about the show and game theory.
I do happen to also think there are generalizable lessons from that, which apply to both punishment and Pigouvian tax. But that was sort of accidental. (i.e. I quickly searched my brain for the most relevant-seeming fictional example, found one, and it happened to be reasonably apt)
One could implement a monetary tax that involves shame and social stigma, which’d feel more like being punched. One could also have a culture where being punched comes with less stigma, and is a quick “take your lumps” sort of thing. There are benefits and tradeoffs to wielding shame/stigma/dominance as part of a punishment strategy. In all cases though, you’re trying to impose a cost on an action that you want to see less of.
I strongly doubt that your group house has decided “We like you, and that act was right for that situation, but we’re going to punish you so others won’t try it”.
We’ve definitely done things of the form “okay, in this case it seems like the house is okay with this action, but we can tell that if people started doing it all the time it’d start to cause resentment, so let’s basically install a Pigouvian Tax on this action so that it only ends up happening when it’s important enough.”
In a TV show where stakes are life-and-death, the consequences might look like “banishment” and in a group house the consequences are more like “pay $5 to the house”, but it feels like fairly similar principles at play.
You definitely do need different tools and principles as things grow larger and more impersonal, for sure. And I’d definitely like to see a show where the situations getting hashed out are more applicable-to-life than “zombie apocalypse.” But I do think Walking Dead is unusually good at depicting group rationality.
Actually, it occurs to me that I’ve sort of been doing this via fiction.
My group house is currently watching “Walking Dead”, which has a large number of instances of people having to negotiate with each other during high-stakes situations, where people disagree a lot at both the object and meta level. This has led to my house having a bunch of discussions about how the group rationality of the characters holds up, which is (mostly) divorced from considerations of actual real people.
This includes things like “it’s necessary to punish Bob in this situation, even though Bob was object-level-right, because allowing people to act like Bob did willy-nilly would destabilize their fragile society”. And this sort of thing happens at various scales, ranging from places where civilization is just 2 people, to civilization being a small town.
(If you want to consider cases where civilization is millions of people, you’ll need to watch Battlestar Galactica instead)
I agree with this, but with the unfortunate caveat that I think people are most likely to think about when it’s appropriate to harm people precisely when they already have some motivation either to harm someone or to prevent someone from coming to harm.
And I’m not 100% sure the takeaway of “sometimes, at random, think about which circumstances it’s okay to harm people in” is actually better (although I lean towards it).
When IDC is helpful to understanding the point, there’s a splitting effect; the person who already knows IDC gets their understanding strengthened but the person who doesn’t know IDC gets distracted.
Hmm. This makes me think about something like “Arbital style ‘click here to learn the math-heavy explanation, click here to learn a more standard explanation’” thingy, except for “have you practiced this particular introspective skill?”
(I don’t actually think that’ll turn out to be a good idea for various reasons, but seemed a bit interesting)
Mod Note: this comment seems more confrontational than it needs to be. (A couple other comments in the thread also seem like they probably cross the line. I haven’t had time to process everything and form a clear opinion, but wanted to make at least a brief note)
(this is not a comment one way or another on the overall conversation)
Added: It seems the comment I replied to has been deleted.
Note: this post originally appeared in a context without comments on Overcoming Bias. Old comments on this post are over here.
I’d be curious to list the concepts that weren’t familiar—I have a hard time noting which concepts are newer and have some interest in getting some general CFAR style updates more formally merged into the main lesswrong discourse.
We encourage people to post whatever seems interesting to them on their personal blog (which is, basically, the default-submit-post process at the moment). Mods move stuff to frontpage if it seems like a good fit.
Something like that makes sense to me.
There was some discussion about epistemic status as a norm here. Sample size of, like, 2-3. My own take is that this class of epistemic status would be fine if it were rare, but when it’s one of three epistemic statuses, it makes it hard to really grok what epistemic-status-as-a-norm is even doing. (see Magic the Gathering where complicated things are fine as long as they’re not at common)
I do think it makes sense to step back, but in the opposite order (you can’t rederive your entire ontology and goal structure every time something doesn’t make sense—it’s too much work and you’d never get anything done).
“Why am I seeking status?” and “Why are EA and/or EA organizations the right way to go about A?” seem like plausible steps-backwards to take given the questions toon is raising here.
“Why altruism?” is a question every altruist should take seriously at least once, but none of the dilemmas raised in toon’s post seem like the sort of thing that warrants questioning the entire underpinning of your goal structure. (I realize if you think the entire structure is flawed, you’re going to disagree, but I think it’s strongly meta-level important for people to be able to think through problems within a given paradigm without every conversation being about re-evaluating that paradigm)
Happy to talk more in a different top-level post, but not really interested in talking more in this particular comment-section.
Edit: I’ve self-downvoted now that this thread has pretty much reached its conclusion (or at least, I don’t think it makes sense to continue it further)
Now, we point out that even if the new bundle is strictly better there is still a problem, because it being more better for other people means the price goes up more than the value proposition improves.
Doesn’t strictly better include price? I did a double-take on this paragraph and had to process that you meant “strictly better apart from price.”
Additional nitpick/minor-grump about use-of-epistemic-status-as-weird-poetry-thing.
There are a bunch of important considerations here that I’m glad to see a comprehensive writeup of. Not 100% sure how the moral/economic calculus all plugs together (although it pretty clearly crosses the threshold of “yes, build more and remove/simplify at least some of the regulations”). But regardless, I agree that many of these points should be part of one’s model.
From a purely selfish standpoint:
I moved to the Bay Area in part because its current bundle was a significant improvement in several important dimensions that would definitely get worse if a lot more people moved there (specifically, access to nature and nature-esque things). I’m very glad that there isn’t good public transportation to the Marin Headlands and Mill Valley, because then those places would probably become terrible. I’m (selfishly) glad some combination of weird social forces and (apparently, I learned last week on Slatestar) weird regulations cause every house in my neighborhood to have gardens. I’m glad I can look up at night and see stars.
I’m happy to have had to pay extra money and weird networking costs to get that.
(Again, not saying this is good in the cosmic or medium-cosmic sense – it being good for me depended on me having a particular collection of fortunate circumstances. But, it’d still be bad for me personally if those things went away)
I think there’s a different sort of conversation where this sort of comment might be helpful (I think there’s plenty of perspectives from which EA, or “A”, doesn’t make sense, that are worth talking about). But it feels a bit outside the scope of this conversation.
(Not 100% sure about toonalfrink’s goals for the conversation)
What does synergy mean to you?
I think the original stipulation was not “how much would you give to a program saving X, Y or Z birds?”, but “how much would you pay to save X, Y or Z birds?”, in which the same fixed amount of money is explicitly saving different numbers of birds.