I think the disparity in number of words is proportionally so large that this method won’t work. The (small) hypothetical set of dolphin words wouldn’t match to a small subset of English words, because what’s being matched is really the (embedded) structure of the relationship between the words, and any sufficiently small subset of English words loses most of its interesting structure because its ‘real’ structure relates it to many words outside that subset.
Suppose that dolphins (hypothetically! counterfactually! not realistically!) use only 10 words to talk about fish, but humans use 100 words to do the same. I expect you can’t match the relationship structure of the 10 dolphin words to the much more complex structure of the 100 human words. And no subset of ~10 of those 100 English words forms a meaningful vocabulary that humans could actually use to talk about fish.
The approach of the linked article tries to match words meaning the same thing across languages by separately building a vector embedding of each language corpus and then looking for structural (neighborhood) similarity between the embeddings, with an extra global ‘rotation’ step mapping the two vector spaces on one another.
So if both languages have a word for “cat”, and many other words related to cats, and the relationship between these words is the same in both languages (e.g. ‘cat’ is close to ‘dog’ in a different way than it is close to ‘food’), then these words can be successfully translated.
But if one language has a tiny vocabulary compared to the other one, and the vocabulary isn’t even a subset of the other language’s (dolphins don’t talk about cats), then you can’t get far. Unless you have an English training dataset that only uses words that do have translations in Dolphin. But we don’t know what dolphins talk about, so we can’t build this dataset.
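For concreteness, the global ‘rotation’ step can be sketched as an orthogonal Procrustes alignment between two embedding spaces. This is a toy illustration with synthetic vectors, not the linked article’s actual pipeline (which also has to discover the word pairings rather than being given them):

```python
import numpy as np

# Toy demonstration of the 'global rotation' step: align two embedding
# spaces with an orthogonal map (orthogonal Procrustes). All data here
# is synthetic; real systems must also learn which words correspond.
rng = np.random.default_rng(0)

# Source-language embeddings: 50 "words" in 10 dimensions.
X = rng.normal(size=(50, 10))

# Pretend the target language has the same relational structure,
# just expressed in a rotated coordinate system.
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # a random orthogonal map
Y = X @ Q

# Recover the rotation: W = argmin ||XW - Y||_F over orthogonal W,
# solved in closed form via the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # the spaces align almost exactly
```

The point of the toy setup is exactly the objection above: the closed-form alignment works because both spaces share the same internal structure. If one side is a tiny vocabulary whose structure isn’t a sub-structure of the other, there is no rotation to find.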
Also, this is machine learning on text with distinct words; do we even have a ‘separate words’ parser for dolphin signals?
I think you are absolutely right.
I’m not quite that sure I’m right. (I was genuinely asking about the mechanism, not claiming there isn’t one!) I am not an expert, and there are other epidemics that die out without having infected most of the population, such as the seasonal flu and the common cold, and I don’t know all of their causes, some of which might apply here.
The end result will be fewer casualties but a longer pandemic.
It could be worse; ‘fewer overall casualties’ relies on reasonable but unproven assumptions:
- Reliable natural immunization, i.e. people won’t (often) catch it twice
- Few or no mutations that act as a ‘second wave’, or in the extreme case recur like the seasonal flu every year
- (Most) people with light or no symptoms don’t end up with long-term complications, or a persistent virus that can reactivate later
Moreover, the worst of the pandemic, or at least of this first wave, will be over in 2-3 months, as long as the containment measures stay in place.
What’s the mechanism behind the slowing growth? When we let up on our measures (quarantine etc) why will the growth not speed up again, until most of the population have been exposed?
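For reference, the textbook mechanism is susceptible depletion: in a simple SIR model, growth slows as the pool of people who can still be infected shrinks, and the epidemic ends with some fraction never infected at all. A minimal sketch, with illustrative parameters not fitted to any real disease:

```python
# Minimal discrete-time SIR model. Growth slows as susceptibles are
# depleted, and even with R0 = 3 the epidemic burns out before the
# entire population has been infected.
# Parameters are illustrative, not fitted to any real disease.
beta, gamma = 0.3, 0.1          # infection and recovery rates (R0 = 3)
s, i, r = 0.999, 0.001, 0.0     # susceptible, infected, recovered fractions

for _ in range(1000):
    new_infections = beta * s * i
    recoveries = gamma * i
    s -= new_infections
    i += new_infections - recoveries
    r += recoveries

print(round(s, 3))  # some fraction remains never-infected
```

Whether this mechanism, as opposed to seasonality or behavior change, is what ends a given real epidemic is exactly the open question here.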
If you don’t mind the expense, you might want to consider an electric bike; I’ve found them just as fun to ride as regular ones. You can set the assist level low enough to exert serious effort when you want to, or turn the motor off entirely, and conversely set it high if you’re tired or your knees hurt.
I found a dropper post to be a great help with that. It’s much easier to find the right height while riding, without having to dismount to adjust it. And anecdotally, it sometimes feels better to adjust it up or down by 5-10 millimeters, perhaps because of different clothes, shoes, posture, or surface grade.
Note: even the cheapest dropper posts cost around 100 euros (from a cursory Google search). People who aim for cheap bikes often don’t consider them. If you can afford it, consider whether it would be a small investment in your comfort and longer-term health.
I don’t see how that is applicable.
In the first case, to avoid the penalty of being fined, you pay taxes.
In the second case, to avoid the penalty of being taxed, you don’t donate.
If I allow you to donate without being taxed, it doesn’t follow that you will donate. Maybe you don’t want to donate to begin with, or not unless everyone else does as well. That’s the model the OP assumes.
Tax rates on non-donation gifts (= marginal income taxes of the non-rich) are “only” a few tens of percent. For the OP’s model to work, he had to assume a ratio of 1:1,000,000 between the value to a noble of keeping versus donating money. That’s as if there were a 99.9999% tax rate on donations! If there were such a tax rate, then making donations tax-free would certainly stimulate a lot of donations. But as it is, under the OP’s general assumptions, tax rates of ~30% should not matter much.
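The conversion behind that equivalence, spelled out with the comment’s own numbers: if a noble values keeping $1 as much as donating $1,000,000, then a donated dollar delivers only one millionth of the value of a kept dollar, which is what a near-total tax would do.

```python
# If keeping $1 is worth as much to the donor as donating $1,000,000,
# a donation is "taxed" (in value terms) at 1 - 1/1,000,000.
ratio = 1_000_000
equivalent_tax = 1 - 1 / ratio
print(f"{equivalent_tax:.6%}")  # 99.999900%
```

Against that, an ordinary ~30% tax rate barely moves the relative attractiveness of donating at all.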
I don’t understand your point. Paying taxes or not is not related to whether and how much other people also make charitable deductions. Bezos donating less or more doesn’t influence Gates donating less or more. What is the coordination mechanism?
Thanks, that’s informative.
One thing I would like to figure out is whether this can be explained by businesses restructuring, so that some of the rich people who used to be owners receiving dividends are now company executives receiving salaries, salaries that are still set mostly by themselves to benefit themselves, out of proportion to the value of their work to the company. Directors and board members often also draw salaries, again for very little work in most cases.
These are things that might be colloquially called ‘capital’. Jeff Bezos has a total compensation of 1.6 million; that is indeed a tiny part of his net worth, but I still think of it as “Jeff Bezos is a capitalist making money from the successful business he owns”, not as “Jeff Bezos is being paid for his talents as a CEO”. I don’t care about the distinction between the income he gets from Amazon dividends, his shares, or his salary as CEO. But then I’m not an economist; perhaps these are really significant differences that I should care about.
That doesn’t seem to be enough to explain the rich not voting in the past to increase the marginal tax rate (as a few of them are now calling for). Many different tax bills have been proposed over history; this doesn’t seem like an idea nobody thought of until now.
It’s likely that the rich (or people in general) don’t trust the government with their money, don’t believe it would be spent nearly as effectively or beneficially as pure redistribution, and may entirely oppose some of the government’s uses of tax money and not want to fund them.
In that case, what we need is a bill that proposes a special tax on the rich whose proceeds go only and directly towards redistribution, or some sort of universal income which is not sufficient to live on but also doesn’t count as ‘income’ for ordinary taxation or disqualify the poor from social services. And such a thing is plausibly hard to think of, draft, and build enough support behind.
Also, I would claim the charity tax deduction already is such a coordination mechanism.
It’s not a coordination mechanism; it doesn’t allow people to commit to giving money if and only if everyone else also gives money, as a tax does. Even if giving money was free (untaxed), the OP’s coordination problem would remain.
That’s a good point. Then Wei_Dai’s question becomes more important: why don’t we see other coordination mechanisms in this space, besides forcible taxation? And why don’t rich people disproportionately vote in favor of more progressive taxes on themselves?
After all, if all we can learn from this post is that rich people don’t in fact have the posited preference, so this model doesn’t apply, then it’s not very interesting.
I think this probably isn’t right—e.g. capital income is a minority for the top 1% of earners in the US today, and the situation is even starker for global inequality.
That’s surprising to me. Where does most of the income of rich people come from, then? Can you point me to some relevant resource?
Why would you suppose that?
Exactly for the reason you give yourself—we now change our behavior and our environment on much shorter timescales than evolution operates on, due in large part to modern technology. We have a goal of circumventing evolution (see: this post) and we modify our goals to suit ourselves. Evolution is no longer fast enough to be the main determinant of prevailing behavior.
In the ancestral environment of pre-agricultural societies, these behaviors you describe line up with maximizing inclusive genetic fitness pretty well.
We know almost nothing about most relevant human behavior from before the invention of writing. Did they, for example, consume a lot of art (music, storytelling, theater, dance)? How much did such consumption correlate with status or other fitness benefits, e.g. through conspicuous consumption or advertising wealth? We really don’t know.
Pessimistic errors are no big deal. The agent will randomly avoid behaviors that get penalized, but as long as those behaviors are reasonably rare (and aren’t the only way to get a good outcome) then that’s not too costly.
Also, if an outcome really is very bad, evolution has no reason to limit the amount of suffering experienced.
Getting burned is bad for you. Evolution makes it painful so you know to avoid it. But if strong and extreme pain result in the same amount of avoidance, evolution has no reason to choose “strong” over “extreme”. In fact, it might prefer “extreme” to get a more robust outcome.
And that’s how we get the ability to inflict the kind of torture which is ‘worse than death’, and to imagine a Hell (and simulations thereof) with infinite negative utility, even though evolution doesn’t have a concept of a fate being “worse than death”—certainly not worse than the death of your extended family.
But optimistic errors are catastrophic. The agent will systematically seek out the behaviors that receive the high reward, and will use loopholes to avoid penalties when something actually bad happens. So even if these errors are extremely rare initially, they can totally mess up my agent.
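The asymmetry can be seen in a toy simulation (entirely synthetic numbers, not from the post): an argmax agent systematically lands on the rare over-rewarded actions, while the equally rare under-rewarded ones are simply never chosen.

```python
# Toy model of the pessimistic/optimistic asymmetry: an agent choosing
# the highest-reward action out of many. True values are all 0; a few
# actions carry a large reward error, positive (optimistic) or
# negative (pessimistic). Numbers are illustrative only.
n_actions = 10_000
true_value = [0.0] * n_actions
reward = true_value[:]

# Rare errors: 10 actions over-rewarded, 10 under-rewarded.
for a in range(10):
    reward[a] = +100.0        # optimistic errors
for a in range(10, 20):
    reward[a] = -100.0        # pessimistic errors

# A reward-maximizing agent picks the argmax of the (flawed) reward...
chosen = max(range(n_actions), key=lambda a: reward[a])

# ...so it lands on an optimistic error with certainty, while the
# pessimistic errors cost nothing: those actions are just avoided.
print(reward[chosen], true_value[chosen])  # 100.0 0.0
```

The under-rewarded actions barely change behavior at all; the over-rewarded ones capture it completely, even at 0.1% frequency.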
In other words, make sure your agent can’t wirehead! And from evolution’s point of view, not spending your time promoting your inclusive genetic fitness is wireheading.
We didn’t have the technology to literally wirehead until perhaps very recently, so we went about it the long way round. But we still spend a lot of time and resources on e.g. consuming art, even absent signalling benefits (like watching TV alone at home). And evolution doesn’t seem likely to “fix” that given some more time.
We don’t behave in a “Malthusian” way, investing all extra resources in increasing the number or relative proportion of our descendants in the next generation. Even though we definitely could, since population grows geometrically. It’s hard to have more than 10 children, but if every descendant of yours has 10 children as well, you can spend even the world’s biggest fortune. And yet such clannish behavior is not a common theme of any history I’ve read; people prefer to get (almost unboundedly) richer instead, and spend those riches on luxuries, not children.
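The arithmetic behind “population grows geometrically”, with hypothetical figures (a $200B fortune and $100,000 to raise each child, both assumptions of mine for illustration):

```python
# At 10 children per descendant per generation, descendants grow as
# 10**n, so even an enormous fortune is exhausted within a few
# generations. All figures are hypothetical.
cost_per_child = 100_000           # assumed cost of raising one child, in dollars
fortune = 200_000_000_000          # roughly the largest modern fortunes

generation = 0
cumulative_cost = 0
# Keep funding the next generation while the fortune still covers it.
while cumulative_cost + cost_per_child * 10 ** (generation + 1) <= fortune:
    generation += 1
    cumulative_cost += cost_per_child * 10 ** generation

print(generation)  # 6 -- the money runs out after only six generations
```

So a fully “Malthusian” strategy would consume any fortune in well under two centuries, which makes its historical absence the interesting observation.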
The immediate example that comes to mind: when Richard Stallman was canceled in October, some people feared he was homeless or in danger of becoming homeless. I remember reading a post about this on Eric Raymond’s personal blog, which he has since apparently deleted or hidden. Part of the information was in posts on Stallman’s own blog, stallman.org, which seems to be down; it is also referenced in this reddit thread.
I would like to think that RMS didn’t end up homeless, or not for more than a few days, since there must be many people who would give him donations if it came to that (and if he would accept them). But there has been no (very) public announcement that he is all right, for understandable reasons. The list of people and organizations who denounced him was impressively long, regardless. (I mean the ones like GNU and MIT, not the professional denouncers who decided to attack him.)
The framing of “nobles” and “peasants” distracts me from your question; it carries connotations that you might want to either clarify and endorse, or avoid by changing your terminology.
Real-life nobles don’t produce 10,000x value; they extract value from peasants, by force of arms and law and custom. It makes no sense to redistribute wealth by taxing everyone’s income if the nobles get their income by taxing the peasants; just stop the nobles from extracting so much value.
Some of the modern super-rich do generate disproportionately high value, e.g. from high-risk bets they made to build innovative companies. But most of their income still comes from capital and owning the tools of production and all that (citation required). And this influences the moral calculus for a lot of people. The reason for taking some of their property (income) is not just that most people want to do it or that someone else would enjoy it much more, it’s that it shouldn’t be theirs to begin with.
A terminology of “nobles” and “peasants” implies to me the idea that most or all of the nobles’ (the modern rich’s) income is extracted from the peasants (everyone else), enabled by the same state that then taxes them. Did you intend or endorse this view? If not, or if you think it’s irrelevant to the thought experiment, do you agree that the framing of “nobles” and “peasants” distracts from the issue? It does for me.
This post is well written and not over-long. If the concepts it describes are unfamiliar to you, it is a good introduction. If you’re already familiar with them, you can skim it quickly for a warm feeling of validation.
I think the post would be even better with a short introduction describing its topic and scope, but I’m aware that other people have different preferences. In particular:
There are more than two ‘cultures’ or styles of discussion, perhaps many more. The post calls this out towards the end (apparently this is new in v2).
The post gives two real examples of Combat Culture, and only one made-up scenario of Nurture Culture. It does not attempt to ground the discussion in anything quantitative—how common these cultures are, what they correlate with, how to recognize or test for them, how gradually they may shade into each other or into something else altogether.
I don’t want to frame these as shortcomings; the post is still useful and interesting without them!
This post raises some reasonable-sounding and important-if-true hypotheses. There seems to be a vast open space of possible predictions, relevant observations, and alternative explanations. A lot of it has good treatment, but not on LW, as far as I know.
I would recommend this post as an introduction to some ideas and a starting point, but not as a good argument or a basis for any firm conclusions. I hope to see more content about this on LW in the future.