Blog at thelimelike.wordpress.org
Closed Limelike Curves
I should probably expand on this—it can make sense to have a mechanism or decision-making rule that’s inefficient or irrational for reasons of incentive compatibility, information or computational limits, or other practical constraints. That said, we should be very explicit about describing these as mechanisms, institutions, or collective decision rules and not as preferences. These are second-best tools for governance that lack basic properties you’d expect of human preferences. Actually, as Harsanyi proved back in the 1950s, the unique social choice function—up to affine transformations—which preserves individual rationality (i.e. really can be called a group’s “preferences”) is the utilitarian rule. For the same reason I’d reject calling this “geometric rationality” rather than one of the common names already used for this technique (e.g. the proportional-fair rule, Nash bargaining—or just geometric maximization for the whole family of methods).
If we’re not very clear when we describe this, it confuses the hell out of people who start to think these are alternative, contradictory formulations of rationality, and then use these arguments to reject VNM-rationality.
The argument here is basically a special case of Harsanyi’s utilitarian theorem, which shows the only rational social-choice rule is the utilitarian (family of) rules.
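To make the contrast concrete, here’s a toy numeric sketch (the numbers and utility functions are my own illustration, not from the comments above) of how the utilitarian rule and the Nash/proportional-fair rule can disagree: the first maximizes the sum of utilities, the second maximizes the product.

```python
# Divide a fixed budget of 10 between two people, where person 1 gets
# utility u1(x) = x from x units and person 2 gets u2(y) = 2y from y units.

# Utilitarian rule: maximize u1 + u2 over allocations x in {0, ..., 10}.
best_sum = max(range(11), key=lambda x: x + 2 * (10 - x))

# Nash / proportional-fair rule: maximize u1 * u2 (interior allocations only,
# since the product is 0 at the boundary).
best_prod = max(range(1, 10), key=lambda x: x * 2 * (10 - x))

print(best_sum)   # utilitarian gives everything to the higher-utility person
print(best_prod)  # the Nash rule splits the budget evenly
```

The point of the sketch is just that these are different aggregation rules with different outputs, which is why conflating the geometric family with “rationality” invites confusion.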
[I wrote this blog post as part of the Asterisk Blogging Fellowship. It’s substantially an experiment in writing more breezily and concisely than usual, and on a broader topic. Let me know how you feel about the style.]
Here’s a concise version:
I wrote this as part of the Asterisk Blogging Fellowship. It’s an experiment in writing shorter posts on broader topics. Let me know how you feel about it.
I read the whole policy change as meaning this old policy (filtering for quality, not LLM use) is about to end.
I mean I think so; I have never in my life heard anyone say good things about preservatives in food until now.
The model you’ve described (hidden quality differences) is a huge part of it, yes. I’ll try and find the paper, but in general nominal/market exchange rates tend to be stronger predictors of most objective, cross-comparable outcomes than indices that try to control for cost of living (PPP). If two goods/services that look equivalent are selling for different prices, it’s usually (though not always) because there’s some difference you’re not able to measure.
To be more precise—I don’t think it’s logically coherent to apportion voting power between states according to wealth, but between people by population (i.e. equally). Either you want to upweight high taxpayers or you don’t.
There are countries that use equal-population districts for both houses, but at that point the bicameralism feels like it’s just copied from the US without strong reasoning (e.g. Italy, Japan, South Korea).
A clever adversary who knows your preferences can now exploit this. They offer you a sequence of trades: pay a small amount to switch from plan-A to plan-B before the coin flip (because ex ante you prefer B in context), then after the coin lands heads, pay a small amount to switch from B to A (because ex post you prefer A in isolation). You have paid twice and ended up exactly where you started.
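The two trades described above can be written out as a minimal money-pump sketch (the starting wealth and fee are hypothetical numbers of my own):

```python
eps = 0.01          # small fee the adversary charges per switch
wealth = 100.0      # hypothetical starting wealth
plan = "A"          # the agent starts holding plan A

# Before the coin flip: the agent prefers B in the compound context,
# so it pays eps to switch A -> B.
wealth -= eps
plan = "B"

# ...the coin lands heads...

# After heads: the agent prefers A in isolation,
# so it pays eps again to switch B -> A.
wealth -= eps
plan = "A"

print(plan, wealth)  # holding plan A again, but 2*eps poorer
```

The agent ends exactly where it started, minus two fees, which is the sense in which such preferences are exploitable.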
OK, I see you’ve confused Independence of Irrelevant Alternatives with VNM-independence here. I don’t want to be too harsh, because this is probably my fault, but it basically undermines your core argument, since you haven’t actually refuted the Dutch book for VNM-independence. In short, c. 2022:
I made an edit describing VNM-independence as a probabilistic version of IIA in the Wikipedia article on the VNM theorem. By this I meant that they share a common structure (both are about ignoring details that obviously shouldn’t affect the outcome), but several people misunderstood it as saying VNM-independence encompasses IIA (it doesn’t).
In this StackExchange comment, I answered the question in the OP’s title without explaining that the body of their question confused the two axioms.
I’ve since fixed both of these, but for a while these were two of the first things that showed up when you Googled the topic, so chances are you read them.
There are several other major misunderstandings in this post that trace back to Wikipedia articles that I, unfortunately, haven’t had time to correct. In general: Wikipedia articles and online resources on rational choice or decision theory are not reliable. I do not recommend reading them. They consistently make several basic errors that show up in this post, like conflating descriptive and prescriptive validity, confusing expected value with expected utility, and citing confused philosophers as if they represent substantial viewpoints (when their arguments contain mathematical errors that have been repeatedly pointed out by experts).
I recommend plugging this into Gemini and asking it to explain all the mistakes in this post, which it should be able to do; it did a good job explaining the difference between IIA and VNM-independence when I tried it out.
Bicameral systems can respect one person one vote, and they even do in the 49 states with bicameral legislatures.
It’s certainly possible, but IMO pointless: the two houses will be effectively the same.
You have to write these articles with primary sources, but they hate those; see one of their favorite pieces of jargon, WP:PRIMARY.
Quick comment that this hasn’t been as big an issue IME, so maybe this changed. Nowadays there’s a stronger precedent that academic sources tend to override non-academic ones.
Or rather, it’s not explained by an honest+rational agents model (people are either overestimating or lying).
Also worth noting there’s no fixed reference point—people are being asked to compare themselves to the overall population. That means another way of looking at this is that people have a bias toward consistently overestimating how dumb other people are.
Yep! Like I mentioned, this has the problem that state borders are fairly arbitrary and let you “launder” one group’s wealth into another group’s voting power.
But I just remembered—before Reynolds v. Sims established “one man, one vote”, New Hampshire used this exact system. From 1784 through 1964, districts were apportioned based on taxable wealth.
“people[...] update away from that bias based on their competence or lack thereof, but they don’t update hard enough”
I don’t think this is a bias; I’d actually take it as a striking sign of rationality and a great example of Friedman’s billiard player: people can subconsciously perform incredibly complex calculations to optimize for some goal (even if they’re incapable of understanding or reasoning about them explicitly).
Here’s an intuition pump: I’m going to measure your floorbitude and tell you your percentile. What do you think it will be? (...) Answer: 50%. You have no reason to think you’re more or less floorbacious than the average person, after all.
In general this pattern is exactly what you’d expect from a rational actor model, where people combine the information they have (e.g. IQ tests or grades) with a prior distribution (“my IQ is probably around 100, like most people’s”).
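The “update, but not all the way” pattern falls out of standard precision-weighted Bayesian updating. Here’s a sketch of that calculation with illustrative numbers of my own (a Gaussian prior of IQ ~ N(100, 15²) and a noisy observed test score):

```python
# Prior belief about one's own IQ: mean 100, standard deviation 15.
prior_mean, prior_var = 100.0, 15.0 ** 2

# Evidence: an observed test score of 130, with measurement noise sd 10.
score, noise_var = 130.0, 10.0 ** 2

# Precision weighting: the weight on the evidence is the fraction of total
# precision it contributes.
w = prior_var / (prior_var + noise_var)

# The posterior mean lands strictly between the prior and the evidence.
posterior_mean = prior_mean + w * (score - prior_mean)
print(round(posterior_mean, 1))
```

Someone who scores 130 but reports believing they’re around 120 isn’t failing to update; they’re shrinking the noisy signal toward the prior, exactly as a rational actor should.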
(People overestimating their IQ does seem like an actual bias, though.)
I feel like it might be a Great Filter the other way. I vaguely remembered hearing something in school about how the asteroid kicked up a bunch of dust into the sky which created a long, cold winter that starved all the dinosaurs.
Then I listened to a paleontologist get asked this question. The conversation went something like:
>Well, in the Americas, every animal on the surface would’ve been incinerated because literally everything was on fire.
>What about the eastern hemisphere?
>Oh, well over there it was RAINING FUCKING LAVA.
We had no right surviving that shit.
Outside of digital electronics, time is continuous
Oh, I see the misunderstanding! So, what you can do here is fix the issue by taking the limit as the time step goes to 0, or equivalently, saying
that T (Temperature) directly affects both dT/dt and R (Regulator), while R affects dT/dt. What we need to do here is remember that both T and R are (possibly continuous) functions of t (time), and think of these as showing the system at one point in time. The important fact about this system is that R can’t affect T directly/instantaneously (which would violate causality), but it can affect dT/dt (or, in discrete time, T at the next time step). For example, your thermostat might control how much electricity travels through a coil to heat your room. However, it can’t instantly bump the temperature up by 20 degrees. This would be a logical contradiction—R clearly can’t force T to be 20 degrees higher than T. T = T + 20 just doesn’t work.
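A minimal discrete-time simulation makes the causal structure explicit (all the constants here are made up for illustration): the regulator reads the current temperature and sets the heating power, which only influences the temperature at the *next* step, never the current one.

```python
setpoint = 20.0   # target temperature the thermostat aims for
T = 10.0          # starting room temperature
heat_gain = 2.0   # degrees added per step at full heating power
leak = 0.1        # fraction of the gap to the outdoors lost per step
outside = 5.0     # outdoor temperature

for t in range(100):
    # The regulator R is a function of T at time t...
    R = 1.0 if T < setpoint else 0.0
    # ...but it only affects T at time t+1, via the update rule
    # (the discrete-time analogue of influencing dT/dt):
    T = T + heat_gain * R - leak * (T - outside)

print(round(T, 1))  # settles into a band around the setpoint
```

Note there is no line anywhere that writes `T = T + 20` at a single instant; the regulator steers the trajectory of T rather than teleporting it.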
At the subnational or individual level? Apportioning e.g. states this way has the issue of being arbitrary and sensitive to the specific borders you draw, but was proposed by Madison IIRC. You can “gerrymander” these states (intentionally or unintentionally) by packing a lot of high-productivity citizens into a state with a majority of low-income voters.
At the individual level, weighted setups like this were very common in the 19th century. I know Prussia/Germany, Belgium, Austria, and France used class-based voting systems of this sort. These systems were mostly swept away in the late 19th and early 20th centuries under pressure from liberals and social democrats (sometimes peacefully, as in Belgium, and sometimes by revolution, as in Germany).
Of these, the Prussian three-class franchise was the “purest”/closest to what you describe: people were assigned one of 3 classes based on how much they paid in taxes, with all three groups paying the same amount overall. (Taxes were roughly proportional to income.) Each of these groups received the same number of representatives.
Everyone thinks that, but not necessarily! There’s a big body of research (e.g. here and meta-analysis here but let me know if you want more examples) showing that training people to suppress their angry/depressive/anxious/etc. thoughts is generally helpful: it reduces negative thoughts and emotions, even in the long run (followups after several months). Similarly, encouraging an angry person to “let it all out” or “vent” tends to make them feel worse/angrier (meta-analysis here, researchers discussing their findings here).
That said, it depends what specifically you mean—you want to encourage people to avoid those thoughts entirely, not just train them to avoid saying them out loud (expressing negative thoughts has ambiguous/inconsistent effects on well-being depending on lots of things like frequency, who you express them to, etc.). I’m not sure how you’d separate those out in an LLM, where the thinking is the speech.
“Output” is a very common word in non-programming contexts, and I think only programmers will associate it with computer science. (My first thought is outputs of a production process.) It’s a very simple Anglish combination (out + put → “thing that is put out”).
More importantly, I think this is missing a big part of this post’s point, which is that how hard it is to understand a text has very little to do with how difficult the individual words are. To quote Randall Munroe:
“I’ve noticed you physics people can be a little on the reductionist side.”
“That’s ridiculous. Name ONE reductionist word I’ve ever said.”
You can produce a 5-word passage that only uses words from the 850 BASIC English list, but is still hard to understand. Example: “the old man the boat”.
That sounds like something someone with low morale would do.