I went for a walk last night, and when I looked up at the sky, I saw something I had never seen before: a bright orange dot, like a star, but brighter and more orange than any star I had ever seen. “No… that can’t be”- but it was: I was looking at Mars, that other world I had heard so much about, thought so much about.
I realized then that, until that night, I had never seen Mars with my own two eyes- one of the closest worlds that humans could, with minimal difficulty, one day make into a new home.
It struck me then, in a way I had never felt before, just how far away Mars is. I knew it in an abstract sense, but seeing this little dot in the distance- a dot I knew to be an object larger even than the Moon, yet seeming so small in comparison- made me realize, in my gut, just how far away this other world is. It was like standing on top of a mountain, seeing small buildings on the ground far below, and realizing that those small buildings are actually skyscrapers far away.
And yet, as far away as Mars was that night, it was so bright, so apparent, precisely because it was closer to us than it normally is- usually this world is even further from us than it is now.
Religion isn’t about believing false things. Religion is about building bonds between humans, by means including (but not limited to) costly signalling. It happens that a ubiquitous form of costly signalling used by many prominent modern religions is belief taxes (insisting that the ingroup profess a particular, easily disproven belief as a reliable signal of loyalty), but this is not necessary for a religion to successfully build trust and loyalty between members. In particular, costly signalling must be negative-value for an individual (before the second-order benefits from the group dynamic), but need not be negative-value for the group, or for humanity. Indeed, the best costly sacrifices can be positive-value for the group or humanity, while negative-value for the performing individual. (There are some who may argue that positive-value sacrifices have less signalling value than negative-value sacrifices, but I find their logic dubious, and my own observations of religion suggest positive-value sacrifice is abundant in organized religion, albeit intermixed with neutral- and negative-value sacrifice.)
The rationalist community is averse to religion because it so often goes hand in hand with belief taxes, which are counter to the rationalist ethos, and would threaten to destroy much that rationalists value. But religion is not about belief taxes. While I believe sacrifices are an important part of the functioning of religion, a religion should avoid asking its members to make sacrifices that destroy what the collective values, and instead encourage costly sacrifices that help contribute to the things we collectively value.
In particular, costly signalling must be negative-value for an individual
That’s one way to do things, but I don’t think it’s necessary. A group which requires (for continued membership) members to exercise, for instance, imposes a cost, but arguably one that should not be (necessarily*) negative-value for the individuals.
*Exercise isn’t supposed to destroy your body.
If it’s not negative value, it’s not costly signalling. Groups may very well expect members to do positive-value things, and they do—Mormons are expected to follow strict health guidelines, to the extent that Mormons can recognize other Mormons by the health of their skin; Jews partake in the Sabbath, which has personal mental benefits. But even though these may seem to be costly sacrifices at first glance, they cannot be considered costly signals, since they provide positive value.
If a group has standards which provide value, then while it isn’t a ‘costly signal’, it sorts out people who aren’t willing to invest effort.*
Just because your organization wants to be strong and get things done, doesn’t mean it has to spread like cancer*/cocaine**.
And something that provides ‘positive value’ is still a cost. Living under a flat 40% income tax from one government has much the same effect as living under 40 governments which each levy a flat 1% income tax. You don’t have to go straight to ‘members of this group must smoke’. (In a different time and place, ‘members of this group must not smoke’ might have been regarded as an enormous cost, and worked as such!)
*bigger isn’t necessarily better if you’re sacrificing quality for quantity
**This might mean that strong and healthy people avoid your group.
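Whether 40 governments at 1% exactly matches a single 40% tax depends on whether each government taxes the same gross income or only whatever the previous ones left behind. A quick check with a hypothetical $100 income separates the two readings:

```python
income = 100.0

# One government taxing 40% of gross income
flat = income * 0.40

# 40 governments each taxing 1% of the same gross income
parallel = sum(income * 0.01 for _ in range(40))

# 40 governments each taxing 1% of whatever the previous ones left (compounding)
remaining = income
for _ in range(40):
    remaining *= 0.99
compounded = income - remaining

print(f"{flat:.1f} {parallel:.1f} {compounded:.1f}")  # 40.0 40.0 33.1
```

On the gross-income reading the equivalence is exact; on the compounding reading the 40 small taxes come to about 33%, a bit gentler, though the qualitative point stands either way.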
If you know someone is rational, honest, and well-read, then you can learn a good bit from the simple fact that they disagree with you.
If you aren’t sure someone is rational and honest, their disagreement tells you little.
If you know someone considers you to be rational and honest, the fact that they still disagree with you after hearing what you have to say, tells you something.
But if you don’t know that they consider you to be rational and honest, their disagreement tells you nothing.
It’s valuable to strive for common knowledge of your and your partners’ rationality and honesty, to make the most of your disagreements.
If you know someone is rational, honest, and well-read, then you probably don’t know them all that well. If someone considers you to be rational and honest, and well-read, that indicates they are not.
Does newspeak actually decrease intellectual capacity? (No)
In his book 1984, George Orwell describes a totalitarian society that, among other initiatives to suppress the population, implements “Newspeak”, a heavily simplified version of the English language, designed with the stated intent of limiting the citizens’ capacity to think for themselves (thereby ensuring stability for the reigning regime).
In short, the ethos of Newspeak can be summarized as: “Minimize vocabulary to minimize range of thought and expression”. There are two different but closely related ideas here, both implied by the book, that are worth separating.
The first (which I think is to some extent reasonable) is that by removing certain words from the language, which serve as effective handles for pro-democracy, pro-free-speech, pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas. (I think that in the absence of the other techniques Orwell’s Oceania uses to suppress independent thought, such subjects can still be meaningfully communicated and pondered, just less easily than with a rich vocabulary.)
The second idea, which I worry is an incorrect takeaway people may get from 1984, is that by shortening the dictionary of vocabulary that people are encouraged to use (absent any particular bias towards removing handles for subversive ideas), one will reduce the intellectual capacity of people using that variant of the language.
A slight tangent whose relevance will become clear: if you listen to a native Mandarin speaker and compare the sound of their speech to that of a native Hawaiian speaker, there are many apparent differences between the two languages. Mandarin has a rich phonological inventory containing 19 consonants, 5 vowels, and, quite famously, 4 different tones (pitch patterns) applied to each syllable, for a total of approximately 5400 possible syllables, including diphthongs and other multi-vowel syllables. Compare this to Hawaiian, which has 8 consonants, 5 vowels, and no tones. Including diphthongs, there are about 200 possible Hawaiian syllables.
One might naïvely expect that Mandarin speakers can communicate information more quickly than Hawaiian speakers, at a rate of 12.4 bits / syllable vs. 7.6 bits / syllable—however, this neglects the speed at which syllables are spoken: Hawaiian speakers speak much faster than Mandarin speakers, and once this difference in cadence is accounted for, Hawaiian and Mandarin are much closer to each other in speed of communication than their phonologies would suggest.
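The bits-per-syllable figures above are just the base-2 logarithm of the syllable-inventory sizes; a quick check:

```python
import math

# Approximate syllable-inventory sizes quoted above
mandarin_syllables = 5400
hawaiian_syllables = 200

# Upper bound on information per syllable, assuming every syllable
# were equally likely (real speech carries less, since syllable
# frequencies are skewed)
bits_mandarin = math.log2(mandarin_syllables)  # ~12.4 bits
bits_hawaiian = math.log2(hawaiian_syllables)  # ~7.6 bits
print(f"{bits_mandarin:.1f} vs {bits_hawaiian:.1f} bits per syllable")
```

These are upper bounds on the information rate per syllable, which is why cadence can close most of the gap.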
Back to 1984. If we cut the dictionary down to 1/20th its current size (while steering clear of the Thought Police and any bias in the removal of words), what should we expect to happen? One may naïvely think that just as banning the words “democracy”, “freedom”, and “justice” would inhibit people’s ability to think about Enlightenment values, banning most of the words should inhibit our ability to think about most things.
But that is not what I would expect to happen. One should expect compound words to take the place of deprecated words, speaking speeds to increase, and, to accommodate the faster cadence, tricky sequences of sounds to be elided (blurred / simplified), allowing complex ideas to ultimately be communicated at a pace that rivals standard English. And it would be massively easier for non-Anglophones to learn.
If I had more time, I’d write about why I think we nonetheless find the concept of Simplified English to be somewhat aversive: speaking a simplified version of a language becomes an antisignal for intelligence and social status, so we come to look down on people who use simplified language, while celebrating those who flex their mental capacity by using rare vocabulary.
Since I’m tired and would rather sleep than write more, I’ll end with a rhetorical question: would you rather be in a community that excels at signaling, or a community that actually gets stuff done?
Yes, the important thing is the concepts, not their technical implementation in the language.
Like, in Esperanto, you can construct “building for” + “the people who are” + “the opposite of” + “health” = hospital. And the advantage is that people who never heard that specific word can still guess its meaning quite reliably.
we nonetheless find the concept of Simplified English to be somewhat aversive
I think the main disadvantage is that it would exist in parallel, as a lower-status version of the standard English. Which means that less effort would be put into “fixing bugs” or “implementing features”, because for people capable of doing so, it would be more profitable to switch to the standard English instead.
(Like those software projects that have a free Community version and a paid Professional version, and if you complain about a bug in the free version that is known for years, you are told to deal with it or buy the paid version. In a parallel universe where only the free version exists, the bug would have been fixed there.)
would you rather be in a community that excels at signaling, or a community that actually gets stuff done?
How would you get stuff done if people won’t join you because you suck at signaling? :( Sometimes you need many people to join you. Sometimes you only need a few specialists, but you still need a large base group to choose from.
As an aside, I think it’s worth pointing out that Esperanto’s use of the prefix mal- to indicate the opposite of something (akin to Newspeak’s un-) is problematic: two words that mean the exact opposite will sound very similar, and in a noisy environment the meaning of a sentence can change drastically based on a few lost bits of information; it also slows down communication unnecessarily.
In my notes, I once had the idea of a “phonetic inverse”: according to simple, well-defined rules, each word could be transformed into an opposite word, one which sounds as different as possible from the original word, and has the opposite meaning. That rule was intended for an engineered language akin to Sona, so the rules would need to be reworked a bit to get something similarly good for English, but I prefer such a system to Esperanto’s inversion rule.
The other problem is that “opposite” is ill-defined: it requires the other person to know which dimension you’re inverting along, as well as what you consider neutral/zero on that dimension.
While this would be an inconvenience for the on-boarding process for a new mode of communication, I actually don’t think it’s that big of a deal for people who are already used to the dialect (which would probably make up the majority of communication) and have a mutual understanding of what is meant by [inverse(X)] even when X could in principle have more than one inverse.
That makes the concept much less useful though. Might as well just have two different words that are unrelated. The point of having the inverse idea is to be able to guess words right?
I’d say the main benefit it provides is making learning easier—instead of learning that “foo” means ‘good’ and “bar” means ‘bad’, one only needs to learn “foo” = good and inverse(“foo”) = bad, which halves the total number of tokens needed to learn a lexicon. One still needs to learn the association between concepts and their canonical inverses, but that information is more easily compressible.
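The halving can be sketched concretely. The words below are invented for illustration, and the Esperanto-style “mal-” prefix stands in for whatever inversion rule the language actually uses:

```python
# Hypothetical mini-lexicon: store only one word per concept pair.
LEXICON = {"foo": "good", "loma": "big", "tiki": "fast"}  # invented example words

def inverse(word: str) -> str:
    """Stand-in inversion rule (Esperanto-style 'mal-' prefix, for illustration)."""
    return "mal" + word

def meaning(word: str) -> str:
    # Either look the word up directly, or strip the inverse marker and negate.
    if word in LEXICON:
        return LEXICON[word]
    if word.startswith("mal") and word[3:] in LEXICON:
        return "opposite-of-" + LEXICON[word[3:]]
    raise KeyError(word)

print(meaning("foo"))     # good
print(meaning("malfoo"))  # opposite-of-good
```

Six meanings are covered by three lexicon entries plus one rule, which is the compression being claimed.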
“From AI to Zombies” is a terrible title… when I recommend The Sequences to people, I always feel uncomfortable telling them the name, since the name makes it sound like kooky bull****, in a way that doesn’t really indicate what it’s about.
I’m also bothered by the fact that it is leading up to AI alignment and the discussion of Zombies is in the middle! Please change?
I usually just call it “from A to Z”
I think “From AI to Zombies” is supposed to imply “From A to Z”, “Everything Under the Sun”, etc., but I don’t entirely disagree with what you said. Explaining either “Rationality: From AI to Zombies” or “The Sequences” to someone always takes more effort than feels necessary.
The title also reminds me of quantum zombies or p-zombies every time I read it... are my eyes glazed over yet?
Counterpoint: “The Sequences” sounds a lot more cult-y or religious-text-y.
“whispers: I say, you over there, yes you, are you familiar with The Sequences, the ones handed down from the rightful caliph, Yudkowsky himself? We Rationalists and LessWrongians spend most of our time checking whether we have all actually read them, you should read them, have you read them, have you read them twice, have you read them thrice and committed all their lessons to heart?” (dear internet, this is satire. thank you, mumbles in the distance)
Suggestion: if there were a very short eli5 post or about page that a genuine 5 year old or 8th grader could read, understand, and get the sense of why The Sequences would actually be valuable to read, this would be a handy resource to share.
I’m quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates either A) most LW’ers aren’t investing in stocks (which is a stupid thing not to be doing), or B) they are investing in stocks, but aren’t trying to think carefully about what impact their actions have on the world, and on their own future happiness (which indicates a massive failure of rationality).
Even putting this aside, the fact that nobody jumped at the chance to potentially shift a non-trivial (for certain definitions of trivial) amount of funding away from bad organizations and towards good ones (investing which I’m doing primarily as a personal financial strategy) seems very worrying to me. While it is (as ChristianKI pointed out) debatable that the amount of funding I can provide as a single person will make a big difference to a big company, it’s bad decision theory to model my actions as only being correlated with myself; and besides, if the funding were redirected, it would probably go somewhere without the enormous supply of funds Alphabet has, and could very well make an important difference, pushing the margins away from failure and towards success.
There’s a good chance I may change my mind in the future about this, but currently my response to this information is a substantial downward shift in my estimate of how good the LW crowd actually is at using rationality instrumentally.
(For what it’s worth, the post made it not at all clear to me that we were talking about a nontrivial amount of funding. I read it as just you thinking a bit through your personal finance allocation. The topic of divesting and impact investing has been analyzed a bunch on LessWrong and the EA Forum, and my current position is mostly that these kinds of differences in investment don’t really make much of a difference in total funding allocation, so it doesn’t seem worth optimizing much, besides just optimizing for returns and then taking those returns and optimizing those fully for philanthropic impact.)
This seems to be the common rationalist position, but it does seem to be at odds with:
The common rationalist position to vote on UDT grounds.
The common rationalist position to eschew contextualizing because it ruins the commons.
I don’t see much difference between voting because you want others to also vote the same way, or choosing stocks because you want others to choose stocks the same way.
I also think it’s pretty orthogonal to talk about telling the truth for long term gains in culture, and only giving money to companies with your values for long term gains in culture.
eschew contextualizing because it ruins the commons
I don’t understand. What do you mean by contextualizing?
More here: https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms
For what it’s worth, I get frustrated by people not responding to my posts/comments on LW all the time. This post was my attempt at a constructive response to that frustration. I think if LW was a bit livelier I might replace all my social media use with it. I tried to do my part to make it lively by reading and leaving comments a lot for a while, but eventually gave up.
either A) most LW’ers aren’t investing in stocks
Does LW 2.0 still have the functionality to make polls in comments? (I don’t remember seeing any recently.) This seems like the question that could be easily answered by a poll.
It doesn’t; this feature didn’t survive the switchover from old-LW to LW2.0.
While it is (as ChristianKI pointed out) debatable that the amount of funding I can provide as a single person will make a big difference to a big company
My point wasn’t about the size of the company but about whether or not the company already has large piles of cash that it doesn’t know how to invest.
There are companies that want to invest more capital than they have available and thus have room for funding, and there are companies where that isn’t the case.
There’s a hilarious interview with Peter Thiel and Eric Schmidt where Thiel charges Google with not spending the $50 billion it has sitting in the bank that it doesn’t know what to do with, and Eric Schmidt says “What you discover running these companies is that there are limits that are not cash...”
That interview happened back in 2012, but since then Alphabet’s cash reserves have more than doubled despite some stock buybacks.
Companies like Tesla or Amazon seem to be willing to invest additional capital to which they have access in a way that Alphabet and Microsoft simply don’t.
A) most LW’ers aren’t investing in stocks (which is a stupid thing not to be doing)
My general model would be that most LW’ers think the instrumentally rational thing is to invest the money into a low-fee index fund.
Wow, that video makes me really hate Peter Thiel (I don’t necessarily disagree with any of the points he makes, but that communication style is really uncool)
In most contexts I would also dislike this communication style. In this case I feel that it is necessary to get a straight answer from Eric Schmidt, who would rather avoid the topic.
On the contrary, I aspire to the clarity and honesty of Thiel’s style. Schmidt seems somewhat unable to speak directly. Of the two of them, Thiel was able to say specifics about how the companies were doing excellently and how they were failing, and Schmidt could say neither.
Thank you for this reply, it motivated me to think deeper about the nature of my reaction to Thiel’s statements, and my thoughts on the conversation between Thiel and Schmidt. I would share my thoughts here, but writing takes time and energy, and I’m not currently in position to do so.
During today’s LW event, I chatted with Ruby and Raemon (separately) about the comparison between human-made photovoltaic systems (i.e. solar panels) and plant-produced chlorophyll. I mentioned that in many ways chlorophyll is inferior to solar panels—consumer-grade solar panels operate in the 10% to 20% efficiency range (i.e. for every 100 joules of light energy, 10-20 joules are converted into usable energy), while chlorophyll is around 9% efficient, and modern cutting-edge solar panels can reach nearly 50% efficiency. Furthermore, every fall the leaves turn red and drop to the ground, only for new leaves – that is, plant-based solar panels – to be grown again in the spring. One sees green plants where there very well could be solar panels capturing light, and naïvely we would expect solar panels to do a better job, but we plant plants instead, and let them gather energy for us.
One of them (I think Ruby) didn’t seem convinced that it was fair to compare solar panels with chlorophyll – is it really an apples-to-apples comparison? I think it is. It is true that plants do a lot of work beyond simply capturing light, and electricity goes to different uses than what plants produce, but ultimately both plant-based farms and photovoltaic cells capture energy from the sunlight reaching the Earth and convert it into human-usable energy. One could imagine genetically engineered plants doing much of what we use electricity for these days, or industrial processes hooked up to solar panels doing the things plants do; in this way we can meaningfully compare how much energy plants allow us to direct toward human-desired goals with how much photovoltaic cells can.
Huh, somehow while chatting with you I got the impression that it was the opposite (chlorophyll more effective than solar panels). Might have just misheard.
The big advantage chlorophyll has is that it is much cheaper than photovoltaics, which is why I was saying (in our conversation) we should take inspiration from plants
Gotcha. What’s the metric that it’s cheaper on?
Well, money, for one?
It would be interesting to see the efficiency of solar + direct air capture compared to plants. If it wins, I will have another thing to yell at hippies about (before yelling about there not being enough land area even for solar).
There’s plenty of land area for solar. I did a rough calculation once, and my estimate was that it’d take roughly twice the land area of the Benelux to build a solar farm that produced as much energy per annum as the entirety of humanity uses each year. (The sun outputs an insane amount of power, and if one steps back to think about it, almost every single joule of energy we’ve used came indirectly through the sun—often through quite inefficient routes.) I didn’t take into account day/night cycles or losses due to transmission, but if we assume a 4x loss due to nighttime (probably a pessimistic estimate) and a 5x loss due to transmission (again, being pessimistic), it still comes out to substantially less than the land we have available to us (about 1⁄3 the size of the Sahara desert).
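A back-of-envelope sketch of this kind of calculation, with my own round-number assumptions (world energy use, insolation, panel efficiency, and the Sahara’s area are all assumed figures, not the original calculation’s inputs; only the two pessimistic loss factors come from the text):

```python
# All inputs are assumed round numbers for illustration.
WORLD_ENERGY_TWH = 170_000   # assumed: annual global primary energy use, TWh
INSOLATION_W_M2 = 1000       # assumed: peak sunlight on a panel, W/m^2
EFFICIENCY = 0.15            # assumed: mid-range consumer panel efficiency
NIGHT_LOSS = 4               # pessimistic day/night factor from the text
TRANSMISSION_LOSS = 5        # pessimistic transmission factor from the text

HOURS_PER_YEAR = 8766
avg_power_needed_w = WORLD_ENERGY_TWH * 1e12 / HOURS_PER_YEAR   # average draw, W
usable_w_per_m2 = INSOLATION_W_M2 * EFFICIENCY / (NIGHT_LOSS * TRANSMISSION_LOSS)
area_km2 = (avg_power_needed_w / usable_w_per_m2) / 1e6         # m^2 -> km^2

SAHARA_KM2 = 9.2e6  # assumed: approximate area of the Sahara
print(f"{area_km2:,.0f} km^2 = {area_km2 / SAHARA_KM2:.0%} of the Sahara")
```

With these particular inputs the answer lands in the same ballpark as the estimate above (roughly a quarter to a third of the Sahara); the point is that the conclusion survives even fairly pessimistic loss factors.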
I’m quite scared by some of the responses I’m seeing to this year’s Petrov Day. Yes, it is symbolic. Yes, it is a fun thing we do. But it’s not “purely symbolic”, it’s not “just a game”. Taking seriously things that are meant to be serious is important, even if you can’t see why they’re serious.
As I’ve said elsewhere, the truly valuable thing a rogue agent destroys by failing to live up to expectations on Petrov day, isn’t just whatever has been put at stake for the day’s celebrations, but the very valuable chance to build a type of trust that can only be built by playing games with non-trivial outcomes at stake.
Maybe a better job could be done in the future of communicating the essence of what this celebration is intended to achieve, but to my eyes it was fairly obvious what was going on, and I’m seeing a lot of comments by people (whose other contributions to LW I respect) who seem to have completely missed what I thought was obviously the spirit of this exercise.
Epistemic status: intended as a (half-baked) serious proposal
I’ve been thinking about ways to signal truth value in speech. In our modern society, we have no way to readily tell when a person is being 100% honest- we have to trust that a communicator is being honest, or otherwise verify for ourselves whether what they are saying is true; and if I want to tell a joke, speak ironically, or communicate things which aren’t-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-literally-true. This means that common knowledge of honesty almost never exists, which significantly slows down the positive effects of Aumann’s Agreement Theorem.
In language, we speak with different registers. Different registers are different ways of speaking, depending on the context of the speech. The way a salesman speaks to a potential customer will be distinct from the way he speaks to his pals over a beer—he speaks in different registers in these different situations. But registers can also be used to communicate information about the intentions of the speaker—when a speaker is being ironic, he will intone his voice in a particular way, to signal to his listeners that he shouldn’t be taken 100% literally.
There are two points that come to my mind here: One: establishing a register of communication that is reserved for speaking literally true statements, and Two: expanding the ability to use registers to communicate not-literally-true intent, particularly in text.
On the first point: a large part of the reason why people speaking in a natural register cannot always be assumed to be saying something literally true is that there is no external incentive not to lie. Well, sometimes there are incentives not to lie, but oftentimes these incentives are weak, and especially in a society built upon free speech, it is hard to enforce, on a large scale, a norm against lying in natural-register speech. Now my mind imagines a protected register of speech, perhaps copyrighted by some organization (and which includes unique manners of speech distinctive enough to be eligible for copyright), in which that organization vows to take action against anybody who speaks not-literally-true statements (i.e., statements communicating a world model that does not reliably reflect the actual state of the world); anybody is free (according to a legally enforceable license) to speak whatever literally-true statements they want in that register, but may not speak non-truths in it, at pain of legal action.
If such a register were created, and reliably enforced, it would help create a society where people could readily trust strangers saying things they are not otherwise inclined to believe, given that the statement is spoken in the protected register. I think such a society would look different from current society, and would have benefits compared to it. I also think a less-strict version of this could be implemented by a single platform (perhaps LessWrong?), replacing legal action with the threat of suspension for speaking not-literal-truths in a protected register, and I suspect that it too would have a non-zero positive effect. This has the benefit of being probably cheaper, and of resting on less murky legal ground with respect to speech.
I don’t currently have time to get into details on the second point, but I will highlight a few things. Poe’s law states that even the most extreme parody can be readily mistaken for a serious position. Whereas spoken language can clearly be inflected to indicate ironic intent, or humor, or perhaps even not-literally-true-but-pointing-to-the-truth, the carriers of this inflection are not replicated in written language—therefore written language, which the internet is largely based upon, lacks the richness of registers that allows extreme-but-serious positions to be clearly distinguished from humor. There are attempts to inflect writing in such a way as to provide this richness, but as far as I know, there is no widely understood standard that actually accomplishes this. This is worth exploring in the future. Finally, I think it is worthwhile to spend time reflecting on intentionally creating more registers that are explicitly intended to communicate varying levels of seriousness and intent.
most extreme parody can be readily mistaken for a serious position
I may be doing just that by replying seriously. If this was intended as a “modest proposal”, good on you, but you probably should have included some penalty for being caught, like surgery to remove the truth-register.
Humans have been practicing lying for about a million years. We’re _VERY_ good at difficult-to-legislate communication and misleading speech that’s not unambiguously a lie.
Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible. And if you CAN detect it, the enforcement isn’t necessary. If people really wanted to punish lying, this regime would be unnecessary—just directly punish lying based on context/medium, not caring about tone of voice.
I assure you this is meant seriously.
Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible.
There’s plenty of blatant lying out there in the real world that would be easily detectable by a person with access to reliable sources and their head screwed on straight. I think one important facet of my model of this proposal, not explicitly mentioned in this shortform, is that validating statements is relatively cheap, but expensive enough that for every single person to validate every single sentence they hear is infeasible. By having a central arbiter of truth that enforces honesty, one person doing the heavy lifting can save a million people from each having to do the same task individually.
If people wanted to punish lying this regime would be unnecessary—just directly punish lying based on context/medium, not caring about tone of voice.
The point of having a protected register (in the general, not platform-specific case), is that it would be enforceable even when the audience and platform are happy to accept lies: since the identifiable features of the register would be protected as intellectual property, the organization that owned the IP could enforce a violation of that intellectual property even where there is no legal basis for enforcing norms of honesty.
The point of having a protected register (in the general, not platform-specific case), is that it would be enforceable even when the audience and platform are happy to accept lies
Oh, I’d taken that as a fanciful example, which didn’t need to be taken literally for the main point, which I thought was detecting and prosecuting lies. I don’t think that part of your proposal works—“intellectual property” isn’t an actual law or single concept, it’s an umbrella for trademark, copyright, patent, and a few other regimes. None of which apply to such a broad category of communication as register or accent.
You probably _CAN_ trademark a phrase or word, perhaps “This statement is endorsed by TruthDetector(TM)”. It has the advantage that it applies in written or spoken media, has no accessibility issues, works for tonal languages, etc. And then prosecute uses that you don’t actually endorse.
Endorsing only true statements is left as an exercise, which I suspect is non-trivial on its own.
I suspect there’s a difference between what I see in my head when I say “protected register”, compared to the image you receive when you hear it. Hopefully I’ll be able to write down a more specific proposal in the future, and provide a legal analysis of whether what I envision would actually be enforceable. I’m not a lawyer, but it seems that what I’m thinking of (i.e., the model in my head) shouldn’t be dismissed out of hand (although I think you are correct to dismiss what you envision that I intended)
Scott Garrabrant presents Cartesian Frames as being a very mathematical idea. When I asked him about the prominence of mathematics in his sequence, he said “It’s fundamentally math; I mean, you could translate it out of math, but ultimately it comes from math”. But I have a different experience when I think about Cartesian Frames- first and foremost, my mental conception of CF is as a common sense idea, one that only incidentally happens to be expressible in mathematical terms (edit: when I say “common sense” here, I don’t mean that it’s a well known idea—it’s not, and Scott is doing good by sharing his ideas—but the idea feels similar to other ideas in the “common sense” category). I think both perspectives are valuable, but the interesting thing I want to note here is the difference in perspective that the two of us have. I hope to explore this difference in framing more later.
What’s the common sense idea?
Aumann Agreement != Free Agreement
Oftentimes, I hear people talk about Aumann’s Agreement Theorem as if it means that two rational, honest agents cannot be aware of disagreeing with each other on a subject, without immediately coming to agree with each other. However, this is overstating the power of Aumann Agreement. Even putting aside the unrealistic assumption of Bayesian updating, which is computationally intractable in the real world, as well as the (not strictly required, but valuable) non-trivial presumption that the rationality and honesty of the agents is common knowledge, the reasoning that Aumann provides is not instantaneous:
To illustrate Aumann’s reasoning, let’s say Alice and Bob are rational, honest agents capable of Bayesian updating, with common knowledge of each other’s rationality.
Alice says to Bob: “Hey, did you know pineapple pizza was invented in Canada?”
Bob: “What? No. Pineapple pizza was invented in Hawaii.”
Alice: “I’m 90% confident that it was invented in Canada”
Bob is himself 90% confident of the opposite, that it has its origins in Hawaii (it’s called Hawaiian Pizza, after all!), but since he knows that Alice is rational and honest, he must act on this information, and thereby becomes less confident in what he previously believed—but not by much.
Bob: “I’m 90% confident of the opposite. But now that I hear that you’re 90% confident yourself, I will update to 87% confidence that it’s from Hawaii”
Alice notices that Bob hasn’t updated very far based on her disagreement, which now provides some information to her that she may be wrong. But she read from a source she trusts that pineapple pizza was first concocted in Canada, so she doesn’t budge much:
“Bob, even after seeing how little you updated, I’m still 89% sure that pineapple pizza has its origins in Canada”
Bob is taken aback, that even after he updated so little, Alice herself has barely budged. Bob must now presume that Alice has some information he doesn’t have, so updates substantially, but not all the way to where Alice is:
B: “Alright, after seeing that you’re still so confident, I’m now only 50% confident that pineapple pizza is from Hawaii”
Alice and Bob go back and forth in this manner for quite a while, sharing their new beliefs, and then pondering on the implications of their partner’s previous updates, or lack of updating. After some time, eventually Alice and Bob come to agreement, and both determine that there’s an 85% chance pineapple pizza was developed in Canada. Even though it would have been faster if they had just stated outright why they believed what they did (look, Alice and Bob enjoy the Aumann Game! Don’t judge them.), simply by playing this back-and-forth ping-ponging of communicating confidence updates, they managed to arrive at the optimal beliefs they would arrive at if they both, together, had access to all the information they each individually had.
What I want to highlight with this post is this: even as perfect Bayesian agents, Alice and Bob didn’t come to the correct beliefs instantly by sharing that they had disagreeing beliefs; they had to take time and effort to share back and forth before they finally reached Aumann agreement. Aumann agreement does not imply free agreement.
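To make the back-and-forth concrete, here’s a toy Python sketch of the announcement protocol behind this kind of convergence (in the style of Geanakoplos and Polemarchakis’s “We Can’t Disagree Forever”). The nine-world setup, the partitions, and the event are made up for illustration; they are not the numbers from the dialogue above.

```python
from fractions import Fraction

def agreement_dialogue(worlds, alice_cells, bob_cells, event, true_world, rounds=20):
    """Agents alternately announce P(event | own info); each announcement
    is common knowledge, so it publicly rules out every world in which the
    speaker would have announced something else."""
    public = set(worlds)  # worlds not yet publicly ruled out

    def cell(cells, w):
        return next(c for c in cells if w in c)

    def posterior(cells, w, pub):
        info = cell(cells, w) & pub
        return Fraction(len(info & event), len(info))

    history, speakers = [], [alice_cells, bob_cells]
    for turn in range(rounds):
        cells = speakers[turn % 2]
        v = posterior(cells, true_world, public)
        history.append(v)
        # keep only worlds where the speaker's posterior would be exactly v
        public = {w for w in public if posterior(cells, w, public) == v}
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # posteriors coincide: agreement reached
    return history

# Illustrative setup: nine equally likely worlds; Alice learns which of her
# three cells obtains, Bob learns which of his; the disputed event
# ("pineapple pizza was invented in Canada", say) is worlds {3, 4}.
alice = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})]
bob = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})]
history = agreement_dialogue(range(1, 10), alice, bob, {3, 4}, true_world=1)
# -> [Fraction(1, 3), Fraction(1, 2), Fraction(1, 3), Fraction(1, 3)]
```

Note that even in this tiny model, Bob starts at 1/2, and it takes several announcements of pruned possibilities before the two posteriors finally coincide.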
https://arxiv.org/abs/cs/0406061 is a result showing that Aumann’s Agreement is computationally efficient under some assumptions, which might be of interest.
I don’t really buy that paper; IIRC it says that you only need to exchange a polynomial number of messages, but that each message takes exponential time to produce, which doesn’t sound very efficient.
From the abstract: The time used by the procedure to achieve agreement within epsilon is on the order of O(e^(epsilon ^ −6))… In other words, yeah, the procedure is not cheap
There’s a good number of ideas that I want to share here on LW in the linguistics / communication cluster. The question always comes to mind: “But what does communication have to do with rationality?”- to which I answer: rationality is the attempt to win, in part by believing true things which help one accomplish winning. If humans had infinite computational resources and infinite free time in which to do experiments, there would be nothing stopping us from arriving at the truth by ourselves. But in reality, we can’t arrive at all the logical consequences of everything we know by ourselves, nor can we learn every facet of nature’s dynamics alone. So humans who aspire to be rational must communicate- and the faster one can exchange information with other humans aspiring to the truth, the more rational one can be. Therefore, it is important for an aspiring rationalist to think deeply about how to best exchange information with their peers.
I’m not without precedent in applying linguistics and communication to the project of rationality- one of my favorite of Yudkowsky’s Sequences is “A Human’s Guide to Words”.
All the food you have on your table,
Your potatoes, corn, and lox,
To grow them yourself you would be able;
But if all were minded such,
Then who would have saved you from the pox?
If I were a middle school teacher, I would implement this system to make nerdy kids more popular (and maybe make aspiring popular kids work harder in class): every week, I would select a handful of students who I felt had done good work that week (according to my subjective taste), and they could write down the names of 3 or 4 other students in the class (but not themselves) who would earn a modest amount of extra credit. Ideally, I would name the students at the start of the week, and only take their nominations at the end of the week, so they have plenty of time for other students to attempt to curry favour with them. (Although perhaps having the students be unknown until they make their nominations would encourage students to anticipate who I would select each week, which may make for more salient long-term effects)
This way, I can hijack the vicious social mechanisms that are prevalent in middle school, and use them to promote an intellectual culture
I read somewhere that intelligent people are a positive externality for their neighbors. Their activity improves the country on average, and they only capture a part of the value they add.
If you could clone a thousand Einsteins (not all talented in physics, but each one in something different), they could improve your country so much that your life would be awesome, despite the fact that you couldn’t compete with them for the thousand best jobs in the country. From the opposite perspective, if you appeared in Idiocracy, perhaps you could become a king, but you would have no internet, no medicine, probably not even good food or plumbing. The moment you actually needed something to work, life would suck.
But this effect is artificially removed in schools. Smart classmates are competitors (and grading on a curve takes it to the extreme), and cooperation is frowned upon. The school system is an environment that incentivizes hostility against smart people.
You suggest an artificial mechanism that would incentivize being friendly with the nerds. I like it! But maybe a similar effect could be achieved by simply removing the barriers to cooperation. Abolish all traces of grading on a curve; make grades dependent on impartial exams by a computer, so that one year everyone may succeed and another year everyone may fail. (Also, make something near-mode depend on the grades. Like, every time you pass an exam, you get a chocolate. Twenty exams allow you to use a gym one afternoon each week. Etc.) And perhaps, students will start asking their smarter classmates to tutor them; which will in turn increase the status of the tutors. Maybe. Worth trying, in my opinion.
I saw an anecdote from a parent with two children somewhere, saying that when going outside, they used to reward the child who would get dressed first. This caused competition and bad feelings between the kids. Then they switched to rewarding both based on how quickly they got to the point where both were dressed. Since the children now had a common goal, they started helping each other.
I wonder if one could apply something like that to a classroom, to make the smart kids be perceived as an asset by the rest of the class.
And perhaps, students will start asking their smarter classmates to tutor them; which will in turn increase the status of the tutors. Maybe.
Datapoint: Finnish schools mostly don’t grade on a curve, and some kids did ask me for help in high school, help that I was happy to provide. For the most part it felt like nobody really cared about whether you were smart or not, it was just another personal attribute like the color of your hair.
A cute senior in my high school Physics class asked me to tutor her after school because she was having a hard time. I can’t overstate the ways in which this improved me as a young-geek-person, and I think she got better at doing physics, too. Your proposal would tend to create more opportunities like that, I think, for cross-learning among students who are primarily book-intelligent and those who may be more social-intelligent.
Viliam’s shortform posts have got me thinking about income taxes versus wealth taxes, and more generally the question of how taxes should be collected. In general I prefer wealth taxes over income taxes, although I suspect there may very well be better forms of taxes than either of those two. But considering wealth taxes specifically, I think their main problem is that over the long term they take away control of resources from people who have proven in the past that they know how to use resources effectively, and while this can allow for short-term and medium-term useful allocations of resources, it prevents very long horizon investing – as exemplified by Elon Musk’s projects, including SpaceX, Tesla, Neuralink and The Boring Company – projects that are good investments primarily because Musk understands that in the very long term they will pay off, both in personal financial returns and in general global welfare. While Tesla is very close to becoming profitable (they could turn a profit this year if they wanted to), and SpaceX isn’t too far off either, Musk founded these companies without any eye for medium term profits—he founded them understanding the very long game, which is profitable in the absence of year-over-year wealth taxes, but could potentially be unprofitable if year-over-year wealth taxes were introduced
The proposal that came to my mind for alleviating the negative impact wealth taxes would have this way, is to allow entrepreneurs to continue to have control of the money they pay in wealth taxes, but with that money held in trust for the greater public, not for the personal use of the entrepreneur.
To clarify my point, I think it’s worth noting that there are two similar concepts that get conflated into the single word “ownership”: the 1st meaning of “own” (personal ownership) is that a person has full rights to decide how resources are used, and can use or waste those resources for their own personal pleasure however they wish; the 2nd meaning of “own” (entrusted to) is that a person has the right to decide how resources are used and managed, but ultimately they make decisions regarding those resources for the good of a greater public, or another trustor (entrusting entity), not for themselves.
When resources are owned by (i.e., entrusted to) somebody, they have the right to allocate those resources however they think is best, and aside from the most egregious examples of the resources being used for the personal gain or pleasure of the trustee, nobody can or should question the judgement of the trustee.
Back to wealth taxes: in my proposal, an entrepreneur would still be expected to “pay” a certain percentage of their wealth each year to the greater public, but instead of the money going directly to the government, the resources would continue to be “owned” by the entrepreneur. Rather than being personally owned for the entrepreneur’s gain and pleasure, though, this portion would be entrusted to the entrepreneur in the name of the public. The entrepreneur would be allowed to continue using the resources to support any enterprises they expect to be a worthwhile investment, and when an enterprise finally turns a profit, the percentage of revenues corresponding to the part entrusted in the name of the public would be collected as taxes.
The main benefit of this proposal (assuming wealth taxes are already implemented) is that, while it cannot make profitable any venture that would be rendered unprofitable by a wealth tax, it can maintain the feasibility of ventures that are profitable in the long run, but which are made unfeasible in the short and medium terms by a wealth tax, due to the cost of taxes being more than medium term gains.
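To see how the arithmetic of this deferred collection might work, here is a minimal sketch. The 2% rate, the 10-year horizon, and the profit figure are arbitrary illustrations I picked, not part of the proposal itself:

```python
def entrusted_share(annual_rate, years):
    """Fraction of the enterprise held in trust for the public after
    `years` of paying the wealth tax in control-rights rather than cash.
    Each year, annual_rate of the remaining private share transfers."""
    return 1 - (1 - annual_rate) ** years

def tax_on_profit(profit, annual_rate, years):
    """When the enterprise finally turns a profit, the public's entrusted
    share of that profit is what actually gets collected as tax."""
    return entrusted_share(annual_rate, years) * profit

# A 2% yearly wealth tax, deferred through 10 unprofitable years of building:
share = entrusted_share(0.02, 10)          # ~18.3% now held in trust
tax = tax_on_profit(1_000_000, 0.02, 10)   # ~183,000 collected once profitable
```

The point the numbers make is that the venture keeps full use of its capital during the unprofitable years; the public’s claim compounds quietly and is only cashed out against realized profits.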
two similar concepts that get conflated into the single word “ownership”
Sounds like “owner” vs “manager”.
So, if I understand it correctly, you are allowed to create a company that is owned by state but managed by you, and you can redirect your tax money there. (I assume that if you are too busy to run two companies, it would also be okay to put your subordinate in charge of the state-owned company.)
I am not an expert, but it reminds me of how some billionaires set up foundations to avoid paying taxes. If you make the state-owned company do whatever the foundation would do, it could be almost the same thing.
The question is, why would anyone care whether the state-owned company actually generates a profit, if they are not allowed to keep it? This could mean different things for different entrepreneurs...
a) If you have altruistic goals, you could use your own company to generate profit, and the state-owned company to do those altruistic things that don’t generate profit. A lot of good things would happen as a result, which is nice, but the part of “generating profit for the public” would not be there.
b) If the previous option sounds good, consider the possibility that the “altruistic goal” done by the state-owned company would be something like converting people to the entrepreneur’s religion, or lobbying for political changes you oppose.
c) For people without altruistic or even controversially-altruistic goals, the obvious option is to mismanage the state-owned company and extract as much money as possible. For example, you could make the state-owned company hire your relatives and friends, give them generous salaries, and generate no profit. Or you could make the state-owned company buy overpriced services from your company. If this would be illegal, then… you could do the nearest thing that is technically legal. For example, if your goal is to retire early, then the state-owned company could simply hire you and then literally do nothing. Or you would pretend to do something, except that nothing substantial would ever happen.
The intention is that there would not be two separate companies, but one company, split between being owned outright by the entrepreneur and being managed by the entrepreneur on behalf of the public- so the entrepreneur would still be motivated to make the company do as well as possible, thereby generating revenue for the public at large
over the long term they take away control of resources from people who have proven in the past that they know how to use resources
Umm, that’s the very point of taxes—taking resources from non-government entities because the government thinks they can use those resources better. We take them from people who have resources, because that’s where the resources are.
I step out of the airlock, and I look around. In the distance, I see the sharp cliff extending around the crater, a curtain setting the scene, the Moon the stage. I look up at the giant blue marble in the sky, white clouds streaked across the oceans, brown landmasses like spots on the surface. The vibrant spectacle of the earth contrasts against the dead barren terrain that lies ahead. I look behind at the glass dome, the city I call home.
Within those arched crystal walls is a new world, a new life for those who dared to dream beyond the heavy shackles that tied them to a verdant rock. New songs, new gardens, new joys, new heartbreaks, reaching, for the first time, to the skies, to the stars, to the wide open empty sea.
A voluminous frontier, filled with opportunity, filled with starlight, filled with the warmth and strength of the sun. We are one step further from the tyrannical grip of gravity, stretching our wings, just now preparing to take off, to soar and harness the fullness of the prosperity that gave us form
NB: I’m currently going through my old blog, which I’m planning on deactivating soon. I may repost some relevant posts from there over here, either to shortform or as a main post, as appropriate. This piece is one of the posts from there which touches on rationality-adjacent themes. You may see other posts from me in the coming days that also originate from there.
To ⌞modern eyes living in a democracy with a well-functioning free market⌟, absolute monarchy and feudalism  (as were common for quite a while in history) seem quite stupid and suboptimal (there are some who may disagree, but I believe most will endorse this statement). From the perspective of an ideal society, our current society will appear quite similar to how feudalism seems to us—stupid and suboptimal—in large part because we have inadequate tools to handle externalities (both positive and negative). We have a robust free market which can efficiently achieve outcomes that are optimal in the absence of externalities, and a representative government that is capable of regulating and taxing transactions with negative externalities, as well as subsidizing transactions with positive externalities. However, these capabilities are often under- and over-utilized, and the representative government is not usually incentivized to deal with externalities that affect a small minority of the population represented—plus, when the government does use its mandate to regulate and subsidize, it is very often controversial, even in cases where the economic case for intervention is straightforward.
If the free market and representative government are the signs that separate us from feudalism, what separates the ideal society from us? If I had to guess, public goods markets (PGMs) such as quadratic funding are a big player—PGMs are designed to subsidize projects that have large positive externalities, and I suspect that the mechanism can be easily extended to discourage actions with negative externalities (although I worry that cancel-culture dynamics may cause problems with this)
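For concreteness, the core quadratic funding rule (from Buterin, Hitzig, and Weyl’s “liberal radicalism” mechanism) is simple enough to sketch in a few lines; the contribution figures below are arbitrary:

```python
import math

def quadratic_funding_total(contributions):
    """Total funding a project receives under quadratic funding: the square
    of the sum of the square roots of individual contributions. The gap
    between this and the raw sum is paid out of a matching pool."""
    return sum(math.sqrt(c) for c in contributions) ** 2

def matching_subsidy(contributions):
    return quadratic_funding_total(contributions) - sum(contributions)

# A project with broad support gets a far larger subsidy than one with a
# single large donor, even when the raw totals are identical:
broad = [1.0] * 100   # 100 donors of $1 -> total funding 10,000
narrow = [100.0]      # 1 donor of $100  -> total funding 100, no subsidy
```

The mechanism subsidizes in proportion to breadth of support, which is exactly the externality-tracking property the free market lacks on its own.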
 ‘Feudalism’ is understood in this context not just as a political structure, but also as an alternative to a free market
(NB: My use of the corner brackets ⌞ ⌟ is to indicate intended parsing to prevent potential misreadings)
If the free market and representative government are the signs that separate us from feudalism, what separates the ideal society from us?
The things that separate us from the ideal society will probably seem obvious from hindsight—assuming we get there. But in order to know that, large-scale experiments will be necessary, and people will oppose them, often for quite good reasons (a large-scale experiment gone wrong could mean millions of lives destroyed), and sometimes for bad reasons, too.
Frequently proposed ideas include: different voting systems, universal basic income, land tax, open borders...
It seems to me that months ago, we should have been founding small villages or towns that enforce contact tracing and required quarantines, both for contacts of people who are known to have been exposed, and for people coming in from outside the bubble. I don’t think this is possible in all states, but I’d be surprised if there was no state where this is possible.
I think it’d be much simpler to find the regions/towns doing this, and move there. Even if there’s no easy way to get there or convince them to let you in, it’s likely STILL more feasible than setting up your own.
If you do decide to do it yourself, why is a village or town the best unit? It’s not going to be self-sufficient regardless of what you do, so why is a town/village better than an apartment building or floor (or shared- or non-shared house)?
In any case, if this was actually a good idea months ago, it probably still is. Like planting a tree, the best time to do it is 20 years ago, and the second-best time is now.
Are there any areas in the states doing this? I would go to NZ or South Korea, but getting there is a hassle compared to going somewhere in the states. Regarding size, it’s not about self-sufficiency, but rather being able to interact in a normal way with other people around me without worrying about the virus, so the more people involved the better
getting there is a hassle
That was my point. Doesn’t the hassle of CREATING a town seem incomparably larger than the hassle of getting to one of these places?
On an individual basis, I definitely agree. Acting alone, it would be easier for me to personally move to NZ or SK than to found a new city. However, from a collective perspective (and if the LW community isn’t able to coordinate collective action, then it has failed), if a group of 50–1000 people all wanted to live in a place with sane precautions, and were willing to put in effort, creating a new town in the states will scale better (moving countries has effort scaling linearly with the magnitude of population flux, while founding a town scales less than linearly)
while founding a town scales less than linearly
I think you’re omitting constant factors from your analysis; founding a town is so, so much work. How would you even run out utilities to the town before the pandemic ended?
I acknowledge that I don’t know how the effort needed to found a livable settlement compares to the effort needed to move people from the US to a Covid-good country. If I knew how many person-hours each of these would take, it would be easier for me to know whether or not my idea doesn’t make sense.
FYI, folk at MIRI seem to be actively looking into this, but it is indeed pretty expensive and not an obviously good idea.
if the LW community isn’t able to coordinate collective action, then it has failed
Oh, we’re talking about different things. I don’t know much about any “LW community”, I just use LW for sharing information, models, and opinions with a bunch of individuals. Even if you call that a “community”, as some do, it doesn’t coordinate any significant collective action. I guess it’s failed?
Sorry, I don’t think I succeeded at speaking with clarity there. The way you use LW is perfectly fine and good.
My view of LW is that it’s a site dedicated to rationality, both epistemic and instrumental. Instrumental rationality is, as Eliezer likes to call it, “the art of winning”. The art of winning often calls for collective action to achieve the best outcomes, so if collective action never comes about, then that would indicate a failure of instrumental rationality, and thereby a failure of the purpose of LW.
LW hasn’t failed. While I have observed some failures of the collective userbase to properly engage in collective action to the fullest extent, I find it does often succeed in creating collective action, often thanks to the deliberate efforts of the LW team.
Fair enough, and I was a bit snarky in my response. I still have to wonder, if it’s not worth the hassle for a representative individual to move somewhere safer, why we’d expect it’s worth a greater hassle (both individually and the coordination cost) to create a new town. Is this the case where rabbits are negative value so stags are the only option (reference: https://www.lesswrong.com/posts/zp5AEENssb8ZDnoZR/the-schelling-choice-is-rabbit-not-stag)? I’d love to see some cost/benefit estimates to show that it’s even close to reasonable, compared to just isolating as much as possible individually.
Life needs energy to survive, and life needs energy to reproduce. This isn’t just true of biological life made of cells and proteins, but also of more vaguely life-like things—cities need energy to survive, nations need energy to survive and reproduce, even memes rely on the energy used by the brains they live in to survive and spread.
Energy can take different forms—as glucose, starches, and lipids, as light, as the difference in potential energy between four hydrogen atoms and the helium atom they could (under high temperatures and pressures) become, as the gravitational potential of water held behind a dam or of a heavy object waiting to fall, or as the gradient of heat that exists between a warm plume of water and the surrounding cold ocean, just to name a few forms. But anything that wants claim to the title of being alive, must find energy.
If a lifeform cannot find energy, it will cease to create new copies of itself. Those things which are abundant in our world, are things that successfully found a source of energy with which to be created (cars and chairs might be raised as an exception, but they too indeed were created with energy, and either a prototypical idea, or the image of another car or chair in someone’s mind, needed to find energy in order to create that object).
The studies of biology and economics are not so far separated as they might seem—at the core of both fields is the question: “Can this phenomenon (organization, person, firm) find enough energy to survive and inspire more things like it?”. This question also drives the history of the world. If the answer is no, that phenomenon will die, and you will not notice it. Or, you might notice the death throes of a failed phenomenon, but only because something else, which did find energy, enabled that failed phenomenon to happen. Look around you. All the flowers you see, the squirrels, the humans, the buildings, the soda cans, the roadways, the grass, the birds. All of these phenomena somehow found energy with which to be created. If they hadn’t, you wouldn’t be looking at them; they would never have existed.
The ultimate form of life is the life that best gathers energy. The Cambrian explosion happened because photosynthesizers first discovered they could turn light into usable food, and then animals discovered they could use a toxic waste by-product of that photosynthesis—oxygen—as a (partial) source of energy. Look around you. Where is there free energy laying around, unused? How could that energy be captured? Remember, the nation that can harness that energy will be the nation that influences the world. The man who takes hold of that energy can become the wealthiest man in the world.
Thinking about rationalist-adjacent poetry. I plan on making a post about this once I have a decent collection to seed discussion, then invite others to share what they have.
Tennyson’s poems Ulysses and Locksley Hall both touch on rationalist-adjacent themes, among other themes, so I’d want to share excerpts from those
Piet Hein has some ‘gruks’ that would be worth including (although I am primarily familiar with them in the original Danish—I know there exist English translations of most of them, but I’ll have to choose carefully, and the translations don’t always capture the exact feeling of the original)
I have shared two works of my own here on my shortform that I’d want to include
Shakespeare’s “When I do count the clock that tells the time” is a love poem, but it invokes transhumanist feelings in me
Hey, “When I do count the clock” is my favorite sonnet too! “And death once dead, there’s no more dying then” <3
I also recommend “Almighty by degrees” by Luke Murphy (only available on Kindle I think) – I bought it because of an SSC Classified Thread, and ended up using a poem from it in my Solstice last year. There’s also a poetry tab on my masterlist of Solstice materials. Damn I love poetry.
Daniel’s secular sermons are good.
Thanks for the link :D
Chess is fairly well known, but there’s also an entire world of chess variants, games that take the core ideas of chess and either change a few details or completely reimagine the game, whether to improve it or just to change its flavour. There’s even an entire website dedicated to documenting different variants of chess.
Today I want to tell you about some classic chess variants: Crazyhouse chess, Grand chess, and Shogi (Japanese chess), and posit a combination of the first two that I suspect may become my favorite chess when I have a chance to try it.
Shogi is the version of chess that is native to Japan, and it is wildly different from western chess—both western chess and shogi have evolved continuously from the original chaturanga as the game spread out from India. The core difference between shogi and the familiar western chess is that once a piece has been captured, the capturing player may later place the piece back on the board as his own piece. If that were the only difference, it would make for a very crazy game: the pieces in western chess are so powerful, and the king so weak, that the game would be filled with precarious situations requiring the players to always have their guard up for an unexpected piece drop, and checkmate would never be more than a few moves away unless both players were paying close attention.
In fact, this version is precisely crazyhouse chess, and this property is both what makes crazyhouse chess so beloved and fun, and what stands in its way of being taken as seriously as orthodox chess. There are two ways that this barrier could be overcome—either the king can be buffed, giving him more mobility to better dodge the insanity that the drops create, or the pieces can be nerfed, making them much less powerful, and in particular giving them less influence at long range. Shogi chooses the route of nerfing the pieces, replacing the long-ranged and very influential pieces used in orthodox chess with a set of pieces that have much more limited mobility, such as the lance, which moves like a rook, but can only move straight forward (thereby limiting its position to a single track), the keima (the shogi knight), which moves like a western knight, but can only make the two forward-most jumps, or the gold and silver generals, who can only move in a subset of the directions that a king can move in. Since each piece isn’t much stronger than a king, it is much easier for the king to dodge the threats produced by each piece, and a king can only be checkmated when the pieces act in coordination to create a trap for the king. (This is the basis of tsume-shogi, checkmate puzzles for shogi. They are fun to solve, and I recommend trying them out to get a feel for how different checkmates in shogi are from orthodox chess checkmates)
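A sketch of those shogi move sets, encoded as (dx, dy) offsets with dy pointing toward the opponent (the encoding is my own; the movement rules are standard shogi), makes the “not much stronger than a king” point checkable:

```python
# Single-step move offsets (dx, dy); dy > 0 is toward the opponent.
KING   = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - {(0, 0)}
GOLD   = {(-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0), (0, -1)}  # no diagonal retreat
SILVER = {(-1, 1), (0, 1), (1, 1), (-1, -1), (1, -1)}  # no sideways / straight back
KEIMA  = {(-1, 2), (1, 2)}  # shogi knight: only the two forward jumps

# The generals move only in subsets of the king's eight directions, so no
# single piece badly outclasses the king it is hunting:
assert GOLD <= KING and SILVER <= KING

# The lance is a rook confined to a single forward track:
def lance_moves(max_range=8):
    return {(0, dy) for dy in range(1, max_range + 1)}
```

Compare the generals’ 6 and 5 directions against the king’s 8; checkmating with pieces like these genuinely requires coordinated traps.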
I think shogi and crazyhouse solve a problem that I have with modern orthodox chess: the game ends in draws far too often, and the endgame is just too sparse for my taste. You can get good puzzles out of orthodox endgames, but I find the endgames of shogi and crazyhouse to be much more fun and much more exciting.
While I’m on the topic of shogi and crazyhouse, shogi pieces look quite different from the pieces used in orthodox chess:
I quite like the look of these pieces, and they provide a solution to a practical problem that arises from the piece-drop mechanic: with orthodox chess pieces, one would need two sets of chess pieces, a double-sized army for each player, since each player may have up to twice the regular amount of each type of piece after capturing the enemy's pieces. With these flat, wedge-shaped pieces, though, a player can just make the piece face the opposite direction, towards their opponent, and a single set of pieces is enough to play the game. While I think this solution works, and these pieces are quite iconic for shogi, it just doesn't feel right to play crazyhouse chess with pieces like this: crazyhouse chess is orthodox chess at its core, and it feels right to play crazyhouse with orthodox chess pieces. My ideal solution would be pieces that are as tall as orthodox chess pieces, with a similar design language, but anti-symmetric: the pieces would have flat tops and bottoms, and could be flipped upside down to change their colour, since one end would be white and the other black. I imagine the two colours would meet in the middle, with a diagonal slant so that each piece shows one colour primarily to one player, and the other colour to the other.
It’s been an observation made more than once that there’s a certain feeling of completeness to the orthodox chess pieces: the rook and bishop each move straight in certain directions, either perpendicular or parallel to the line of battle, or diagonal to it. If you draw a 5x5 square around each piece, the knight can move to precisely the squares that a rook and bishop can’t reach. And the queen can be viewed as the sum of a rook and a bishop. It all feels very interconnected, and almost perfectly complete and platonic. Almost, because there are two sums that we don’t have in orthodox chess: the combination of rook + knight, and bishop + knight. These pieces, called the marshal and the cardinal, are quite fun to play with, and I would not argue that chess is a better game for omitting them. As such, there have been proposals to add these pieces to the game, the most well-known of which are Capablanca chess and grand chess, proposed by World Chess Champion J. R. Capablanca and the game designer Christian Freeling, respectively. The main difference between the two is that Capablanca chess is played on a board 10 wide by 8 tall, while grand chess is played on a 10x10 board, with an empty rank behind each player’s pieces, aside from the rooks, which are placed in the very back corners (what about castling? Simple: you can’t castle in grand chess):
The additional width of the board in Capablanca and grand chess is used to allow one each of the marshal and cardinal to be placed in each player’s army. Aside from the additional pieces and larger board, grand chess plays just like regular chess, but I think it deserves to be considered seriously as an alternative to the traditional rules for chess.
While an introduction to chess variants would make a good topic for a post on this website, that’s not what I’m writing right now. While these three games would certainly be present in such an article, the selection would be far too limited, and far too conservative—there are some really crazy, wacky, fun, and brilliant ideas in the world of chess variants which I won’t be touching on today. I’m writing today because I want to talk about what I think may be the best contender as a replacement for orthodox chess: a cross between grand chess and crazyhouse, with a slight modification to better handle the drop mechanic of crazyhouse. It’s clear that Capablanca chess and grand chess were intended from the very start as rivals to the standard ruleset, and I mentioned previously that shogi solves a problem that I have with orthodox chess: orthodox ends in too many draws, and I find orthodox endgames less exciting than crazyhouse and shogi endgames. My ideal game of chess would look more like crazyhouse than orthodox chess, since drops just make chess more fun. As I mentioned before, while crazyhouse is a fun game, it’s just too intense and unpredictable to present a serious challenge to orthodox chess (at least, that is what I suspected as I was thinking about this post). There are two ways this can be addressed: the first is to do as shogi did, and make almost all the pieces as weak as the king, so the king can more easily survive against the enemy pieces; but doing this makes the game a different game from orthodox chess; it’s no longer just a variant on orthodox chess, it’s a completely different flavour. A flavour that I happen to love, but not the flavour of orthodox chess. I wanted a game that would preserve the heart of orthodox chess, while giving it the dynamic aspect allowed by drops, but more balanced and sane than crazyhouse chess.
So let’s explore the second way to balance crazyhouse chess: instead of nerfing the pieces, let’s make the king more formidable, more nimble, and better able to survive the intensity of drop chess. I haven’t playtested this yet, but it seems appropriate to give the king the four backward moves of the knight: this will give the king mobility without giving it too much mobility, and limiting it to the backward moves will ensure that it remains a defensive piece, and doesn’t gain a new life as an aggressive part of the attacking force. Playtesting may prove this to be too weak (I don’t anticipate that it will make the king too strong): if so, a different movement profile may make sense for the king, but in any case, it seems clear that some increase in the king’s mobility will allow for a balanced form of drop chess.
So my ideal chess would differ from orthodox in the following ways:
The game is played on a 10x10 board, instead of the traditional 8x8 board (I feel that a wider board will make for a more fun, and deeper, game of chess)
The game will feature the marshal (rook + knight) and cardinal (bishop + knight) of grand chess, and will have the pieces arranged in the same way as grand chess (this also implies no castling)
When a piece is captured, it may be dropped back into the game by the capturing player (working exactly as in crazyhouse chess or shogi)
The king may, in addition to its usual move, move using one of the 4 backwards moves of the knight. Pieces may be captured using this backwards move.
Ideally, the game would be played using the tall, bichromatic, antisymmetric pieces I propose in section I a of this post.
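As a minimal sketch of the proposed king's movement, assuming a coordinate system where y increases towards the opponent (the function and step lists here are my own illustration, not a tested rule implementation):

```python
# Standard king steps: one square in any of the 8 directions.
KING_STEPS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

# The four backward knight moves (negative dy is towards the player's own back rank).
BACKWARD_KNIGHT_STEPS = [(-1, -2), (1, -2), (-2, -1), (2, -1)]

def king_moves(x, y, size=10):
    """Destination squares for the buffed king on an empty size x size board."""
    steps = KING_STEPS + BACKWARD_KNIGHT_STEPS
    return {(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx < size and 0 <= y + dy < size}
```

In the middle of the 10x10 board, this gives the king 12 destination squares instead of the usual 8.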
This was neat, would appreciate it as a top-level post (albeit probably a personal blog one), although it also does seem fine as shortform.
I have now made this into a top-level post
I’m curious to hear more about why you are recommending putting it as a top level personal post- is it length, format, quality, a combination of these, or something else?
I notice that I have some reluctance to post “personal blog” items on the top level- even though I know that the affordance is there, I instinctively only want to post things that I feel belong as frontpage items as top-level posts. I also notice that I feel a little weird when I see other people’s personal posts as top-level posts here. I’m certainly not arguing that I have any problem with the way things are now, or arguing that this shouldn’t be a top-level post, I’m just putting my subconscious feelings into words.
As for how this post ended up in shortform, I originally started typing it into the shortform box, and I didn’t realize it would be this long until after I had already written a good chunk of it, and I just never decided to change it to a top-level post
I think that if something might want to be shared via a link, putting it into a top-level post is valuable.
There are two ways to consider the constitutional foundation of the modern United States: A) as the Constitution itself and its amendments, interpreted according to what the authors meant when it was written, or B) as the de facto modern interpretation and application of constitutional jurisprudence and precedent, which is often considered to be at odds with the original intent of the authors of the Constitution and its amendments, but has nonetheless become widely accepted practice.
Consider: which of these is the conservative approach, and which is the liberal approach? By liberal and conservative, I don’t mean left-wing or right-wing, but am using them in the sense that conservatives conserve what exists, while liberals are liberal in considering different ways things might be (the original meaning of these terms)
The first option, A, which only looks at the written document itself, might often be described as a conservative approach, while B, which throws out the original intent and substitutes a new spirit, may be viewed as liberal. But I contend that it is actually the inverse: the conservative view of the US’s constitutional foundation is to conserve the existing precedent in how its government functions, which dates back broadly to the 1930s, with some of the modern understanding of the constitutional foundation going back as far as the Civil War, and has thus been the law of the land for up to a century and a half.
Meanwhile, the approach of throwing out modern interpretation and precedent in favor of the original intent and meaning of the Constitution is quite a liberal approach, swapping a system that has been shaped and strengthened by cultural evolution for a prototype which is untouched by cultural evolution, and should (by default) be regarded with the same level of suspicion that liberal (i.e. paradigm shifting) proposals should (by default) be regarded with.
I’ve been considering the possibility of the occurrence of organized political violence in the wake of this year’s election. I have been noticing people questioning the legitimacy of the process by which the election will be conducted, with the implied inference that the outcome will be rigged, and therefore without legitimacy. It is also my understanding that there exist organized militias in the US, separate from the armed forces, which are trained to conduct warfare, ostensibly for defense reasons, which I have reason to believe have a nontrivial probability of attempting to take control in their local areas in the case of an election result that they find unfavourable.
Metaculus currently gives 3% probability of a civil war occurring in the wake of this election. While there are many scenarios which would not lead to the description of civil war, this probability seems far too low to me.
3% seems too high for me, depending on definition. I’d put it at around 1% of significant violent outbreaks (1000+ deaths due to violence), and less than 0.2% (below which point my intuitions break down) of civil war (50k+ deaths). If you include chance of a coup (significant deviance from current civil procedures with very limited violence), it might hit 3%.
Metaculus is using a very weak definition—at least two of four listed agencies (Agence France-Presse (AFP), Associated Press (AP), Reuters and EFE) describe the US as being in civil war. There are a lot of ways this can happen without truly widespread violence.
I think you’re misinformed about militias—there are clubs and underground organizations that call themselves that—they exist and they’re worrisome. But they’re not widespread nor organized, and ‘trained to conduct warfare’ is vastly overstating it. There IS some risk (IMO) in big urban police forces—they are organized and trained for control of important areas, and over the years have become too militarized. I think it’s most likely that they’re mostly well-enough integrated into their communities that they won’t go much further than they did in the protests this summer, but if the gloves really come off, that’ll be a key determinant.
The phrase “heat death of the universe” refers to two different, mutually exclusive possibilities:
The universe gets so hot, that it’s practically impossible for any organism to maintain enough organization to be able to sustain itself and create copies of itself
The universe gets so cold that everything freezes to death, and no organism can make enough work happen to create more copies of itself
Originally, the heat death hypothesis referred to #1: we thought that the universe would get extremely hot. After all, heat death is a natural consequence of the second law of thermodynamics, which states that entropy can only increase, never decrease, and ceteris paribus (all else equal), when entropy increases, temperature also increases.
But ceteris is never actually paribus, and in this case, physicists found out that the universe is constantly getting bigger, things are always getting further apart. When volume is increasing, things can get colder even as entropy increases, and physicists now expect that, given our current understanding of how the universe works, possibility #2 is more likely, the universe will eventually freeze to death.
But our current understanding is only ever the best guess we can make of what the laws of the universe actually are, not the actual laws themselves. We currently expect the universe will freeze, but we could very well find evidence in the future that the universe will burn instead. Maybe (quite unlikely) things will just happen to balance out, so that the increase in temperature due to entropy equals the decrease in temperature due to the expansion of the universe.
Perhaps we will discover a loophole in a set of laws that would otherwise suggest a heat death of one kind or the other, but where a sufficiently intelligent process can influence the evolution of temperature so as to counteract the otherwise prevailing temperature trend—in the vein of (I’d like to note that I do not intend to imply that any of these are likely to happen) creating a large enough amount of entropy to create a permanent warm zone in a universe that is otherwise doomed to freeze (this would probably require a violation of the conservation of energy that we currently have no reason to believe exists), or using an as-yet undiscovered mechanism to accelerate the expansion of the universe that can create a long-lasting cool zone in a universe that is otherwise doomed to burn.
Hrm. I thought it referred to distribution of energy, not temperature. “Heat death of the universe” is when entropy can increase no more, and there are no differentials across space by which to define anything at conscious scale. No activity is possible when everything is uniform.
At least, that’s my simplistic summary - https://en.wikipedia.org/wiki/Heat_death_of_the_universe gives a lot more details, including the fact that my summary was probably not all that good even in the 19th century.
The way we measure the most populous / most dense cities is weird, and hinges on arbitrary factors (take, for example, Chongqing, the “most populous city”, which is mostly rural land, in a “city” the size of Austria)
I think a good metric that captures the population / density of a city is the number of people that can be reached with half an hour’s or an hour’s worth of transportation (1/2 hour down and 1/2 hour back is one hour both ways, a very common commute time, though a radius of 1 hour each way still contributes to the connections available) - this does have the effect of counting a larger area for areas with better transportation, but I think that’s a good feature of such a metric.
This metric would remove any arbitrary influences caused by arbitrary boundaries, which is needed for good, meaningful comparisons. I would very much like to see a list organized by this metric.
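As a sketch, this metric could be computed with a shortest-path search over a travel-time graph; the graph structure, population figures, and the one-hour budget below are invented placeholders:

```python
import heapq

def reachable_population(graph, population, origin, budget_minutes=60):
    """Total residents reachable from origin within the time budget.

    graph: {node: [(neighbor, travel_minutes), ...]}  -- travel-time network
    population: {node: residents}
    Uses Dijkstra's algorithm to find all nodes within budget_minutes.
    """
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, minutes in graph.get(node, []):
            nt = t + minutes
            if nt <= budget_minutes and nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return sum(population.get(n, 0) for n in best)
```

A nice property of phrasing the metric this way is that better transit directly enlarges the reachable set, which matches the intuition that transportation quality is part of what makes a city "big".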
(Edited: misremembered commute times. See Anthropological Invariants in Travel Behaviour)
related map of the US, with clustering of actual commutes: https://www.atlasobscura.com/articles/here-are-the-real-boundaries-of-american-metropolises-decided-by-an-algorithm . Note this uses longer commutes than I’d ever consider.
(edit: removed stray period at end of URL)
Huh, I’m seeing a 404 when I click the link
What is often used today is “metropolitan area”. This is less arbitrary than city boundaries, but not as rigorous as your “typical 1 hour from given point”—it boils down to “people pay extra to live somewhat near that conceptual location”. I think the base ranking metric is not very useful, as well. Why do you care about “most populous” or “densest (population over area)”, regardless of definition of location?
Why do you care about “most populous” or “densest (population over area)”, regardless of definition of location?
1) Population density has an important impact on the milieu and opportunities that exist in a given location, but we can only make meaningful comparisons when metrics are standardized. 2) I’ve heard it said that in medieval times, many lords would collect a “bushel” of taxes from the peasants, where the bushel was measured in a large basket, but then when paying a “bushel” of taxes to their king, the bushel would be measured with a much smaller basket, thereby allowing the lord to keep a larger amount of grain for himself. When we don’t have consistent standards for metrics, similar failure modes can arise in (subtler) ways—hence why I find reliance on arbitrary definitions of location to be in bad taste
A: Reading about r/K reproductive strategies in humans, and slow/fast life histories.
B: It’s been a belief of mine, one I have yet to fully gather evidence on or build a compelling case for or against, that areas with people in poverty lead to increased crime, including in neighboring areas, which would imply that to increase public safety, we should support people in poverty to help them live a comfortable life.
In niches with high background risk, having many children, who each attempt to reproduce as quickly as possible, is a dominant strategy. In niches where life expectancy is long, strategies which invest heavily in a few children, and reproduce at later ages are dominant.
Fast life histories incentivize cheating and criminal behaviour, slow life histories incentivize cooperating and investing in a good reputation. Some effects mediating this may be genetic / cultural, but I suspect that there’s a lot of flexibility in each individual—if one grows up in an environment where background risk is high, one is likely to be more reckless, if one grows up in an environment with long life expectancy, the same person will likely be more cooperative and law-abiding
So what you’re saying is that by helping people, we might also improve their lives as a side effect? Awesome! :P
More seriously, on individual level, I agree; whatever fraction of one’s behavior is determined by their environment, by improving the environment we likely make the person’s behavior that much better.
But on a group level, the environment mostly consists of the individuals, which makes this strategy much more complicated, and which creates the concentrated dysfunction in the bad places. Suppose you want to take people out of the crime-heavy places: do you also move the criminals? Or only the selected nice people who have a hope of adapting to the new place? Because if you do the latter, you have increased the density of criminals at the old place. And if you do the former, their new neighbors are going to hate you.
I don’t know what is best; just saying that there seems to be a trade-off. If you leave the best people in the bad places, you waste their potential. But if you help the best people leave the bad places, there will be no one left with the desire and skills to improve those places a little.
On the national scale, this is called “brain drain”, and has some good and some bad effects; the good effects mostly consist of emigrants sending money home (reducing local poverty), and sometimes returning home and improving the local culture. I worry that on a smaller scale the good effects would be smaller: unlike a person moving to another part of the world with different culture and different language, an “emigrant” to the opposite side of the city would not feel a strong desire to return to their original place.
I wasn’t mainly thinking of helping people move from one environment to another when I wrote this, but generally improving the environments where people already are (by means of e.g. UBI). I share many of your concerns about moving people between environments, although I suspect that done properly, doing so could be more beneficial than harmful
What happens if we assume that a comfortable life and reproduction are inviolable privileges, and imagine a world where these are (by the magic of positing) guaranteed never to be violated for any human? This suggests that the number of humans would increase exponentially, without end, until eventually some point is hit where the energy and resources available in the universe, within the reach of mankind, are less than the resources needed to provide a comfortable life to every person. Therefore, there can exist no world where both reproduction and a comfortable life are guaranteed for all individuals, unless we happen to live in a world with infinite energy (negentropy) and resources.
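To make the exponential point concrete, here is a back-of-the-envelope calculation (the growth rate and the resource bound are arbitrary placeholders, not predictions): even a modest growth rate overtakes any fixed resource pool on a timescale that is tiny by cosmic standards.

```python
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80  # rough order-of-magnitude estimate
current_population = 8e9
annual_growth = 0.01                 # a modest 1% per year

# Years until exponential growth would exceed one person per atom
# in the observable universe:
years = (math.log(ATOMS_IN_OBSERVABLE_UNIVERSE / current_population)
         / math.log(1 + annual_growth))
# roughly 16,000 years: an eyeblink compared to cosmological timescales
```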
The explanation might not be perfect, and the important implications that I believe follow may not be clear from this, but this is a principle that I often find myself meditating upon.
(This was originally written as a response to the daily challenge for day 12 of Hammertime)
With vaccines on the horizon, it seems likely that we are nearing the end of lockdowns and the pandemic, but there is worry that a mutant strain might resist the vaccine, which could put off the end of the pandemic for a while longer.
It seems to me that numerous nations have had a much better response to the pandemic than any state in the US, and have been able to maintain a much better quality of life during the pandemic than the states, including New Zealand, Japan, and South Korea. For someone with the flexibility, moving to one of these countries would have seemed like a smart move when it seemed there was still a long time left in the pandemic; and would still seem like a good idea if one feels that the pandemic will not be over soon enough.
While every US state has as a whole failed to rein in the virus, I suspect that it may be possible and worthwhile to establish a town or village in some state—perhaps not CA or NY, or whichever state you would most want to live in, but in some state—where everybody consents to measures similar to those taken in nations that have gotten a grasp on the virus, and to take advantage of a relative freedom from the virus to live a better life. This may, if taken up by a collective, be a cheaper and more convenient (in some ways) alternative to moving to a country on the other side of the world.
In “Embedded Agency”, Scott and Abram write:
In theory, I don’t understand how to do optimization at all—other than methods that look like finding a bunch of stuff that I don’t understand, and seeing if it accomplishes my goal. But this is exactly the kind of thing that’s most prone to spinning up adversarial subsystems.
One form of optimization that comes to mind that is importantly different is to carefully consider a prototypical system, think about how the parts interplay, identify how the system can be improved, and create a new prototype that one can expect to be better. While practical application of this type of optimization will still often involve producing and testing multiple prototypes, it differs from back-propagation or stochastic hill-climbing because the new system will be better than the prototype it is based on for reasons that the optimizing agent actually understands.
I think capitalism straddles the line between these two modes: an inventor or well-functioning firm will optimize by making modifications that they actually understand, but the way the market optimizes products is how Scott and Abram describe it: you get a lot of stuff that you don’t attempt to understand deeply, and choose whichever one looks best. While I am generally a fan of capitalism, there are examples of “adversarial subsystems” that have been spun up as a result of markets—the slave trade and urban pollution (e.g. smog) come to mind.
I recently wrote about combining Grand Chess with Drop Chess, to make what I felt could become my favorite version of chess. Today, I just read this article, which argues that the queen’s unique status as a ‘power piece’ in Orthodox Chess—a piece that is stronger than any other piece on the board—is part of what makes Orthodox so iconic in the west, and that other major chesslikes similarly have a unique power piece (or pair of power pieces). According to this theory, Grand Chess’s trifecta of power pieces may give it less staying power than Orthodox Chess. I’m not convinced, since Shogi has 2 power pieces, which is only 1 less than Grand Chess, and twice as many as Orthodox, but it is food for thought.
My first reaction was to add an Amazon (bishop + rook + knight in one piece) as a power piece, but it’s not clear to me that there’s an elegant way of adding it (although an 11x11 board might just be the obvious solution), and it has already been pointed out that my ‘Ideal Chess’ already has a large amount of piece power, and the ability to create a sufficiently buffed King has already been called into question, before an Amazon is added, so I’m somewhat dubious of that naïve approach.
Recently I was looking at the list of richest people, and for the most part it makes sense to me, but one thing confuses me: why is Bernard Arnault so rich? It seems to me that one can’t get that rich simply off of fashion—you can get rich, but you can’t become the third richest person in the world off of fashion. It’s possible that I’m wrong, but I strongly suspect that there’s some part of the story that I haven’t heard yet- I suspect that one of his ventures is creating value in a way that goes beyond mere fashion, and I am curious to figure that out.
Most of his wealth comes from his stake in LVMH, a luxury real estate group.
Edit: Actually LVMH is involved in several luxury verticals, not just real estate.
But that doesn’t answer my question. What is LVMH doing that makes them so valuable? Wikipedia says they “specialize in luxury goods”, but that takes us right back to what I say in my original post. What value is LVMH creating, beyond just “luxury”? Again, I may be wrong, but it just doesn’t seem possible to become the third richest person by selling “luxury”—whether real estate, champagne, clothes, or jewelry.
Expensive real estate actually seems like a great way to become one of the richest people. Maybe we just have different priors.
Edit: apparently, the real estate isn’t where they make their money though...
I agree that real estate can make a person rich. But the path I see for that is only tangentially connected to luxury
For most sectors, I think there are tiers. Apple sells fewer devices at a slightly more expensive price point than e.g. Microsoft or Google. I think the highest tier, which only a few can afford, sold at the highest price point (where the high price is actually a selling point of your product), makes intuitive sense as a path to being one of the richest, and real estate, as an asset class, makes intuitive sense to apply this strategy to.
An infographic I found shows that LVMH’s revenues are driven by the following sections:
“Fashion and leather goods” is 38% of LVMH’s revenues
“Selective retailing” is 28%
“Perfumes and cosmetics” is 13%
“Wines and Spirits” is 10%
Between these, they account for ~90% of LVMH’s revenue, with watches and jewelry making up most of the remaining 10%. So perhaps I should be asking: what are LVMH’s fashion and retail sectors doing to make them so valuable?
I will also note that this is the percentage of revenues, not profits. I might want to find out the proportion each of these sectors contributes to profits (to ensure I don’t accidentally chase a high-revenue, low-profit wild goose), and I could probably find that out by looking at LVMH’s shareholder report.
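Summing the shares listed above (figures as quoted in this thread; the infographic's exact numbers may differ):

```python
revenue_share_percent = {
    "fashion and leather goods": 38,
    "selective retailing": 28,
    "perfumes and cosmetics": 13,
    "wines and spirits": 10,
}

# The four largest segments together: 89, i.e. roughly 90% of revenue
top_four = sum(revenue_share_percent.values())
```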
It’s a shame that in practice Aumann Agreement is expensive, but we should try to encourage Aumann-like updating whenever possible.
While, as I pointed out in my previous shortform, Aumann agreement is neither cheap nor free, it’s powerful that simply by repeatedly, mutually communicating the fact that they have differing beliefs, two people can (in theory) arrive at the same beliefs they would have reached if each had access to all the information the other has, even without ever learning the specific information the other person has.
While it’s not strictly necessary, Aumann’s proof of the Agreement Theorem assumes that A) both agents are both honest and rational, and importantly: B) both agents are aware that the other is honest and rational (and furthermore, that the other agent knows that they know they are rational, and so on). In other words, the rationality and honesty of each agent is presumed to be common knowledge between both agents.
In real life, I often have conversations with people (even sometimes on LW) who I’m not sure are honest, or rational, and who I’m not sure consider me to be honest and rational. Lack of common knowledge of honesty is a deal-breaker, and the lack of common knowledge of rationality, while not a deal-breaker, slows the (already cumbersome) Aumann process down quite a bit.
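The back-and-forth updating process described here can be simulated directly. Below is a toy sketch of the iterated-announcement dynamic (in the style of Geanakoplos and Polemarchakis's "we can't disagree forever" result); the state space, event, and partitions in the example are invented for illustration, and the stopping rule is a simplification of true common knowledge:

```python
from fractions import Fraction

def posterior(event, cell):
    # P(event | cell) under a uniform prior over states
    return Fraction(len(event & cell), len(cell))

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def split(partition, info):
    # Refine a partition by the yes/no question "is the state in info?"
    return [piece for c in partition for piece in (c & info, c - info) if piece]

def announce_until_agreement(states, event, part_a, part_b, true_state, max_rounds=50):
    """Agents alternately announce their posterior for `event`; each
    announcement lets the listener rule out every state at which the
    speaker would have announced a different value."""
    parts = [[set(c) for c in part_a], [set(c) for c in part_b]]
    history, speaker = [], 0
    for _ in range(max_rounds):
        q = posterior(event, cell_of(parts[speaker], true_state))
        history.append(q)
        info = {s for s in states
                if posterior(event, cell_of(parts[speaker], s)) == q}
        parts[1 - speaker] = split(parts[1 - speaker], info)
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # consecutive matching announcements: agreement reached
        speaker = 1 - speaker
    return history
```

For example, with states {1, 2, 3, 4} under a uniform prior, event {1, 4}, partitions [{1,2}, {3,4}] and [{1,2,3}, {4}], and true state 1, the announcements go 1/2, 1/3, 1/2, 1/2: the agents start with different posteriors and converge, even though neither ever reveals their raw observation.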
So, I invite you to ask: How can we build common knowledge of our rationality and honesty? I’ve already posted one shortform on this subject, but there’s more to be said.
I don’t think there’s any shortcut. We’ll have to first become rational and honest, and then demonstrate that we’re rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.
I don’t think there’s any shortcut.
Not sure I agree with you here. Well, I do agree that the only practical way I can think of to demonstrate honesty is to actually be honest, and gain a reputation for honesty. However, I do think there are ways to augment that process: right now, I can observe people being honest when I engage with their ideas, verify their statements myself, and update for the future that they seem honest; however, this is something that I generally have to do for myself, and if someone else comes along and engages with the same person, they have to verify the statements all over again for themselves; multiply this across hundreds or thousands of people, and you’re wasting a lot of time; and I can only build trust based on content that I have engaged with; even if a person has a large backlog of honest communication, if I don’t engage with that backlog, I will end up trusting that person less than they deserve. If there are people who I already know I can trust, it’s possible to use their assignment of trust to give trust to people who I otherwise wouldn’t be able to. There are ways to streamline that.
Regarding rationality: since rationality is not a single trait or skill, but rather many traits and skills, there is no single way to reliably signal the entirety of rationality; however, each individual trait and skill can be reliably signaled in a way that can facilitate the building of trust. As one example, if there existed a test that required an ability to robustly engage with the ideas communicated in Yudkowsky’s sequences, and I noticed that somebody had passed this test, I would be willing to update on that person’s statements more than if I didn’t know they were capable of passing it. (I anticipate that people reading this will object that tests generally aren’t reliable signals, and that people often forget what they are tested on. To the first objection, I have many thoughts on robust testing that I have yet to share, and haven’t seen written elsewhere to my knowledge, and my thoughts on this subject are too long to write in this margin. Regarding forgetting, spaced repetition is the obvious answer.)
Riemannian geometry belongs on the list of fundamental concepts that are taught and known far less than they should be in any competent society