Very cool! I sent a message through the form at the website. Curious to see where this goes :)
btw, the social media links at the bottom of the Contact Us page lead to the homepages of those sites, not to your specific pages.
Brandon Sanderson is also very good at this. As an example, he’s religious, but he’s very good at writing both other religions and characters who are atheists (Jasnah from The Stormlight Archive is an atheist and she’s written very well).
His most extreme consequentialist is also supposed to be a bad guy, but he does not strawman him; you actually get to hear a lot of his reasoning, and you can agree with him. My problem with him (in-world, not a problem with the writing; I think he’s a great character) was that he didn’t sufficiently consider the possibility he was wrong. But there are other consequentialists who aren’t portrayed in a bad light (like Jasnah from before), and many of the main characters struggle with these moral ideas.
Even the character he had the most excuses to write badly, a god called Ruin who is almost more a force of nature than a god (from Mistborn), isn’t written as a dull, obviously evil and wrong character, but is “steelmanned”, if you will. And he shows the many flaws of Ruin’s counterpart, Preservation, who doesn’t let things grow in order to preserve them, which often ends up being counterproductive.
One of the core style guides of LessWrong writing (of which there are few) is “Aim to explain, not persuade”. This post wouldn’t benefit from optimizing more for persuasion.
I think an event like you describe might be more powerful for the participants, but it’s also inherently less visible. There’s just something about going to a website and seeing a big red button; it can affect even people who stumble on the site for the first time. Perhaps it would be good to do both.
“Not taking unilaterally taking large (and irreversible) action”—Two “taking”.
I frequently fix typos, even in older comments, if I happen to come across them and notice. So it should only be added if it works really well to distinguish between substantial and insubstantial comments. But even then it would make me think twice about whether I should fix a typo, and I expect that to be annoying.
And the word count should be even less as other people’s posts (like Eliezer’s) are usually much longer than Hanson’s.
Doesn’t that include posts by other people too? Like Eliezer, for example?
This is different from a thought crime, right? I would make that distinction clear in the page description. Otherwise, if it’s not already an accepted term, I would consider changing it to avoid confusion.
I think you’re equivocating a bit between information and truth. For example, in the TV example, you would pay to get the information of what the ending is. It would make more sense to talk about the truth of the show’s ending if, say, there was a character you were very attached to whom you didn’t want to die, and they might die in the last episode. Would you like to know how the show ends even if you have to face the truth of this character’s death?
In other words, truth is more about what you believe than what information you have (though obviously you need information to get at the truth). You can have different beliefs with the same information, so the question is more about whether you’re willing to accept the truth if it costs you something.
This is great! I have a dyslexic friend who may benefit from this, so I’ll be sure to tell him.
I think it would be great to go even further and have this as a built-in feature of the website, so that (ideally) for every article you could click a “listen to this article” button and hear an auto-generated reading. This has far less friction and is more accessible to those who don’t know about the library or don’t use any podcast apps (like me). It can also scale better for multiple voices, if that’s a desired feature.
ETA: If it’s a built-in feature, then it can also be applied to comments and tags, which could also be very useful.
If creating an audio version of each post is too expensive, then perhaps it could be limited to posts above a certain karma score, like you’re doing right now, or generated only when someone first clicks the button.
Perhaps as a step towards that something can be added to the post page on the site that shows when there’s an audio version available and either links to Spotify or directly to the MP3 file?
I would like to see older posts get audio versions too; there are many good old posts, and I don’t see a reason to heavily prioritize new ones (also, more selfishly, I would love to have audio versions of my own posts that pass the threshold but were posted before this project began).
Question: If an article passes the karma threshold only a week after it’s posted, will it still be narrated? In other words, what’s the threshold to pass the threshold? :)
(Two small suggestions: I’d put the link to the audio version of this post before the first paragraph, and I’d add another link to the library at the end of the post.)
And use the profit to narrate even more LessWrong/EA posts! Double win :)
Any recommendations for Mechanism Design textbooks?
In Introduction to Mechanism Design Badger recommended A Toolbox for Economic Design (2009) and An Introduction to the Theory of Mechanism Design (2015).
In the preface to the latter, the author mentions a few other books too:
Designing Economic Mechanisms (2006) by Leonid Hurwicz and Stanley Reiter. “The focus of this text is on informational efficiency and privacy preservation in mechanisms. Incentive aspects play a much smaller role than they do in this book.”
Communication in Mechanism Design: A Differential Approach (2008) by Steven R. Williams “This book covers material similar to that of Hurwicz and Reiter. The emphasis that both books place on the size of the message space in a mechanism differentiates them from more modern treatments of mechanism design.”
A Toolbox for Economic Design (2009, also recommended by Badger) by Dimitrios Diamantaras, with Emina I. Cardamone, Karen A. Campbell, Scott Deacle, and Lisa A. Delgado. “This book is closest to mine among those listed here, but it covers more than I do, such as the theory of Nash implementation, the theory of matching markets, and empirical evidence on mechanisms. Sometimes I wish I had written this book. My own book is more narrowly focused, perhaps goes somewhat into greater depth, and places a greater emphasis on the relation between game theoretic foundations and mechanism design.”
Mechanism Design, A Linear Programming Approach (2011) by Rakesh Vohra. “This is a superb book, demonstrating how large parts of the theory of mechanism design can be developed as an application of results from linear programming. Vohra puts less emphasis than I do on the game theoretic aspects of mechanism design.”
He also wrote this about prerequisites:
This book is meant for advanced undergraduate and graduate students of economics who have a good understanding of game theory. Fudenberg and Tirole (1993) contains more than the reader needs for this book. I shall also assume a basic knowledge of real analysis that can, for example, be acquired from Rudin (1976).
Possibly! I would like to see an analysis that models elites somehow, and what it would give us. I tried to do a quick search for articles that tackled the question of elites and selectorate theory and found ‘Elites, Voters, and Democracies at War’.
my instinct is to interpret winning coalition as everyone supporting those in power, including those who have very little power.
That’s exactly what the winning coalition is supposed to mean. It’s the base of supporters the leader chooses to satisfy the minimum coalition size requirement in their state. They’re not supposed to be elites.
In autocracies they often are elites, as there’s a lot of inequality and only a few get big private rewards. Also, if you’re rich and not part of the coalition, your wealth is likely to be taken away from you.
In democracies the coalition is too large for most of its members to be elites, pretty much by definition of “elite”, and someone can also become an elite without supporting the leadership.
So the selectorate is indeed the potential coalition members, even though that doesn’t mean they’re potential elites (at least not with as much probability). In autocracies many have a very small chance of getting into the coalition, in monarchies few have a big chance, and in democracies many have a big chance.
So both P(coalition member | elite) and P(elite | coalition member) are lower in democracies than in autocracies.
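To make those conditionals concrete, here’s a quick sketch with made-up numbers (the population sizes and elite counts are purely illustrative, not from the book):

```python
# Made-up illustrative numbers comparing an autocracy and a democracy.
def conditionals(elites, coalition_size, elites_in_coalition):
    """Return (P(coalition member | elite), P(elite | coalition member))."""
    return elites_in_coalition / elites, elites_in_coalition / coalition_size

# Autocracy: 50 elites, coalition of 100, of whom 40 are elites.
autocracy = conditionals(elites=50, coalition_size=100, elites_in_coalition=40)
# Democracy: 50 elites, coalition of 50,000 voters, of whom 25 are elites.
democracy = conditionals(elites=50, coalition_size=50_000, elites_in_coalition=25)

print(autocracy)  # (0.8, 0.4)
print(democracy)  # (0.5, 0.0005)
```

Both numbers come out lower for the democracy, as claimed.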
What you might find missing in the theory is some representation of elites, as the theory treats all coalition members as equally important (except in the case of correlated voting, but that probably doesn’t cover it), when in reality that’s clearly not the case: a campaign donor is vastly more important than any single voter in a democracy. It would be interesting to see what happens if you put the selectorate on a distribution for the importance of their support.
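A minimal sketch of that last idea (the distribution choice, selectorate size, and winning threshold are all my own assumptions, not part of the theory): draw a support weight for each selectorate member from a heavy-tailed distribution, then see how few members a leader needs to assemble half the total support.

```python
import random

random.seed(0)  # reproducible toy run

N = 10_000  # made-up selectorate size
# Heavy-tailed "importance" weights: a few donor-like members dwarf ordinary voters.
weights = [random.lognormvariate(0, 2) for _ in range(N)]
threshold = 0.5 * sum(weights)  # suppose winning requires half the total support

# Greedy leader: recruit the most important supporters first.
members, support = 0, 0.0
for w in sorted(weights, reverse=True):
    support += w
    members += 1
    if support >= threshold:
        break

print(f"{members} of {N} members are enough for half the total support")
```

With equal weights you’d need 5,000 of the 10,000 members; with a heavy tail the “winning coalition” by support is far smaller, which hints at how donor-like members change the picture.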
I don’t think this makes the word “keys” inadequate, but the word they use in The Dictator’s Handbook to describe the members of the coalition is “essentials”.
That’s exactly the point. Ambiguity means moving away from precision, moving away from truth. It would mean sacrificing truth for emotional impact. That’s the complete antithesis of this post.
It’s not that ambiguity can never be right or bring you closer to truth (say, if you’re precise but wrong), but in this case it clearly doesn’t.
Thanks! I tried to research that claim and find a list of leaders who ruled more than one country at a time, but missed that key term so didn’t find anything. I’ll have to read a bit more deeply to see whether he was still somehow special among these or not at all special, and then I’ll edit that part accordingly.
Edit: Changed it to “Leopold II is a prominent example of a person who ruled two nations simultaneously.” I figured the specifics weren’t important, but I still wanted to point to that reference class and link that Wikipedia page.
Thanks a lot! I started writing my Selectorate Theory post 9 months ago and got to 3K words on it, but it was difficult to write and, though I still really wanted it done and published, I lost motivation to write it. So this gave me the push I needed, and now that I’m on the other side I can be even more sure it wouldn’t have been written without it. Even if I had come back to it, I likely would have left it again after another 3K words or so, which would still leave me at less than half the 14K total it ended up being.
Truth is the penultimate value
Really liked that line, even though I’m not sure it can’t be the ultimate value.
Secondly, to preserve the flow of writing, it’s important to use words/phrases that don’t need links to be understood. For example, the way you use ‘grok’ or ‘information asymmetry’ breaks the flow. It’s alright to use those words, and it’s alright to link them, but it’s important that you explain the words as you use them so that the reader doesn’t break focus.
I disagree with that. I really like this style and I’m happy it’s popular on LessWrong. It means that those who know what a term means can read through without interruption, and those who don’t are pointed to somewhere they can actually read and learn about it deeply, instead of it just being explained in one line. It also makes it much easier to build on ideas; some posts here are jargon-heavy (in a good way), and it would be very difficult to write them if everyone had to explain every term from scratch. To be clear, it’s not that I have a problem with people explaining terms (though it can get excessive; you see that in journalism a lot); in some cases it’s good, and it’s in large part a matter of style.
What formatting do you want? Generally if you want formatting options you select the text and a menu appears.