LessWrong developer, rationalist since the Overcoming Bias days. Connoisseur of jargon.
This is indeed one of the things the Frontpage vs Personal Blog distinction is meant to handle; people are attracted to Criticize The Outgroup and Interpersonal Conflict posts, in ways they wouldn’t endorse given a bit of distance and which don’t seem to be reliably handled through the karma system.
Note that even things where the scores aren’t affected directly may still change score, because the vote-strength of the users who voted on them has changed. The karma-change notifier (the star icon in the top-right corner of the page) won’t notify you of these changes, as it works by looking at the recent votes themselves rather than at computed scores.
There is a joke about programmers, that I picked up long ago, I don’t remember where, that says: A good programmer will do hours of work to automate away minutes of drudgery. Some time last month, that joke came into my head, and I thought: yes of course, a programmer should do that, since most of the hours spent automating are building capital, not necessarily in direct drudgery-prevention but in learning how to automate in this domain.
I did not think of this post, when I had that thought. But I also don’t think I would’ve noticed, if that joke had crossed my mind two years ago. This, I think, is what a good concept-crystallization feels like: an application arises, and it simply feels like common sense, as you have forgotten that there was ever a version of you which would not have noticed that.
I think this points to a mismatch between Benquo and Baudrillard, but not to a problem with the version of the concept Benquo uses. Given how successful the (modified, slightly different) concept has been, I consider this more of a problem with Baudrillard’s book than a problem with Benquo’s post.
I continue to think this post is important, for basically the same reasons as I did when I curated it. I think for many conversations, having the affordance and vocabulary to talk about frames makes the difference between them going well and them going poorly.
I think that, among those who’ve done serious thought about how intellectual progress happens, it was pretty well known that in some domains a lot of research is happening on forums, and that forum participation as a research strategy can work. But in the broader world, most people treat forums as more like social spaces, and have a model of how research works that puts it in distant, inaccessible institutional settings. Many people think research means papers in prestigious journals, with no model of where those papers come from. I think it’s worth making common knowledge that getting involved in research can be as simple as tweaking your forum subscriptions.
I observe: There are techniques floating around the rationality community, with models attached, where the techniques seem anecdotally effective but the descriptions seem like crazy woo. This post has a model that predicts the same techniques will work, but the model is much more reasonable (it isn’t grounded out in axon-connections, but in principle it could be). I want to resolve this tension in this post’s favor. In fact I want that enough to distrust my own judgment on the post. But it does look probably true, in the way that models of mind can ever be true (ie if you squint hard enough).
This is not the clearest or the best explanation of simulacrum levels on LessWrong, but it is the first. The later posts on the subject (Simulacra and Subjectivity, Negative Feedback and Simulacra, Simulacra Levels and Their Interactions) are causally downstream of it, and are some of the most important posts on LessWrong. However, those posts were written in 2020, so I can’t vote for them in the 2019 review.
I have applied the Simulacrum Levels concept often. I made spaced-repetition cards based on them. Some questions are easy to notice and ask, in simulacrum level terms, and impossible to ask otherwise: What things drive it higher or lower? What level is my conversational partner at? Can I make things more object-level? These questions were hard to notice before, but with the concept in hand I’ve been able to ask and answer them in contexts where I otherwise wouldn’t have.
For someone who reads the Best of 2019 Review books, I think failing to mention the simulacrum levels would be a grave disservice, both because they’re a really key concept for understanding the conversations that happened on LessWrong, and because they matter for understanding the world in general.
So I’m voting for inclusion. It’s not the best of the explanations, but it’s good enough, and it’s the one we’ve got.
I think there’s one more piece to the story of how the Politics Is the Mindkiller post morphed into the distorted four-words version of itself, which is: sometimes someone wants to talk about politics but they’re clearly not ready, rationality-wise. Telling them “politics is the mindkiller” (in general, across all people) is more polite than saying “you-in-particular are not rational enough to talk about politics”. Unfortunately, I suspect this sort of doublespeak reduced the amount of attention people paid to other peoples’ skill levels, and contributed to some failures of gatekeeping.
For reducing CO2 emissions, one person working competently on solar energy R&D has thousands to millions of times more impact than someone taking normal household steps as an individual. To the extent that CO2-related advocacy matters at all, most of the impact probably routes through talent and funding going to related research. The reason for this is that solar power (and electric vehicles) are currently at inflection points, where they are in the process of taking over, but the speed at which they do so is still in doubt.
I think the same logic now applies to veganism vs meat-substitute R&D. Consider the Impossible Burger in particular. Nutritionally, it seems to be on par with ground beef; flavor-wise it’s pretty comparable; price-wise it’s recently appeared in my local supermarket at about 1.5x the price. There are a half dozen other meat-substitute brands at similar points. Extrapolating a few years, it will soon be competitive on its own terms, even without the animal-welfare angle; extrapolating twenty years, I expect vegan meat-imitation products will be better than meat on every axis, and meat will be a specialty product for luddites and people with dietary restrictions. If this is true, then interventions which speed up the timeline of that change are enormously high leverage.
I think this might be a general pattern, whenever we find a technology and a social movement aimed at the same goal. Are there more instances?
If you’re going to do this, I would suggest getting a few DEXA scans to make sure you aren’t losing muscle mass. Also, you may need to replenish salt during the fast, and your salt needs may change with the weather, so watch out if heat or exercise makes you sweat.
I call this subcategory of Berkson’s paradox issues the conservation of virtue effect: when there is a filter somewhere for something like a sum of good qualities, then all good qualities are negatively correlated. Another major subcategory is the “if you observe something which has multiple possible explanations, those explanations are negatively correlated” effect. I don’t think these two subtypes cover all the instances, but they do seem to cover a large fraction, and they aren’t too difficult to internalize.
The C-style “oops I used an object after freeing it and now anyone can execute arbitrary code” class of vulnerabilities is confined to a fairly narrow set of programming languages, most notably C and C++, which unfortunately happen to be popular and to have some interoperability advantages. One of the key requirements of a security-amenable language is that it can never tempt its users into writing parts of their project in C, which happens if the language is too slow (eg Python) or can’t otherwise interoperate with important systems (most languages, unfortunately). Some programming language choices create performance and sysadmin problems that are treated by end-users as dealbreakers; in fact this was responsible for most of C’s historical success.
A lot of issues crop up at the interfaces between programming languages, which tend not to fall neatly into one of the languages’ scope. SQL injection is a classic example. A language can’t do a whole lot in practice to protect against that, but a library can: by designing the APIs so that doing things the safe way is easy, and doing things the dangerous way requires calling functions with “dangerous” in the name (like React’s dangerouslySetInnerHTML).
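The design principle can be sketched like this (a toy illustration with hypothetical function names, not any real project’s API): the safe path binds parameters through the driver, and the raw-SQL escape hatch carries a scary name.

```python
# Sketch of a library API where the safe path is the default and the
# unsafe path is loudly named, mirroring React's dangerouslySetInnerHTML.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # Safe: the driver binds the parameter; it is never spliced into SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

def dangerously_run_raw_sql(sql: str):
    # Unsafe escape hatch: the scary name forces callers to notice.
    return conn.execute(sql).fetchall()

print(find_user("alice"))         # the normal case works
print(find_user("' OR '1'='1"))   # an injection payload stays inert: no match
```

A classic injection payload passed to `find_user` is just an odd user name that matches nothing, because it never reaches the SQL parser as code.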
If I import third-party libraries from the internet, there is a lot of opportunity for mischief (both by those third parties, and by the people who might hack them). This is probably the nodejs ecosystem’s weakest link at the moment. This problem is largely social; the best defense is a trustworthy group of curators providing a core set of libraries that you rarely need to step outside, plus an expectation that you depend on a small number of large, well-maintained libraries which avoid pulling in indirect dependencies (as opposed to a hundred different left-pad style tiny libraries from a hundred different authors).
Reducing security vulnerability incidence to zero is possible in some domains, but a programming language alone can’t do it; I can write if(password=="backdoor") acceptLogin() in any language.
I think there’s a fair amount of room left for incremental improvement, but in practice I think it looks less like “move everyone to Haskell and Coq” and more like “design a good core-library crypto API for X” and “reform common practices around npm”.
It seems obviously correct, just too specific; a more general policy like “be extra careful before signing up for anything with a recurring fee” would prevent this mistake, and also many others.
Not highly confident. Maybe it was only the audio?
This may have failed for some subset of attendees, but it played successfully for me, and I remember it as a highlight of the solstice.
How hard is it to set up the sysadmin side of things? Deploying a production server behind nginx with a non-SQLite DB and pointing it at your own CDN.
What kind of machine (very roughly speaking) would you need to handle top volumes of ~20,000 visitors/hr (say, max 3,000/min) without the core functionality breaking, and ~500 visitors/hr (say, max 100/min) with top-notch user experience (assuming optimal DB, distro, and reverse-proxy choices)?
Logged-out users visiting the front page and a few post pages (ie, getting Slashdotted) will all be served from the page cache, so you’re pretty much limited only by bandwidth. LessWrong itself runs on a dynamically scaling pool of t2.small instances (though one would probably be enough) and a MongoDB Atlas M30 cluster.
How hard is it to modify the theme (e.g. fonts, color scheme, icons)?
If you’re familiar with CSS/JSS, this should be pretty straightforward.
How hard is it to integrate your own third-party services or get rid of them? Specifically, adding your own app credentials for signups, and removing Google Analytics, Intercom, and all other third-party integrations that would make Richard Stallman cry and that aren’t critical to the commenting experience.
OAuth, Google Analytics, and Intercom are all already used, so integrating them is just a matter of getting an API key and putting it into the right config setting.
Are the makers of LW explicitly fine and open to it being used by other people or is it open source mainly for the sake of community debugging?
We are explicitly fine with this. We just haven’t gotten around to optimizing it much for this use case.
What are particularly difficult/annoying/deal-breaking parts of the setup that were unexpected?
Site search is powered by Algolia, which is kind of expensive and not especially good.
Including this in the 2019 Review is a bit odd, since most of the content is in the answers rather than the question, but I like how those answers set a research agenda that can be followed up on.
In the great puzzle of what’s going wrong with civilization, I think this is a key piece. And it’s a piece at risk of slipping through the cracks as our collective attention slides off it; awareness of approval-process distortions tends to do that.
A naive take on this is that having a higher average level of jargon usage makes the incomprehensibility bluff easier to pull off against people who don’t know the jargon, so you might think it reduces the legibility of peoples’ knowledge and skill levels overall. But I don’t think it works out this way in practice. My experience is that on subjects where I have medium knowledge (not an expert, but more informed than most laypeople), when I come across laypeople pretending to be experts, they often give themselves away by using a jargon term incorrectly. I also find that glossaries are a good entry point into a subject, and avoiding jargon too much would make the glossaries less useful for this purpose.
I am a bit worried about people invisibly bouncing off our community because of the jargon, but I think the jargon is important enough that I’d rather solve it by making the jargon better (and making its intellectual infrastructure better) rather than reduce the amount of it.