Oops, that’s a mistake. Fixed now. Thanks.
Curated (with multiple endorsements from the mod team). As noted in my previous comment, this post includes lots of links and references to further resources, but it also motivates the need for lit reviews well. It's not just a "how-to" guide, but a "why" guide as well.

It's a timely post too. Go back a few years, and lukeprog was the champion/symbol of scholarship on LessWrong. Unfortunately for us, he's not able to contribute to LessWrong as much anymore, which makes it great that others are taking up the banner and reminding us of the need to build on existing knowledge (and helping people know how to do so).

I say this post is timely because making LessWrong more scholarly continues to be a major focus of my work on the LessWrong team. Scholarship and lit reviews are actually a major goal of the new Tagging/Wiki system, whose larger goal still is increasing LessWrong's intellectual output. The hope is to make it much easier for writers on LessWrong to discover and build upon LessWrong's decade of previous work. "Shoulders of Giants", etc.
Obviously, the overwhelming supermajority of the world's knowledge isn't in LessWrong's posts (though the very best insights might be), and our thinkers absolutely need the skills (and virtue) to mine the troves of knowledge outside our shores. Hence the value of this post.

[At the same time, I do think we shouldn't let a requirement of lit review become too high a barrier to contributing on LessWrong. There's a lot of value in thinking things through fresh for yourself, and sometimes just getting random uninformed thoughts published stimulates discussion and provides motivation to then go for a thorough survey of the literature.]
All in all, kudos. (And thanks for the recommendation of The Intellectual Foundation of Information Organization; that was a good one.)
Welcome!

The dictionary definition of "persuade" misses some of the connotations. Persuading someone often means "get them to agree with you," not "jointly arrive at what's true, which includes the possibility that others can point out your mistakes and you change your mind." Explaining usually means something more like "lay out your reasoning and facts, which might lead someone to agree with you if they think your reasoning is good."

The key difference might be something like: "persuade" is written to get the reader to accept what is written regardless of whether it's true, while "explain" wants the reader to accept the conclusion only if it's true. It's the idea of symmetric/asymmetric weapons in this post.

Sorry if that's still a bit unclear; I hope it helps.
Many thanks for writing this. It's great overall, and I really like the large number of links and references to other resources too (and I would have said that even if that weren't actually the whole topic :P). I'm so pleased whenever LW gets another piece on how to study/research. I gave this a strong tag relevance vote on the Scholarship & Learning wikitag.
I believe that military stuff, including and maybe especially culture, is a long-term interest of LW user Lionhearted. You could message him, and also look at his writing on mental toughness within the Strategic Review series.
Rationalist culture and life extension might make sense. We have a Cryonics tag already. If we can round up a few posts on either of those topics, I'd create these.
To remove a tag, just downvote it (it might look like it's gone to −2, which is fine; upon refresh it will be gone).

Yeah, some of those definitely seem like good tags. I've had the idea for Coordination/Cooperation, Group Rationality, and Communication.

For the others, I think we'd want to ensure there isn't too much overlap with existing things. There's a Programming tag; does that do the thing for Software? And I'm curious about what you see going into Tools vs. the existing Techniques (which might also cover "soft skills").

It's good to see all these suggestions, though. Even if we don't make a tag because an existing one covers it, soon we might set up "redirects" from terms toward things that are almost the same, or at least the closest match.
It’s reasonable to mention “there’s this comment which is relevant to this topic...”
These are really good.
Embedded Agency is a clear win.
Mechanism Design/Aligning Incentives seems good too. I agree there are choices about the name, and I guess about scope too. Do you mean it to be material about how to align incentives, but excluding related examples where incentives failed to be aligned? Would Boeing 737 MAX MCAS as an agent corrigibility failure be part of it?

"Resource-Bounded Epistemics" sounds like a cool category. So does "Interdisciplinary Analogies", or should it be "Interdisciplinary Applications"? Anyhow, these are great. More are welcome.

Fake Frameworks, yeah, hmm. We might consider "only authors can apply these tags"; I'm not sure. That might make sense for general "epistemic state" tags.
These are great! I'll make these soon. Those posts definitely justify doing so in my mind. Re: Wei Dai's comment, I think it's reasonable to mention it in the tag description text (those will soon be everyone-editable wiki entries, which should include extra info relevant to the tag/wiki concept, including "notable comments").
Yeah, I agree with most of that.

I do think there will be an appreciable number of tags (even if they're a minority) that are strictly subsets of, say, AI Alignment, like everything under Value Learning or Embedded Agency, and maybe it's worth having those automatically update.

I do feel that tag descriptions linking to other tags are extremely important for the system to work, and that will help a lot here.
Interesting. I agree we want more specific tags for that post too, though "Problem-Solving Tactics" actually feels pretty broad as well; a good definition/description might help give it shape. I'll think about it; I'm not sure if you had one in mind.

Another thing that helps is having other posts in mind for the tag too.
I think I understand the motivation behind that. They're too easy to create and end up applying to too many different things? Does that seem right?

A challenge that's stark in my mind is how to avoid creating too many heavily overlapping tags, which seems easy to do with higher-level tags.
To untag a post, just downvote its tag relevance (either in the hover-over or on the tag page).

Yeah, I agree we need a better solution for showing currently available tags. In the meantime, you can look at www.lesswrong.com/tags or www.lesswrong.com/tags/all

A heuristic the team has discussed is that tags should have three good posts by at least two different authors. I do want some kind of wellbeing category, and a separate health one makes sense too. Anatomy, if it isn't a topic discussed by others, may or may not make sense; I'm not sure. If it's to help people find your other writing (the main goal of tagging), you could create a sequence or two to link them.
I'm inclined to treat COVID-19 posts as an exception and not tag them with anything except Coronavirus, unless they're also applicable more broadly and timelessly.
Nice, will definitely look at these.
These papers on viral load probably help inform the answer. It was flagged to me that Ct might not have a straightforward interpretation, but I haven't looked into it, so I'm posting these as resources.

https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(20)30113-4/fulltext?fbclid=IwAR3crOZxhVP1eVPMcO_wujJBxHFAjp2fj4_jNj30ld_nVcKTqtcT1IjXozI

https://www.nejm.org/doi/full/10.1056/NEJMc2001737
This came up on my Facebook feed. I have only glanced at it briefly, but it's probably of interest here:

Belgian-Dutch Study: Why in times of COVID-19 you should not walk/run/bike close to each other.
Embedded Agency (full-text version)
An Untrollable Mathematician Illustrated
AlphaGo Zero and the Foom Debate
The unexpected difficulty of comparing AlphaStar to humans
Outperforming the human Atari benchmark
How does OpenAI’s language model affect our AI timeline estimates?
Jeff Hawkins on neuromorphic AGI within 20 years
What failure looks like
Soft takeoff can still lead to decisive strategic advantage
My current framework for thinking about AGI timelines
2019 AI Alignment Literature Review and Charity Comparison
What I’ll be doing at MIRI
Offer of collaboration and/or mentorship
Where are people thinking and talking about global coordination for AI safety?