Adam Zerner
I learned about S-curves recently. It was in the context of bike networks. As you add bike infrastructure, at first it doesn’t lead to much adoption because the infrastructure isn’t good enough to get people to actually use it. Then you pass some threshold and you get lots of adoption. Finally, you hit a saturation point where improvements don’t move the needle much because things are already good.
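For reference, the textbook way to picture this is the logistic curve (my own gloss, not something from the bike-network material):

$$\text{adoption}(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

where $L$ is the saturation level, $k$ is how steep the growth phase is, and $t_0$ is the inflection point. Below $t_0$ you're in the slow introduction phase, around $t_0$ small improvements produce big gains in adoption, and well past $t_0$ further improvements barely move the needle.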
I think this is a really cool concept. I wish I had known about it when I wrote Beware unfinished bridges.
I feel like there are a lot of situations where people try to make progress on the “introduction phase” of the S-curve without having a plan for actually reaching the growth phase. It happens with bike infrastructure. If a startup founder working on a new social network did this, it’d likely be fatal. I’m struggling to come up with good examples of this though.
Also, I wonder if there’s already a name for this failure mode of working on the introduction phase without a plan for reaching the growth phase. It seems worth naming.
Default arguments in casual speech
Haha, yup. I have a Shoulder Justis now that frequently reminds me to disambiguate words like “this” and “that”, which I’m grateful for.
Yeah, that seems plausible. I have no issues with that sort of recommendation. I think cover-to-cover recommendations also happen not infrequently, though.
I don’t think social obligations play much if any role in my pet peeve here. If someone recommends a book to me without considering the large investment of time I’d have to make to read it, but doesn’t apply any social pressure, I’d still find that to be frustrating.
I guess it’s kinda like if someone recommends a certain sandwich without factoring in the cost. Maybe the sandwich is really good, but if it’s $1,000, it isn’t worth it. And if it’s moderately good but costs $25, it also isn’t worth it. More generally, whether something is worthwhile depends on both the costs and the benefits, and I think that in making recommendations one should consider them both.
My claim isn’t that they capture all the content or that they are a perfect replacement. My (implied) claim is that they are a good 80-20 option.
A pet peeve of mine is when people recommend books (or media, or other things) without considering how large of an investment they are to read. Books usually take 10 hours or so to read. If you’re going to go slow and really dig into it, it’s probably more like 20+ hours. For the claim “I think you should read this book” to be justified, the expected benefit should outweigh that relatively large investment of time.
Actually, no, the bar is higher than that. There are middle-ground options other than reading the book. You can read a summary or a review, listen to an interview with the author about the book, or find blog posts on the same topic. So to recommend reading the book in full, reading it has to be better than the best of those middle-ground options, or still worthwhile after having completed one of them.
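To spell the bar out a bit (this is just my attempt to formalize the point, nothing rigorous): a full-read recommendation only makes sense when

$$\mathbb{E}[\text{benefit}_{\text{full read}}] - \text{cost}_{\text{full read}} \;>\; \max_{m \in \{\text{summary, review, interview, blog posts}\}} \left(\mathbb{E}[\text{benefit}_m] - \text{cost}_m\right)$$

or when the full read still clears that bar even after one of the middle-ground options has already been consumed.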
To be charitable, maybe people frequently aren’t being literal when they recommend books. Maybe they’re not actually saying “I think it would be worth your time to read this book in full, and that you should prioritize doing so some time in the next few months”. Maybe they are just saying they thought the book was solid.
Now, every program believes they give students a chance to practice because they have them work with real clients, during what is even called “practicums”. But seeing clients does not count as practice, at least not according to the huge body of research in the area of skill development.
According to the science, seeing clients would be categorized, not as practice, but as “performance”. In order for something to be considered practice, it needs to be focused on one skill at a time. And when you’re actually seeing a client, you’re having to use a dozen or more skills at once, in real time, without a chance to slow down and focus on one skill long enough to improve upon it.
The research on expertise is clear: performance, where you’re doing the whole thing at once, does not lead to improvement in one’s abilities. That’s why therapists, on average, don’t improve in their outcomes with more years of experience.
The truth is, having the chance to see more clients (gain clinical experience) does not make us better therapists. What does? Something called deliberate practice.
-- Dr. Tori Olds, Picking a Graduate Program | How to Become a Therapist—Part 4 of 6
I was thinking about what I mean when I say that something is “wrong” in a moral sense. It’s frustrating and a little embarrassing that I don’t immediately have a clear answer to this.
My first thought was that I’m referring to doing something that is socially suboptimal in a utilitarian sense. Something you wouldn’t want to do from behind a veil of ignorance.
But I don’t think that fully captures it. Suppose you catch a cold, go to a coffee shop when you’re pre-symptomatic, and infect someone. I wouldn’t consider that to be wrong. It was unintentional. So I think intent matters. But it doesn’t have to be fully intentional either. Negligence can still be wrong.
So is it “impact + intent”, then? No, I don’t think so. I just bought a $5.25 coffee. I could have donated that money and fed however many starving families. From behind a veil of ignorance, I wouldn’t endorse the purchase. And yet I wouldn’t call it “wrong”.
This thought process has highlighted for me that I’m not quite sure where to draw the boundaries. And I think this is why people talk about “gesturing”. Like, “I’m trying to gesture at this idea”. I’m at a place where I can gesture at what I mean by “wrongness”. I can say that it is in this general area of thingspace, but can’t be more precise. The less precise your boundaries/clouds, the more of a gesture it is, I suppose. I’d like to see a (canonical) post on the topic of gesturing.
In these situations I suppose there’s probably wisdom in replacing the symbol with the substance. Ditching the label, talking directly about the properties, talking less about the central node.
Many people (including me) have opinions on current US president Donald Trump, none of which are relevant here because, as is well-known to LessWrong, politics is the mind-killer.
I think that “none of which are relevant” is too strong a statement and is somewhat of a misconception. From the linked post:
If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.
So one question is about how ok it is to use examples from the domain of contemporary politics. I think it’s pretty widely agreed upon on LessWrong that you should aim to avoid doing so.
But another question is whether it is ok to discuss contemporary politics at all. I think opinions differ here; some people think it’s more acceptable than others do. Most opinions probably hover around something like “it’s ok sometimes, but there are downsides to doing so, so approach with caution”. I took a glance at the FAQ and didn’t see any discussion of or guidance on how to approach the topic.
Related: 0 and 1 Are Not Probabilities
I’ve been doing Quantified Intuitions’ Estimation Game every month. I really enjoy it. A big thing I’ve learned from it is the instinct to think in terms of orders of magnitude.
Well, not necessarily orders of magnitude, but something similar. For example, a friend just asked me about building a little web app calculator to provide better handicaps in golf scrambles. In the past I’d get a little overwhelmed thinking about how much time such a project would take and default to saying no. But this time I noticed myself approaching it differently.
Will it take minutes? Eh, probably not. Hours? Possibly, but seems a little optimistic. Days? Yeah, seems about right. Weeks? Eh, possibly, but even with the planning fallacy, I’d be surprised. Months? No, it won’t take that long. Years? No way.
With this approach I can figure out the right ballpark very quickly. It’s helpful.
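As a rough sketch of what that bracketing looks like when spelled out (my own illustration, not anything from the Estimation Game; taking the geometric mean of the bracketing scales is an extra assumption I’m adding):

```python
import math

# Rough time scales, expressed in hours.
SCALES_IN_HOURS = {
    "minutes": 1 / 60,
    "hours": 1,
    "days": 24,
    "weeks": 24 * 7,
    "months": 24 * 30,
    "years": 24 * 365,
}

def ballpark(too_small: str, too_big: str) -> float:
    """Geometric mean of the largest scale that feels too small
    and the smallest scale that feels too big, in hours."""
    return math.sqrt(SCALES_IN_HOURS[too_small] * SCALES_IN_HOURS[too_big])

# "Hours seems optimistic, weeks would surprise me" lands at roughly 13 hours,
# which is consistent with the "days seems about right" ballpark.
print(ballpark("hours", "weeks"))
```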
Many years after having read it, I’m finding that the “Perils of Interacting With Acquaintances” section in The Great Perils of Social Interaction has really stuck with me. It is probably one of the more useful pieces of practical advice I’ve come across in my life. I think it’s illustrated really well in this barber story:
But that assumes that you can only be normal around someone you know well, which is not true. I started using a new barber last year, and I was pleasantly surprised when instead of making small talk or asking me questions about my life, he just started talking to me like I was his friend or involving me in his conversations with the other barber. By doing so, he spared both of us the massive inauthenticity of a typical barber-customer relationship and I actually enjoy going there now.
I make it a point to “be normal” around people and it’s become something of a habit. One I’m glad that I’ve formed.
I get the sense that autism is particularly unclear, but I haven’t looked closely enough at other conditions to be confident in that.
Something I’ve always wondered about is what I’ll call sub-threshold successes. Some examples:
A stand-up comedian is performing. Their jokes are funny enough to make you smile, but not funny enough to cross the threshold of getting you to laugh. The result is that the comedian bombs.
Posts or comments on an internet forum are appreciated but not appreciated enough to get people to upvote.
A restaurant or product is good, but not good enough to motivate people to leave ratings or write reviews.
It feels to me like there is an inefficiency occurring in these sorts of situations. To get an accurate view of how successful something is you’d want to incorporate all of the data, not just data that passes whatever (positive or negative) threshold is in play. But I think the inefficiencies are usually not easy to improve on.
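Here’s a toy illustration of the information loss (the numbers and the 0-to-1 “appreciation” scale are made up for the sake of the sketch):

```python
# Per-person appreciation of two comedy sets, on a made-up 0-1 scale.
smiles_only = [0.6, 0.6, 0.6, 0.6, 0.6]     # everyone mildly amused, nobody laughs
one_big_laugh = [0.95, 0.1, 0.1, 0.1, 0.1]  # one person laughs, the rest are bored

THRESHOLD = 0.9  # only reactions this strong register (a laugh, an upvote, a review)

def visible_score(reactions):
    """What the comedian / forum / restaurant actually observes."""
    return sum(r >= THRESHOLD for r in reactions)

def average_appreciation(reactions):
    """What you'd want to measure if you could see all the data."""
    return sum(reactions) / len(reactions)

print(visible_score(smiles_only), average_appreciation(smiles_only))      # 0 ~0.6
print(visible_score(one_big_laugh), average_appreciation(one_big_laugh))  # 1 ~0.27

# The thresholded feedback ranks the two sets in the opposite order from the
# underlying appreciation, which is the inefficiency described above.
```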
[Question] What is autism?
In A Sketch of Good Communication—or really, in the Share Models, Not Beliefs sequence, which A Sketch of Good Communication is part of—the author proposes that, hm, I’m not sure exactly how to phrase it.
I think the author (Ben Pace) is proposing that in some contexts, it is good to spend a lot of effort building up and improving your models of things. And that in those contexts, if you just adopt the belief of others without improving your model, well, that won’t be good.
I think the big thing here is research. In the context of research, Ben proposes that it’s important to build up and improve your model. And for you to share with the community what beliefs your model outputs.
This seems correct to me. But I’m pretty sure that it isn’t true in other contexts.
For example, I wanted to buy a new thermometer recently. Infrared ones are convenient, so I wanted to know if they’re comparably accurate to oral ones. I googled it and Cleveland Clinic says they are. Boom. Good enough for me. In this context, I don’t think it was worth spending the effort updating my model of thermometer accuracy. In this context, I just need the output.
I think it’d be interesting to hear people’s thoughts on when it is and isn’t important to improve your models. In what contexts?
I think it’d also be interesting to hear more about why exactly it is harmful in the context of intellectual progress to stray away from building and improving your models. There’s probably a lot to say. I think I remember the book Superforecasting talking about this, but I forget.
Hm. On the one hand, I agree that there are distinct things at play here and share the instinct that it’d be appropriate to have different words for these different things. But on the other hand, I’m not sure if the different words should fall under the umbrella of solitude, like “romantic solitude” and “seeing human faces solitude”.
I dunno, maybe they should. After all, it seems that in different conceptualizations of solitude, it’s about being isolated from something (others’ minds, others’ physical presence).
Ultimately, I’m trusting Newport here. I think highly of him and know that he’s read a lot of relevant literature. At the same time, I still wouldn’t argue too confidently that his preferred definition is the most useful one.
That makes sense. I didn’t mean to imply that such an extreme degree of isolation is a net positive. I don’t think it is.
Would anyone be interested in having a conversation with me about morality? Either publicly[1] or privately.
I have some thoughts about morality but I don’t feel like they’re too refined. I’m interested in being challenged and working through these thoughts with someone who’s relatively knowledgeable. I could instead spend a bunch of time eg. digging through the Stanford Encyclopedia of Philosophy to refine my thoughts, but a) I’m not motivated enough to do that and b) I think it’d be easier and more fun to have a conversation with someone about it.
To start, I think you need to be clear about what it is you’re actually asking when you talk about morality. It’s important to have clear and specific questions. It’s important to avoid wrong questions. When we ask if something is moral, are we asking whether it is desirable? To you? To the average person? To the average educated person? To one’s Coherent Extrapolated Volition (CEV)? To some sort of average CEV? Are we asking whether it is behavior that we want to punish in order to achieve desirable outcomes for a group? Reward?
It seems to me that a lot of philosophizing about morality and moral frameworks is about fit. Like, we have intuitions about what is and isn’t moral in different scenarios, and we try to come up with general rules and frameworks that do a good job of “fitting” these intuitions.
A lot of times our intuitions end up being contradictory. When this happens, you could spend time examining the contradiction and arriving at some sort of new perspective that no longer has it. But maybe it’s ok to have these contradictions. And/or maybe it’s too much work to actually get rid of them all.
I feel like there’s something to be said for more “enlightened” feelings about morality. Like if you think that A is desirable but that preference is based on incorrect belief X, and if you believed ~X you’d instead prefer B, something seems “good” about moving from A to B.
I’m having trouble putting my finger on what I mean by the above bullet point though. Ultimately I don’t see a way to cross the is-ought gap. Maybe what I mean is that I personally prefer for my moral preferences to be based on things that are true, but I can’t argue that I ought to have such a preference.
As discussed in this dialogue, it seems to me that non-naive versions of moral philosophies end up being pretty similar to one another in practice. A naive deontologist might tell you not to lie to save a child from a murderer, but a non-naive deontologist would probably weigh the “don’t lie” rule against other rules and come to the conclusion that you should lie to save the child. I think in practice, things usually add up to normality.
I kinda feel like everything is consequentialism. Consider a virtue ethicist who says that what they ultimately care about is acting in a virtuous way. Well, isn’t that a consequence? Aren’t they saying that the consequence they care about is them/others acting virtuously, as opposed to eg. a utilitarian caring about consequences involving utility?
The feature’s been de-emphasized, but you can initiate a dialogue from another user’s profile page.