Team Lead for LessWrong
Ruby
Jobs, Relationships, and Other Cults
I’d be pretty interested in the non-cartoonish version, also from people who are more competent and savvy.
For balanced feedback, I enjoyed the choice of diction, and particularly those two words.
Trivia: on racetracks, a “chicane” is an artificial “unnecessary” kink or twist inserted to make the track more complicated (and more challenging/fun).
My understanding: commitment is saying you won’t swerve in a game of chicken. Pre-commitment is throwing your steering wheel out the window so that there’s no way you could swerve even if you changed your mind.
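The distinction can be sketched in a toy version of Chicken (the payoff numbers are my own illustration, not from the comment): pre-commitment works by deleting “swerve” from your action set, which changes the other driver’s best response.

```python
# Toy Chicken payoffs, (my_payoff, opponent_payoff) for (my_action, opponent_action).
# Numbers are illustrative: mutual swerve is neutral, crashing is much worse than chickening out.
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),
}

def best_response(opponent_action, my_actions):
    # My payoff-maximizing action given the opponent's fixed action and my available actions.
    return max(my_actions, key=lambda a: payoffs[(a, opponent_action)][0])

# A mere verbal commitment leaves both actions available, so if I expect you to go
# straight, my best response is still to swerve:
print(best_response("straight", ["swerve", "straight"]))  # swerve

# Pre-commitment (steering wheel out the window) removes "swerve" from my action set,
# so I go straight no matter what, and *you* are the one left with the swerve decision:
print(best_response("straight", ["straight"]))  # straight
```

The point of the sketch: the pre-committed player wins not by choosing better, but by making one choice impossible.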
Sparsity seems like maybe a relevant keyword.
I feel like marring the reputation of a person in response to wrongdoing serves a very important basic purpose: warning other people about interacting with the wrongdoer, i.e., Sarah Smith is dishonest, so don’t trust things she says to be true. This is valuable even in worlds where everyone is already a fixed truth-teller/liar and everybody has fixed values.
I like the content/concept here but feel “curse of doom” doesn’t communicate the idea very well. This does seem like effectively a curse of dimensionality, though (perhaps that’s what inspired the name). Not sure “Pareto Best and the Curse of Dimensionality” is the right name, but I think it gets at the idea better than a generic “doom”.
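The curse-of-dimensionality connection can be made concrete with a small simulation (my framing, not the post’s): as you add skill dimensions, a growing fraction of random “people” sit on the Pareto frontier, i.e., nobody beats them on every axis at once.

```python
import random

def pareto_front(points):
    # A point is Pareto-optimal if no other point is >= on every axis
    # and strictly > on at least one.
    def dominates(q, p):
        return all(qi >= pi for qi, pi in zip(q, p)) and any(qi > pi for qi, pi in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

random.seed(0)
front_sizes = {}
for dims in (1, 2, 5, 10):
    # 200 random "people", each with `dims` independent skill scores in [0, 1).
    people = [tuple(random.random() for _ in range(dims)) for _ in range(200)]
    front_sizes[dims] = len(pareto_front(people))
    print(dims, front_sizes[dims])  # frontier size grows with dimension
```

With one skill there is a single best person; with ten independent skills, almost everyone is Pareto-best at *some* combination, which is the hopeful reading of the idea.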
Curated. This post feels to me like a survey of the mental skills and properties people do/don’t have for effectiveness, of which I don’t recall any other examples right now, and so is quite interesting. It’s interesting both for letting someone ask themselves whether they’re weak on any of these, and helpful for modeling others and answering questions of the sort “why don’t people just X?”. For all that we spend a tonne of time interacting with people, people’s internal mental lives are private, and so, much like shower habits (I’m told), vary a lot more than externally observable behaviors.
I would like to see the “scope sensitivity” piece fleshed out more. I can see how it applies to eliminating annoyances that take 10 minutes every day and add up, but I don’t think that’s at the heart of rationality. I’d be curious how much mileage someone gets just from reflecting on their own mind, and how much of that can be done without invoking numeracy.
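For what the “adds up” claim amounts to numerically, the arithmetic is quick:

```python
# A 10-minute daily annoyance, compounded over a year.
minutes_per_day = 10
hours_per_year = minutes_per_day * 365 / 60
print(round(hours_per_year, 1))  # 60.8 — about a week and a half of full-time work
```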
The “context window” analogy for human minds
Throughput vs. Latency
Taking responsibility and partial derivatives
The proper response to mistakes that have harmed others?
It does, quite a bit! It definitely speeds me up somewhere between 20% and 100% depending on the task. And I think it’s a bigger deal for those now working on code who are newer to it.
This is basically what we do, capped by our team capacity. For most of the last ~2 years, we had ~4 people working full-time on LessWrong, plus shared stuff we get from the EA Forum team. In the last few months, we reallocated people from elsewhere in the org and are at ~6 people, though several are newer to working on code. So a pretty small startup. Dialogues has been the big focus of late (plus behind-the-scenes performance optimizations and code infrastructure).
All that to say, we could do more with more money and people. If you know skilled developers willing to live in the Berkeley area, please let us know!
My intuition (not rigorous) is that there are multiple levels in the consequentialist/deontological/consequentialist dealio.
I believe that unconditional friendship is approximately something one can enter into, but one enters into it for contingent reasons (perhaps in a Newcomb-like way – I’ll unconditionally be your friend because I’m betting that you’ll unconditionally be my friend). Your ability to credibly enter such relationships (at least in my conception of them) depends on you not becoming more “conditional” because you doubt the other person is holding up their end. This, I think, is related to not being a “fair-weather” friend: I continue to be your friend even when it’s not fun (you’re sick, need taking care of, whatever), even if I wouldn’t have become your friend to do that. And vice versa. Kind of a mutual insurance policy.
The same could go for contracts, agreements, and other collaborations. In a Newcomb-like way, I commit to being honest, cooperative, etc. to a very high degree even in the face of doubts about you. (Maybe you stop by the time someone is threatening your family; not sure what Ben et al. think about that.) But the fact that I entered into this commitment was based on the probabilities I assigned to your behavior at the start.
I see interesting points on both sides here. Something about how these comments are expressed makes me feel uncomfortable, like this isn’t the right tone for exploring disagreements about correct moral/cooperative behavior, or at least it makes it a lot harder for me to participate. I think it’s something like: it feels like performing moral outrage/indignation in a way that’s more persuadey than explainy, more in the direction of social pressure and norms-enforcery. The phrase “shame on you” is a particularly clear thing I’ll point at that makes me perceive this.
I was going to write stuff about integrity, and there’s stuff to that, but the thing that is striking me most right now is that the whole effort seemed very incompetent and naive. And that’s upsetting.
I am now feeling uncertain about the incompetence and naivety of it. Whether this was the best move possible that failed to work out, the best move possible that actually did get a good outcome, or a total blunder is determined by info I don’t have.
I have some feeling that they were playing against a higher-level political player, which both makes it hard and means they needed to account for that. Their own level might be 80+th percentile in the reference class of executive/board-type people, but still lower than Sam’s.
The piece that does seem most like they really made a mistake was trying to appoint an interim CEO (Mira) who didn’t want the role. It seems like before doing that, you should be confident the person wants it.
I’ve seen it raised that the board might find the outcome to be positive (the board stays independent even if current members leave?). If that’s true, it does change the evaluation of their competence. It feels hard for me to confidently judge that, though my gut sense is Sam got more of what he wanted/common knowledge of his sway than the others did.
Styling of the headers in this post is off and makes it harder to read. Maybe the result of a bad copy/paste?
These recent events have me thinking the opposite: policy and cooperation approaches to making AI go well are doomed – while many people are starting to take AI risk seriously, not enough are, and those who are worried will fail to restrain those who aren’t (where not being worried is a consequence of humans often being quite insane when incentives are at play). The hope lies in somehow developing enough useful AI theory that leading labs adopt it and thereby build an aligned AI, even though they never believed they were going to cause AGI ruin.
And so maybe let’s just get everyone to focus on the technical stuff. That’s actually more doable than wrangling other people into not building unsafe stuff.
Curated. Beyond the object-level arguments here about how to do plots, which are pretty interesting, I like this post for the periodic reminder/extra evidence that relatively “minor” details in how information is presented can nudge/bias interpretation and understanding.
I think the claims around border lines would become strongly true if there were an established convention, and hold more weakly the way things currently are. Obviously one ought to be conscious, in reading and creating graphs, of whether 0 is included.