LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon (Raymond Arnold)
Freedom is overall best (it syncs across your devices and can block apps on desktop), but Self Control had a different mechanism that was harder to circumvent
I think I became pretty significantly harder working. Here’s my actual history:
Early/mid-twenties: Working at an advertising firm, developing coding tools. On average I think I worked 2-4 hours most days, doing a lot of Facebook/etc at my day job. I did sometimes work much more/harder when there was either a particularly interesting project or a client with a tight deadline.
I was frequently trying various hacks like Beeminder, accountability buddies, or Chrome extensions that “sort of” blocked distracting websites but were easy to circumvent.
Late twenties: Worked at a startup. I had some kind of “unlock” that bumped me up to more like 6-7 hour days, that came from a combination of:
the work itself being more meaningful along many axes
discovering the apps Self Control and Freedom.to (which block distracting websites).
I think both the previous pieces were necessary. Without the meaningful work, I think I would have been sufficiently motivated to route around Self Control (i.e. finding new distractions even if the other ones remained blocked, or just disabling it). Without Self Control, I think some bad habits would have made it hard to get invested in the meaningful work.
30ish. Worked at Spotify in the IT department. This work was a lot less intrinsically meaningful, but I think I had established a better set of habits for myself and I was able to find meaning in “leveling up at coding” even if I didn’t care much about the product.
I think I probably worked like 4-6 hour days here (while also doing random other stuff during the day that I cared about more than Facebook but which wasn’t related to my real job)
(Throughout all of this I’d periodically work more intensely on short projects I cared about, for like 1-2 months at a time)
Early 30s. I moved to Berkeley and joined LessWrong. I think I was probably doing 6-7 hours of “real work” during a day, although it gets tricky because there’s a lot of discussion/philosophizing involved which wasn’t “focused work” but was an actual part of the job.
Mid 30s. LessWrong re-orgs into Lightcone. We do many types of work that are less cognitively demanding but more physically demanding. It involves a lot of 12-hour days for weeks on end. It’s very draining/burnouty for me, although I think it would have felt a lot less so if the days were more like 10 hours and I felt like I had more control over them.
I “quit” the campus team, shift back to LessWrong work. I think I mostly work “6-7 real work hours” each day, but a couple times a year have months where I’m working more like 10-12 hour days 6 days a week (in situations where I’m particularly “in-flow”, or I care a lot about the outcome)
Most recently: when I did my Thinking Physics sprint recently, I was really only able to do like 4 hours of “thinking work” a day, and I felt completely wrecked in the evening. In some sense this is “the hardest I’ve ever done ‘thinking’ work”, where I was constantly on the edge of my ability. I heard from a coworker that this felt similar to when they were doing “fulltime Research.”
Curated, both for the OP (which nicely lays out some open problems and provides some good links towards existing discussion) as well as the resulting discussion which has had a number of longtime contributors to LessWrong-descended decision theory weighing in.
FYI I think this is worth fleshing out into a top level post (esp. given that it’s ‘Pause Debate’ week).
I’m not actually sure it needs much fleshing out. I think the main bit here that feels unjustified, or insufficiently-justified for the strength of the claim, is:
That’s changed a bit lately, in part because a bunch of people seem to think that making technical progress on alignment is hopeless. I think this is just not an epistemically reasonable position to take: history is full of cases where people dramatically underestimated the growth of scientific knowledge, and its ability to solve big problems.
(It’s deliberate that there is one thumbs up in the top row and 2 in the bottom row of the list-view, because it seemed actually important to give people immediate access to the thumbs-up. Thumbs down felt vaguely important to give people overall but not important to put front-and-center)
No, if you look you’ll notice that the top row of the palette view is the same as the top row of the list view, and the second row of the palette view is the same as the bottom row of the list view. The specific lines of code were re-used.
The actual historical process was: Jim constructed the List View first, then I spent a bunch of time experimenting with different combinations of list and palette views, then afterwards made a couple incremental changes for the List view that accidentally messed up the palette view. (I did spend, like, hours, trying to get the palette view to work, visually, which included inventing new emojis. It was hard because each line was trying to have a consistent theme, as well as the whole thing fitting into a grid)
But yeah it does look like the “thanks” emoji got dropped by accident from the palette view and it does just totally solve the display problem to have it replace the thumbs-up.
I think it’s unlikely, but:
Why does it seem unlikely? (Note: I haven’t read the post or comments in full yet, if you think this is already covered somewhere I’ll go read that first)
This is mostly because it’s actually pretty annoying to get exactly even numbers of icons in each row. I agree it looks pretty silly but it’s a somewhat annoying design challenge to get it looking better.
One thing to note is that the LessWrong vote-weighting system is (in some ways) intended to be a poor man’s eigenkarma (i.e. it does a somewhat similar thing of weighting karma by trust)
There are a few different ways that “canonical Eigenkarma” differs from LW upvote/strong-upvote power. What are the things you’re particularly interested in here?
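To make the comparison concrete, here’s a minimal sketch of the two approaches. Everything here is hypothetical: the real LessWrong vote-weight thresholds and any canonical eigenkarma formulation differ from these invented numbers, and `vote_weight`/`eigenkarma` are illustrative names, not the site’s actual code.

```python
def vote_weight(voter_karma: int) -> int:
    """'Poor man's eigenkarma': a voter's influence grows stepwise with
    their own karma. The 1000-karma threshold here is invented."""
    if voter_karma >= 1000:
        return 2
    return 1


def eigenkarma(endorsements, iterations=50):
    """Canonical eigenkarma: trust is the fixed point of 'trusted people
    confer trust', i.e. the principal eigenvector of the endorsement
    matrix, computed here by plain power iteration."""
    n = len(endorsements)
    scores = [1.0] * n
    for _ in range(iterations):
        # each user's new score is the trust flowing in from endorsers
        new = [sum(endorsements[j][i] * scores[j] for j in range(n))
               for i in range(n)]
        norm = sum(new) or 1.0
        scores = [s / norm for s in new]
    return scores
```

The key structural difference this illustrates: the first function looks only at the individual voter’s karma, while the second makes everyone’s trust score depend recursively on everyone else’s.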
FYI we’ve since updated the system to only trigger if there are enough unique downvoters on ‘net-negative comments’, which I think should reduce the false positive rate.
(Ie I think the reason it triggered in your case was that you also have some random downvotes on other upvoted comments)
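A rough sketch of the updated trigger logic, with the caveat that the actual thresholds and comment representation are not public and are invented here (`min_unique_downvoters=3` is a placeholder):

```python
def should_rate_limit(comments, min_unique_downvoters=3):
    """Trigger only when some net-negative comment was downvoted by at
    least `min_unique_downvoters` distinct users, so a few stray downvotes
    scattered across otherwise-upvoted comments don't count."""
    for c in comments:
        unique = set(c["downvoters"])
        if c["karma"] < 0 and len(unique) >= min_unique_downvoters:
            return True
    return False
```

Requiring both conditions (net-negative karma *and* several distinct downvoters) is what filters out the “random downvotes on otherwise upvoted comments” case described above.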
I agree with that statement as worded, but you still seem to be presupposing a view of ‘mediation is good-by-default in this sort of situation’ that I at least don’t think you’ve argued for.
Some thoughts:
First:
Overall, I’ve updated that it’s too confusing to have any rate limits that restrict commenters’ ability to comment on their own posts. (There are 8 rules that create commenting rate limits, and previously only 2 of them rate limited you on your own posts). I personally think the 2 rules were reasonable, but I think they’ve mostly resulted in people forming inaccurate beliefs about how the rate limit system works. (i.e. they don’t realize it’s a rare exception for rate limiting to affect your own posts)
So, I’ve just shipped a code-update that makes it so there are now 0 ways to get rate limited on your own posts. (Well, one exception for the universal “you can’t comment more than ‘every 8 seconds’” rule, but that’s a pretty different rule)
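The shape of that change can be sketched like this; the rule names and structure below are invented for illustration, not the actual codebase:

```python
SECONDS_BETWEEN_COMMENTS = 8  # the one universal throttle that still applies


def rules_applying(rules, is_own_post: bool):
    """After the update, ordinary rate-limit rules never fire on your own
    posts; only the universal every-8-seconds throttle (handled
    separately) remains. Previously 2 of the 8 rules applied there."""
    if is_own_post:
        return []
    return list(rules)
```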
Two:
I do basically think it’s correct for you to have been rate limited, for the reasons John/Shminux and others have described. You argued poorly about a topic that we’ve covered a lot, in a fairly inflammatory way. This is precisely what the rate limits are there to nudge people away from.
I do think that people get more downvoted on some kinds of tribally-loaded topics, and I wish that were different. I’ve tried to tune the rate limits so that they do correctly rate-limit people when they make bad arguments and get downvoted, but don’t excessively rate limit people when they get “excessively tribally downvoted”. I’m not sure if I’ve exactly succeeded at this, but I’ve been tracking the rate limits that got applied over the past month and I think I mostly endorsed how things shook out.
Three:
More positively: fwiw, I have felt like your other recent posts show a fairly reasonable thought process, given your current epistemic state / skills. (This is based on a cursory glance rather than reading them in detail; I don’t know that I think all your mental motions make sense, but, like, it looks like you’re trying to think through and argue about various object-level issues in a way that seems healthy)
Note that the more you believe that your commenters can tell whether some arguments are productive or not, and worth having more or less of on the margin
My actual belief is that commenters can (mostly) totally tell which arguments are productive… but… it’s hard to not end up having those unproductive arguments anyway, and the site gets worse.
Heh, I think this is the opposite of our respective roles in our previous conversation about trauma. (Where you were like “I tend to think of trauma as things that happened in the past that led to stuck memories that are strongly immune to updating.” and I was like “that seems different enough from standard usage you should probably find a new word?”)
So, I’m pretty open to the “regret is different enough for most people than how I’m describing it that I should have a new word.” But, I also personally have thought of regret as fairly straightforwardly matching the way I described it. I don’t feel like I did much rationality-reimagining to end up with my description above. (i.e. most people might not say ‘the point of regret is to learn things for the future’, but I do think it sort of straightforwardly describes how people are using it)
This… seems like it’s not engaging with what regret is for?
Like, there is definitely a sense in which everything is deterministic. But, like, the point of regret is to learn and do different things in the future.
I’m a bit surprised at your viewpoint here given other things I knew about you, though, and not sure if I’m missing something.
Can you say more details about what you mean here? I found the phrasing here a bit hard to parse.
I just wanted to add some context (that I thought of as “obvious background context”, but probably not everyone is tracking), that Eliezer wrote more about the “rule” here in the 8th post of the Inadequate Equilibria sequence:
I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences.
I was careful to say all this first, because there’s a strict order to debate. If you’re going to argue against an idea, it’s bad form to start off by arguing that the idea was generated by a flawed thought process, before you’ve explained why you think the idea itself is wrong. Even if we’re refuting geocentrism, we should first say how we know that the Sun does not orbit the Earth, and only then pontificate about what cognitive biases might have afflicted geocentrists. As a rule, an idea should initially be discussed as though it had descended from the heavens on a USB stick spontaneously generated by an evaporating black hole, before any word is said psychoanalyzing the people who believe it. Otherwise I’d be guilty of poisoning the well, also known as Bulverism.
But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status and self-doubt.
Curated. I liked both the concrete array of ideas coming from someone who has a fair amount of context, and the sort of background models I got from reading each of said ideas.
Aside: can we taboo “NDA” in this discussion? It seems pretty fucked that it means both non-disparagement-agreement and non-disclosure-agreement and it’s annoying to track which one people are referring to.
Fwiw I take this as moderate but not overwhelming evidence. (I think I agree with the rest of your comment, just flagging this seemed slightly overstated)