Actually, I like Decoupling vs. Contextualising more too, especially as they become single words.
I’m really happy that you are writing a book on this topic. I mean, the Sequences and the other discussion on Less Wrong have given us a lot of tools with which to form our own opinions, but then we need to figure out how to balance these against the opinions of experts with more domain-specific knowledge. There’s a sense in which all the other knowledge isn’t of any use unless we know when to actually use it.
“Now, on the modest view, this was the unfairest test imaginable. Out of all the times that I’ve ever suggested that a government’s policy is suboptimal, the rare time a government tries my preferred alternative will select the most mainstream, highest-conventional-prestige policies I happen to advocate, and those are the very policy proposals that modesty is least likely to disapprove of.”
This is a pretty big deal, so I wanted to emphasise it. Let’s suppose you come up with 50 policies you think the government should implement. 10 get implemented and 8 work out well. Pretty good, right? But what if 30 of your policies would have been utterly stupid, and this is obvious to any of the experts? This selection effect could completely destroy your attempts at calibration.
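As a toy illustration (all the numbers are made up, just extending the example above), here is a quick sketch of how this selection effect can make a poorly calibrated advocate look well calibrated:

```python
# Hypothetical numbers: 50 policy proposals, of which 30 would be
# obviously bad. Each entry is that policy's probability of working out.
proposals = [0.8] * 20 + [0.1] * 30

# Average quality across *all* proposals, i.e. what you'd observe if
# governments sampled your suggestions uniformly at random.
overall = sum(proposals) / len(proposals)

# Selection effect: governments only ever try the 10 most mainstream,
# highest-prestige (here: best) proposals.
implemented = sorted(proposals, reverse=True)[:10]
observed = sum(implemented) / len(implemented)

print(f"true average quality:  {overall:.2f}")   # prints 0.38
print(f"observed success rate: {observed:.2f}")  # prints 0.80
```

The observed 80% hit rate says almost nothing about the quality of the full list of 50, because the sample of implemented policies is heavily biased.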
Slack: Slack is one of the concepts that seems to have gained the most traction. Unfortunately, I don’t think we have a clear definition yet. Zvi defined Slack as the absence of binding constraints on behaviour. I suggested that this would make it merely a synonym for freedom and that his article seems to describe, “freedom provided by having spare resources”, but Zvi wasn’t happy with it and I don’t know if he’s settled on a precise definition yet.
I suspect that this idea is so popular because it gives people a word to describe the disadvantages of running a system near maximum capacity. Zvi explains how slack reduces stress, allows you to pursue opportunities that arise, allows you to avoid bad trade-offs and enables long-term thinking. In another key article, Raemon explains how a lack of slack can lead to fragile design decisions and to poor decisions that go unnoticed.
I’ve attended a CFAR workshop. I agree with you that Double Crux has all of these theoretical flaws, but it actually seems to work reasonably well in practice, even if these flaws make it kind of confusing. In practice you just kind of stumble through. I strongly agree that if the technique were rewritten so that it didn’t have these flaws, it would be much easier to learn, as the stumbling isn’t the most confidence-inspiring part (this is when the in-person assistance becomes important).

One of the key elements that I haven’t seen mentioned here is the separation between trying to persuade the other person and trying to find out where your points of view differ. When you are trying to convince the other person, it is much easier to miss, for example, a difference in a core assumption. Double Crux lets you understand the broad structure of their beliefs so that you can at least figure out the right kinds of things to say later to persuade them that won’t be immediately dismissed.
I feel like the inside view and the all-things-considered view need their own post. The idea of maintaining two separate probabilities for different situations is pretty important, but many people won’t get this insight as this post is rather long. (Admittedly it’s slightly more complex in practice, because the inside view might depend on judgements about other things, which in turn have their own inside views and all-things-considered views.)
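To make the two-probabilities idea concrete, here is a minimal sketch (the numbers and the simple linear blend are my own illustration, not anything from the post; real aggregation of views is more subtle):

```python
# Keep the inside view and the all-things-considered view as two
# separate numbers, rather than overwriting one with the other.
inside_view = 0.9    # what my own model says
expert_view = 0.4    # what domain experts say
expert_weight = 0.7  # how much I defer (itself a judgement call)

# One crude way to form the all-things-considered view: a linear blend.
all_things_considered = (expert_weight * expert_view
                         + (1 - expert_weight) * inside_view)

print(f"{all_things_considered:.2f}")  # prints 0.55
```

The point of keeping both numbers is that the inside view is what you argue from and update, while the all-things-considered view is what you act on.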
I suspect there’s a practice effect here as well. Figuring out how to be assertive without being domineering or bossy is hard. People who have grown up being assertive will have had the opportunity to learn, but those who try to become assertive because they know it’s important for the workplace won’t have developed the judgement yet.
Instead of purely focusing on whether people will use these powers well, perhaps we should also talk about ways to nudge them towards responsibility?
What if authors had to give reasons for the first 10 or 20 comments that they deleted? This would avoid creating a long-term burden, but it would also nudge them towards thinking carefully about which comments they would or would not delete at the start.
Perhaps Reign of Terror moderation should require more karma than norm enforcing? This would encourage people to lean towards norm enforcing and to only switch to Reign of Terror if norm enforcing wasn’t working for them.
I already posted this as a reply to a comment further up, but perhaps authors should only be able to collapse comments at first and then later be given delete powers. Again, it would nudge them towards the former, rather than the latter.
For the second and third ideas to work, the delay system couldn’t be based purely on karma, as many authors already have enough karma to gain these powers instantly. There should ideally be some delay in gaining access to the higher-level features even in this case.
I clicked +1, but by way of feedback I would suggest trying to be more precise with your definitions. “The absence of binding constraints on behavior” sounds just like a synonym for freedom. If that were the concept that this article identified, it would be kind of pointless, but you’ve actually identified a new and useful concept.
This has a few advantages:
Firstly, it makes the article easier to understand. Some people learn better by example, others by explicit definition.
Secondly, it helps you make sure that what you have identified is indeed a single concept and not a few closely related ideas rolled into one.
Third, it allows you to clarify the concept in your own head and pick more central examples to illustrate it.
Fourth, it helps set social norms by encouraging other people to carefully define their terms.
I would make an alternative definition as follows:
Firstly, we start off by assuming some kind of resource (e.g. time, energy, money, social capital).
Now we can define Slack as keeping some of a resource spare so that you can spend it when opportunities come up (e.g. to do things ethically/properly/just for fun/for personal growth, etc.).
This is a more specific concept than freedom: it is the freedom provided by having spare resources.
There is also a typical paradigm of how people end up in situations without enough slack. Basically, they underestimate the future opportunity cost of either spending or committing to spend a particular resource now. For example, they spend a lot of money at a casino, without realising that there is a chance they may lose their job, in which case their excess money would suddenly become much more useful. Or they commit to too many projects because they want to be agreeable in the present, without realising how a lack of time for recuperation will wear them out when the work ends up more tiring than they expect. More broadly, it is deciding what you can spend or commit now, whilst ignoring the unknowns that might pop up in the future.
I don’t suppose you could clarify what the unresolved issues in decision theory are. What are the biggest issues that haven’t been solved for UDT or FDT? What is a co-ordination problem that hasn’t been solved? And what still isn’t known about counterfactuals?
I don’t suppose I could persuade you to write up a post with what you consider to be some of the most important insights from network theory? I’ve started to think that some of the models we tend to use within the rationality community are overly simplistic.
I’m a huge fan of the Archipelago model, but I’m unsure how well our current structure is suited to it. On Reddit, you only have to learn one set of moderation norms per sub. On Less Wrong, learning one set of moderation norms per author seems like a much higher burden.
In fact, the Archipelago model itself is one set of norms per group. If you don’t like the norms of a particular group, you just don’t go there. This is harder when there isn’t a clear separation and everyone’s posts are all mixed together.