New (unedited) post: The bootstrapping attitude
New (unedited) post: Exercise and nap, then mope, if I still want to
Prediction contests are an obvious one.
Also, perhaps, having people compete at newly designed games, so that everyone has the same amount of time to learn the rules and how to win, given the rules.
Perhaps we could design puzzles that intentionally include places where one is likely to make a wrong choice, and where such errors are visible (to an observer who knows the puzzle) when made.
When I design a toaster oven, I don’t design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils. It would be a waste of effort. Who designed the ecosystem, with its predators and prey, viruses and bacteria? Even the cactus plant, which you might think well-designed to provide water and fruit to desert animals, is covered with inconvenient spines.
Well, to be fair, if I want to design an image classifier, I might very well make one part that tries hard to categorize photos and another part that tries hard to miscategorize them.
If the other group or community is, as you say, much worse than it could be, helping to improve it from the inside makes things better for the people already involved, while going and starting your own group might leave them in the lurch.
Sure. When everyone (or at least a majority) in the initial group is on board with your reform efforts, you should often try to reform the group. But very often there will be a conflict of visions or a conflict of interests.
In general I think you should probably at least initially try to reform things, though if it doesn’t work well there’s a point where you might have to say “sorry, the time has come, we’re making our own group now”.
I certainly agree with this, though it seems plausible that we have different views of the point at which you should switch to the “found a splinter group” strategy.
...if you think both an urgent concern and a distant concern are possible, almost all of your effort goes into the urgent concern instead of the distant concern (as sensible critical-path project management would suggest).
This isn’t obvious to me. And I would be interested in a post laying out the argument, in general or in relation to AI.
In point of fact, doing important things often requires coordination, teamwork, and agreeing to compromises. If you insist on everything being exactly your way, you’ll have a harder time finding collaborators, and in many cases that will be fatal to a project—I do not say all, but many.
This is true and important, and the same or a very similar point to the one made in Your Price for Joining.
But that post has a different standard than the one given by the OP:
If the issue isn’t worth your personally fixing by however much effort it takes, and it doesn’t arise from outright bad faith, it’s not worth refusing to contribute your efforts to a cause you deem worthwhile. [emphasis mine]
Sometimes things are bad (or much worse than they could be) in some group or community. When that’s the case, one can 1) try to change the community from the inside, 2) get a group of one’s friends together to do [thing] the way they think it should be done, or 3) give up and accept the current situation.
When you’re willing to put in the work to make option 2 happen, it sometimes results in a new, healthier group. If (some) onlookers can distinguish between better and worse on the relevant axis, it will attract new members.
It seems to me that taking option 2, instead of option 1, is cooperative. You leave the other group doing it their way, in peace, and also create something good in the world in addition.
Granted, I think the situation may be importantly different in online communities, specifically because the activation energy for setting up a new online group is comparatively small. In that case, it is too easy to found a new group, and accordingly communities splinter too regularly for any single group to be good.
Anyone have a citation for Drexler’s motivations?
This was great. Thank you!
new (boring) post on controlled actions.
This is relevant to my interests. Do you have a particular source that describes their “pitch”?
The everyday world is roughly inexploitable, and very data-rich. The regions where you would expect rationality to do well are the ones where there isn’t a pile of data so large that even a scientist can’t ignore it: the Fermi paradox, AGI design, interpretations of quantum mechanics, philosophical zombies, etc.
I think I would add to this, “domains where there is lots of confusing/conflicting data, where you have to filter the signal from the noise”. I’m thinking of fields where there are many competing academic positions, like macroeconomics, or nutrition, or (of highest practical relevance) medicine.
Many of Scott Alexander’s posts, for instance, involve wading into a confusing morass of academic papers and then using principles of good reasoning to figure out, as best he/we can, what’s actually going on.
This is a very important point, and I think it is worthy of being its own, titled, top-level post.
Val started (didn’t finish) a sequence once, but it looks like he removed the sequence-index from his blog:
In any case, I (who am not Val), would endorse that description.
Oh. I thought that the use of min() here was immediately readable and transparent to me. The meaning of “the lesser of the two quantities” is less obvious, and the phrase is longer to say.
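As a generic sketch (hypothetical variable names, not the formula from the post under discussion), this is the sense in which min() states the idea directly:

```python
# min() reads as "the lesser of the two quantities" directly in code.
# These names are made up purely for illustration.
attack_power = 7
shield_remaining = 5

damage = min(attack_power, shield_remaining)
print(damage)  # prints 5, the lesser of the two
```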
I remember seeing a thread on Less Wrong that started with someone hearing that Feynman had an IQ of 115, and being surprised, and then asking what’s up with that.
I can’t find the thread now, but I remember mostly people saying that that number was false, and offering various explanations for why one might think that was Feynman’s IQ, including that the test in question was from his teenage years, and IQ often stabilizes later in life.
In any case, Feynman was named a Putnam Fellow (top-five scorer) in 1939, which gives some context on his general mathematical ability (aside from being a ground-breaking, Nobel Prize-winning theoretical physicist).
Finally, I’m curious what people make of the last paper psychs listed (“Testing Sleep Consolidation in Skill Learning: A Field Study Using an Online Game”). They didn’t find any evidence for a sleep consolidation effect over and above non-sleep breaks.
This is a very surprising result, and I’m not sure what to make of it.
They give some possible reasons for that result in the discussion section, but none of them reduce my surprise much.
The link for the Robertson paper is broken for me. Can you post the full title?
...which found interference between a motor skill task and a verbal task is minimized...
A verbal task and a motor task can interfere with each other? I thought interference occurred only between similar tasks.