I’m not sure about the “sleeping dragons”, though, since I can’t think of many cases where small groups created technologies that counterfactually wouldn’t have happened (or even would have happened in safer ways).
For technology this is possible; here we get into arguments about replaceability and inventions that are “after their time” (that is, could feasibly have been built much earlier, but no one thought of them). Most such examples that I’m aware of involve particular disasters, where no one had really cared to solve problem X until problem X manifested in a way that hurt some inventor.
For policy / direct action, I think this is clearer; plausibly WWI wouldn’t have happened (or would have taken a different form) if the Black Hand hadn’t existed. There must have been many declarations of adversarial intent that turned out quite poorly for the speaker, since they put the speaker on some enemy’s radar before they were ready.
Rather, I think the important thing is that meta-level discussions of how social sanctions work shouldn’t be generated by backward-chaining from an ambiguous case.
I think I disagree with this. When a social sanction is born of a particular case, I think it is quite important to actually have that case as a part of the discussion. First, this means the social alliances are in the open instead of hidden; second, this means that discussions over what principles actually bear on the situation become on-topic as well.
I think it’s also quite difficult for people to think about tradeoffs in the abstract; “should annoying people be allowed at meetups?” is different from “should we let Bob keep coming to meetups?”, and generally the latter is a more productive question.
The other option is making social sanctions preemptively, but there it’s not clear what violations might be possible or probable, and so not making rules until they’ve been violated seems sensible. (Of course, many rules have been violated before in human experience, such that in forming a new group you might import existing rules.)
Small feedback: this post is a mix of fundamentals and good introductions to key concepts, but also seems to assume a very high level of knowledge of the norms, recent terminology, and developments in the rationality community.
I’d also be interested in a list, or maybe some commentary on how well the links work as references. One of the issues I’m struggling with here is that even if someone has written up a good introduction to, say, internal double crux, there really are several inferential gaps between someone who hasn’t done it and someone who has, and probably it’s better understood after the reader has tried it a time or two. When IDC is fundamental to the point, there’s nothing to be done; they need to read up on that prereq first. When IDC is helpful to understanding the point, there’s a splitting effect; the person who already knows IDC gets their understanding strengthened but the person who doesn’t know IDC gets distracted.
I do try to be careful about having links to things, in part because it helps me notice when there isn’t an online description of the thing (which happened with “metacognitive blind spot,” which is referenced in a draft that’s going live later today).
This is only true if the tails are long to the right and not to the left, which seems true to Ben. Most projects tend to end up not pulling any useful levers whatever and just do nothing, but a few pull crucial levers and solve open problems or increase capacity for coordination.
For what it’s worth, I disagree with this; I think we have lots of examples of small groups of concerned passionate people changing the world for the worse (generally through unforeseen consequences, but sometimes through consequences that were foreseeable at the time) and lots of sleeping dragons that should not be awoken.
[Existential risk is sort of an example of a ‘heavy tail to the left,’ but this requires a bit of abuse of notation.]
Then, of course, there are the massive declines in poverty across Asia in particular. It’s difficult to assign credit for this, since it’s so tied up with globalisation, but to the extent that any small group was responsible, it was Asian governments and the policies of Deng Xiaoping, Lee Kuan Yew, Rajiv Gandhi, etc.
Tying in with the last point, I don’t think it’s the case that those specific people made good policy so much as unmade bad policy, and communism seems to me like an example of a left tail policy.
Have you played three-card Magic?
I had not! It looks like a much better distilled version of the thing I was pointing towards.
I mean in the work-environment sense, rather than the celebrity-and-endorsement-deals sense.
This has been my experience at tech companies; those perks are there for a reason. There has been relatively little in the way of ‘sleep coaching’ or similar things, but I definitely had access to a nutritionist, and the free lunches were somewhat optimized along this dimension, and so on.
Also this reminded me of Tradition Is Smarter Than You Are, linked by Kaj_Sotala, where unjustified rules passed down from generation to generation only recently became understandable as necessary to prevent long-term damage, or where divination is understood as the implementation of game-theoretically correct mixed strategies.
Does he actually begin by saying
That is the first non-title text on page 1, yes.
it seems clear that he’s arguing for focusing on the former rather than the latter.
I think this is a neglectedness concern, rather than Peterson actually thinking the former is more important than the latter. Both are part of being complete, but Peterson sees more people missing the former.
I should note that I definitely agree that many narratives are accurate and consistent with a world-as-place-of-things interpretation, and that some pressures towards accuracy are not new. But there are other pressures that are new—the development of materialist religions, for example, seems to mostly have resulted from materialist worldviews dominating supernaturalist worldviews, and I think Peterson is pointing to those new pressures in that section.
Which indicates that world-as-place-of-things is not a recent development, as Peterson seems to think
I can see a handful of different ways to interpret his statement, and don’t know which one Peterson is trying to point at.
One way I conceptualize this is that a lizard is able to perceive the world around it and navigate its environment, but likely doesn’t have a sense of what it would be like for there to be an environment without a lizard at the center of it. But for a physicist, imagining a world without a physicist at the center of it is the basic act of physics. In this view, whether the Inuit map-making counts as belief in ‘objective reality’ hinges on whether they viewed the maps as meaningful in the absence of Inuit to relate to the maps or traverse the territory.
His writings on alchemy seem somewhat relevant here; a compressed summary is that he viewed the alchemists as empiricists / engaged in the heroic project, but they had this incorrect belief that internal orientations were relevant to the outcomes of rituals. A quote:
Virtually every process undertaken by pre-experimental individuals—from agriculture to metallurgy—was accompanied by rituals designed to “bring about the state of mind” or “illustrate the procedure” necessary to the successful outcome desired. This is because the action precedes the idea. So ritual sexual unions accompanied sowing of the earth, and sacrificial rituals and their like abounded among miners, smiths, and potters. Nature had to be “shown what to do”; man led, not least, by example. The correct procedure could only be brought about by those who had placed themselves in the correct state of mind.
The process of discovering that this was false—that nature did not have to be shown what to do, and ‘just happened’ or followed deterministic dynamical laws—transmuted alchemy into chemistry. I think this is what he means by world-as-place-of-things and it likely is a recent development, whereas world-as-thing-that-can-be-perceived (and thus accurately mapped) is obviously an old development, possibly old enough that lizards have it.
When you live in such a harsh climate, you can’t get away with communicating only about what people should do, you also have to communicate about what’s true, or you die.
One thing that the post doesn’t highlight is the interaction between the “world as forum for action” and “world as place of things” views, where the implication is that typically the latter view informs the former view. (If medicine is more effective at healing the sick than prayer, then it seems adaptive for someone sick to generate more ‘should’-juice for medicine than prayer.) One view on the heroic myth is that it’s about someone taking on the ‘most important false belief’ of their culture, and changing it to a true belief (in a way that also allows for ‘absent’ beliefs to count as false).
I should make it clear that the “rather” in that sentence doesn’t mean it’s anti-optimizing for truth, just that truth is important to cultural transmission to the degree that it serves the core purpose of cultural transmission. It seems to me like cultures closer to the ‘survive’ end of the ‘survive—thrive’ spectrum should have ‘the importance of doing things right / believing true things / doing things by the book’ as important parts of their narratives, because that is an important part of properly integrating in society. Cultures closer to the ‘thrive’ end of the spectrum instead likely have their narratives focus more on the importance of self-expression and exploration, because that’s an important part of properly integrating into their society.
One contemporary example that comes to mind is the different mindsets promoted by different video games: games like XCOM or Dark Souls build cultures in which “don’t make mistakes” and “pay attention to the environment” and “git gud” are fundamental pieces of advice that are reinforced by the world, whereas games that are more exploratory or forgiving don’t promote the same sort of mindset or culture.
Thanks to Kurt Brown for some discussion of the draft, which led to improvements to the conclusion.
I wrote ~90% of this post on September 19th, and then today returned to finish it, with some revisions and additions. The primary reason it happened today as opposed to ‘eventually’ was because I was walking home, thinking about my plans for the week, wondering when I would write the series of posts I wanted to write instead of playing video games or working on a side project or handling other errands, and realized that I should “do it for All Might,” a fictional character; from the outside, this might sound insane. From the inside, it was extremely compelling (notice that the post is published). I notice Jacobian’s post also discusses how he spent hours writing his post instead of playing video games, because of a deliberate decision to turn towards meaning.
Why mention this? One of the traps that leads to low-motivation states is one in which narratives that motivate action are relentlessly optimized for presentability or justification. External opinions often provide useful information—other people thinking that you shouldn’t murder is actually a pretty good reason not to murder—but overreliance on them and underreliance on one’s own tastes leads to an evaporation of the self.
The seventy-five card deck size seems at first like it would pull us in the opposite direction. Each card is less likely to come up each game. That helps, but it also forces players to use more of the generically good cards to round their deck out, and makes it much harder to use a few quirky cards as the basis for a deck, since you can’t find them as reliably.
The bigger drawback here was making the climb to build a collection feel super steep. Not only do I need four copies of this legendary card, each of which takes weeks’ worth of games to afford, but each copy is only one of seventy-five cards. That’s noticeably different from sixty, and miles away from Hearthstone, where you have one copy of each legendary card, two of each other card, and the deck is thirty cards. Hearthstone’s decks are not as small as they look, since they don’t include lands, but it does make it feel like every card decision you make counts for a lot. In Eternal I did not feel that way.
Yeah, I’ve started thinking about the number of ‘active cards’ in a deck and how that plays into choice. One of the things that I really like about Commander is that the one-card-per-deck rule means you have about 60-70 active cards in the deck, and this impedes the “accumulate the pieces of my combo and then end the game” style of playing Magic in favor of the positional style of playing Magic.
The last card game I played seriously was the new Legend of the Five Rings LCG, which has two 40-card decks (to oversimplify, one has only creatures and the other only instants) and a max of 3 copies per card. That meant ~26 active cards across the two decks, which of course was reduced by the “every deck should have X” cards, of which there were about 5.
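To make the back-of-the-envelope arithmetic explicit, here’s a minimal sketch (`active_cards` is just an illustrative helper I’m making up here; the Commander nonland count is a guess based on the 60-70 figure above):

```python
def active_cards(deck_size, max_copies, staples=0):
    """Rough count of distinct deckbuilding choices: total slots divided by
    the copy limit, minus any auto-include staples that aren't real choices."""
    return deck_size // max_copies - staples

# L5R LCG: two 40-card decks, 3 copies max, ~5 staples -> ~21 real choices out of ~26 slots
print(active_cards(80, 3, staples=5))
# Commander: singleton rule over roughly 65 nonland cards -> ~65 choices
print(active_cards(65, 1))
```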
I still don’t have a strong sense (from a game design angle) of what the right number should be. Deck design seems interesting only to the extent that a deck is a proposal about the strength of various joint mechanics (“Cards A, B, and C go together well”), but this suggests that the purest form of deckbuilding might actually involve quite small decks.
[This suggests a Magic format where you have some ‘base decks’ on offer, maybe shuffled, maybe not, and your ‘actual deck’ is your starting hand, that you get to choose entirely. If the base decks only contain the equivalent of forests and Grizzly Bears, then the question is something like “can you fit a game-ender into 7 cards, with enough disruption and counter-disruption that yours goes off first?”]
Well the obvious thing to do would be to add more heuristics to your paperclip maker.
I agree this is obvious. But do you have any reason to believe it will work? One of the core arguments here is that trying to constrain optimization processes is trying to constrain an intelligent opponent, because the optimization is performing search through a space of solutions much like an intelligent opponent is. This sort of ‘patch-as-you-go’ solution is highly inadequate, because the adversary always gets the first move and because the underlying problem making the search process an adversary hasn’t been fixed, so it will just seek out the next hole in the specification. See Security Mindset and Ordinary Paranoia.
Once you have all these pieces available to parties with sufficient budgets, it would be like having a way to order highly enriched plutonium from Granger. Then it would be possible to build a closed-loop, self-improving system.
What is the word ‘then’ doing in this paragraph? I’m reading you as saying “yes, highly advanced artificial intelligence would be a major problem, but we aren’t there now or soon.” But then there are two responses:
1) How long will it take to do the alignment research? As mentioned in the dialogue, it seems like it may be the longest part of the process, such that waiting to start would be a mistake that delays the whole process and introduces significant risk. As a subquestion, is the alignment research something that happens by default as part of constructing capabilities? It seems to me like it’s quite easy to be able to build rockets without knowing how orbital mechanics work. Historically, orbital mechanics were earlier in the tech tree*, but I don’t think they were a prerequisite for rocket-building.
2) When will we know that it’s time? See There’s No Fire Alarm for Artificial General Intelligence.
*Here I mean ‘rockets that could escape Earth’s gravity well,’ since other rockets were made much earlier.
Mod notice: There’s a discussion going on in the Bay Area rationality community involving multiple users of LW that includes allegations of serious misconduct. We don’t think LW is a good venue to discuss the issue or conduct investigations, but we think it’s important for the safety and health of the LW community that we host links to a summary of findings once the discussion has concluded. If you’d like to discuss this policy, please send a private message to me and I’ll talk it over with the mod team. [Comments on this comment are disabled.]
Is ‘rationally’ in the title doing something that ‘skillfully’ wouldn’t?
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction.
Where would you point to as a previous example of success in this regard? I don’t think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is that Eliezer’s writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it’s not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and because the commenters were interested in a particular mode of communication over point-seeking status games). When the same commenters were around but the good post-writers disappeared or slowed down, the site slowly withered, as the good commenters stopped checking because there weren’t any good posts.
There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group you would attract by having a reputation as a forum where only good ideas survive. I don’t recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of “someone attracted to LW because of the prestige of us agreeing with them,” I think of many people whose one-track focuses were not impressive, when perhaps someone I respect originally came to LW for those reasons and then had other interests as well.
With regards to the “solid logic” comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community’s impressions, the only people who have said the equivalent of “ah, criticism will make the site better, even if it’s annoying” are people who are the obvious suspects when post writers say the equivalent of “yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point.”
I do want to be clear that ‘high-standards’ and ‘annoying’ are different dimensions here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good, and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest that pedantically interpreting the word “impossible” makes conversations smoother than doing interpretative labor to repair small errors in a transparent way. Given the way I use the word “smooth,” things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to a particular bit of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan against two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
I agree the analogy is not perfect, but I do think it’s better than you’re suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to “writing about rationality rather than doing other things with rationality.” Like, many of the most rational people I know don’t ever post on LW because that doesn’t connect to their goals; similarly, many of the most mathematically talented people I know didn’t go to math grad school, because they ran the numbers on doing it and they didn’t add up.
But to restate the core point, I was trying to get at the question of “who do you think is worthy of not being sarcastic towards?”, because if the answer is something like “yeah, using sarcasm on the core LW userbase seems proper,” this seems highly related to the question of “is this person making LW better or worse?”.
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.