It seems like sarahconstantin and Adams are talking about two completely different things. Adams is talking about writing internal reports or memos for efficient transfer of information. sarahconstantin is talking about writing public-facing marketing materials. The incentives and aims of the two types of writing are completely different.
Similarly, “business email” is not one thing. Emails to a client or prospective client, to a coworker, or for that matter to a boss or a subordinate all have different requirements and look totally different.
I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II’s horrific repression was similarly tragically necessary.
I think the questions provoked by Herbert’s concepts of Mentats and Bene Gesserit might actually be fruitful to think about.
If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math as a wizard. One can extend this line of thinking to many things—reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.
So it is probably safe to say that we “know” that a human can’t be trained to do the things that Mentats do in Dune, but I don’t think it’s safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.
I guess I’m not really disagreeing with anything you said, but rather advocating that we take Herbert’s ideas seriously but not literally.
Thanks for the interview. This is great.
This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.
I’ve been thinking a lot recently about rationalist advocacy and community. I don’t think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.
One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate—a “rationalist credential”, some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, and that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions the community adopts—e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don’t really know yet), but their cost in weirdness points can’t be ignored.
As an aside—and I’m open to being corrected on this—I don’t think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don’t really make prescriptions. In fact, I think Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a Mentat or a Bene Gesserit, i.e., a member of a tribe focused on realizing human potential, but I don’t think he was saying with his books that the multi-millennial, ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don’t think that Herbert thinks feudalism is a good thing just because it’s the system he presents. Maybe I’m wrong.
I’ve used TMI as a meditation guide off and on for some time. One thing you might consider tracking is “generalized motivation”, or “energy level”, or something like that. You might have to measure this subjectively, by rating how motivated you feel, or you could keep track of your ability to objectively get things done. I find that too much* meditation results in an undesirable degree of affective flattening and a reduction in motivation and energy level. For these reasons, I actually don’t meditate currently.
*“Too much” may vary, but I think 20 minutes per day is a low enough level to avoid the negative side effects. Of course, at 20 minutes a day, you’re also not going to achieve the desirable outcomes.
I really like the idea of some kind of public, generalized Hippocratic Oath for online behavior. Needs an actual name, though, and needs somebody more visible than me to plant a Schelling stake in it.
It’s also good to just ask whether your audience can hear you. Thanks to the bystander effect, audiences generally will not tell you when you’re completely unintelligible.
Pop filters are great if you can get them. They keep plosives from blowing up the mic.
I would consider it corrigible for the AI to tell Petrov about the problem. Not “I can’t answer you” but “the texts I have on hand are inconclusive and unhelpful with respect to helping you solve your problem.” This is, itself, informative.
If you’re an expert in radar, and I ask you if you think something is a glitch or not, and you say you “can’t answer”, that doesn’t tell me anything. I have no idea why you can’t answer. If you tell me “it’s inconclusive”, that’s informative. The information is that you can’t really distinguish between a glitch and a real signal in this case. If I’m conservatively minded, then I’ll increase my confidence that it’s a glitch.
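To put made-up numbers on that: suppose my prior is P(glitch) = 0.9. An “inconclusive” verdict from an expert who genuinely can’t distinguish the two cases has a likelihood ratio near 1, so P(glitch | inconclusive) stays near 0.9, and I can fall back on my prior. A bare “I can’t answer” doesn’t even tell me the likelihood ratio is near 1; it could be hiding anything.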
I have a guess as to how this would actually evolve.
While the median Christian is not particularly Christian, there probably are a good number of pretty excellent Christians, whose motivation for being thus is their commitment to the ideals that they profess. So it’s possible—even likely—that Christianity actually makes the world a little bit more “in the image of Christ” on the margin.
If you have a billion Christians, the number of “actually pretty good” Christians is likely to be pretty high.
Right now we probably have barely thousands of Rationalists who would identify as such. An organized attempt at increasing that number, with a formal aspiration to be better rationalists, would increase the number of “actually pretty good” rationalists, although the median rationalist might just be somebody who read 4% of the Sequences and went to two meetups. But that would still be a win.
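To attach purely illustrative numbers to this: if even 0.1% of adherents turn out to be “actually pretty good”, a billion Christians yields a million of them, while ten thousand rationalists yields ten. The rate is invented, but the point about scale holds for any plausible rate.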
My point was merely that you can found a club around an aspiration rather than an accomplishment. It’s better to have the accomplishment, of course, but not necessary.
I think there is something like a Platonic “ultimate textbook of human rationality” that may be written in the future, but we don’t actually know its contents. That’s why the visitor can’t give us the book. We have a dual problem: not only the challenge of spreading the ideas, but actually pinning down what the ideas are in the first place.
Actually, I think “pinning down” has entirely the wrong connotations, because human rationality seems more like a living, breathing process than a list of maxims chiseled in stone, and it is, to a degree, culturally dependent.
I will say that I don’t think you need to answer #0 concretely before you set out. We can guess at the contents of the Platonic rationality textbook, and then iterate as we converge upon it.
I wonder about this. Is the average Christian more “Christian” than the average non-Christian? (Do they do good works for strangers, love and forgive their enemies, and live lives of poverty and service, at rates significantly above the population average?) If not, does that really affect their ability to grow? Has it really affected their ability to grow, historically?
I think MIRI also employed a hybrid strategy. I will say, it seems much easier to deploy a “go big or go home” approach after you’ve already created a minimum viable organization, rather than attempting to poach thinkfluencers without even having that groundwork in place.
Regarding both follow-up questions, I have two answers:
Answer 1: I don’t intend for this to be a dodge, but I don’t think it really matters what I think. I don’t think it’s practical to construct “the perfect organization” in our imagination and then anticipate that its perfection will be realized.
I think what a rationality organization looks like in practice is a small group of essentially like-minded people creating a Schelling point by forming the initial structure, and then the organization evolves from there in ways that are not necessarily predictable, ways that reflect the will of the people who actually have the energy to put into the thing.
What’s interesting is that when I say it that way, I realize that it sounds like a recipe for disaster. But also note that essentially no other organization on Earth has been formed in any other way.
Answer 2: I personally would create separate organizational branches for epistemic and instrumental focus, such that each could use the resources of the other, but neither would be constrained by the rules of the other. Each branch could adopt whatever policies are most suited to itself. Think of the two houses of a congress. Either branch could propose policies to govern the whole organization, which could be accepted or vetoed by the other branch. There’s probably also a role for something like an elected executive branch, but at this point I am grasping well beyond my domain of expertise.
I felt like the OP was already quite long enough, and I don’t have time now to write the full follow-up post that this question deserves, but in brief, the thrust would be that any rationalist organization deserving of the name would carefully choose its norms, structure, and bylaws to reflect those of the most successful existing organizations (empiricism!), with care taken to exclude the aspects of those organizations that are inimical to group or individual rationality. Thus, even if stoning apostates has proven to be an empirically useful organizational strategy from the perspective of growth, it’s probably not something we want to emulate.
I’m not sure we can actually offer an unfakeable signal that we are on the “true path”. I’m not sure we even need or want to do that. To justify the existence of the “Don’t Shoot Yourself in the Foot Club”, you just need to demonstrate that not shooting yourself in the foot is better than the alternative, and I think we can do at least that, metaphorically.
Also, I actually suspect that any formal structure at all would probably be, on net, more of a good thing than a bad thing, in terms of growing the movement.
pseudointellectual (noun; plural pseudointellectuals):
1. A person who claims proficiency in scholarly or artistic activities while lacking in-depth knowledge or critical understanding.
2. A person who pretends to be of greater intelligence than he or she in fact is.
I don’t think S.A. claims any proficiency or scholarly credentials that he doesn’t have. He doesn’t review books claiming to be some expert in reviewing books, and doesn’t write essays claiming to be setting down eternal truths. Rather, he is openly exploratory and epistemically careful.
I certainly don’t think he pretends to be smarter than he is. But of course, the use of this word in the original claim is probably an empty slur, meant to convey sentiment rather than content. I certainly hope the “pseudointellectual” part of the claim isn’t important to the argument, since I think even Alexander’s detractors would admit it is inaccurate.
Thus, one question in short form: “Given that a pseudointellectual is defined as one who claims proficiency while lacking in-depth knowledge and/or a person who pretends to greater intelligence than he possesses, do you actually believe Scott Alexander qualifies as a pseudointellectual? If so, could you elaborate on where specifically he has exaggerated his own proficiency, knowledge, or intelligence? If not, what did you actually mean by pseudointellectual?”
It’s one thing to accuse somebody of being systematically wrong, another thing to accuse them of being systematically deceptive. I don’t think my focus on this word choice can be trivially dismissed.
Also, if one of the roughly nine words in the quoted thesis was chosen carelessly, it seems likely that the underlying thought process is similarly flimsy.
For some reason I’m having trouble finding a non-confrontational phrasing for this, but: Can I ask why you’re asking, first? Collating everything I’ve tried over the last two decades would take a large amount of work, I would probably miss many things, and besides, almost everything I tried was completely pointless. For example, I could go into detail about past chiropractic treatments, but why bother, since I only attempted that out of desperation, and in the end, it had no effect? This post was my attempt at outlining the few things that did seem to matter, prior to the new drug. (Actually, the standard botox treatment for migraines also helped, in a relative sense, but that wasn’t something that I would expect to generalize to most people.)
It’s cool to see this; I’m glad you got something out of my speculations.
I’m still pretty baffled about chronic pain. After ten or fifteen years of increasingly bad chronic migraine and neck pain, and having tried every treatment in the book, I recently started on the newly FDA-approved drug, which has very nearly cured the entire issue. The drug itself is a peptide which blocks a particular neurotransmitter receptor.
While I’m mostly happy about this beyond expression, I’m also retroactively frustrated by the fact that this “cure” is simply not something that one could ever approximate without the drug, and it doesn’t really tell me anything about what is wrong with me that makes me prone to these issues in the first place.
And if we are willing to ascribe moral weight to fruit flies, then we must also ascribe some corresponding non-zero moral weight to early-term human fetuses.