I’m Georgia. I crosspost some of my writings from eukaryotewritesblog.com.
eukaryote (Georgia Ray)
A brief authorial take—I think this post has aged well, although as with Caring Less (https://www.lesswrong.com/posts/dPLSxceMtnQN2mCxL/caring-less), this was an abstract piece and I didn’t make any particular claims here.
I’m so glad that A) this was popular and B) I wasn’t making up a new word for a concept that most people already know by a different name, which I think would send you to at least the first layer of Discourse Hell on its own.
I’ve met at least one person in the community who said they knew and thought about this post a lot, well before they’d met me, which was cool.
I think this website underrates the value of bad hand-drawn graphics for communicating abstract concepts (except for Garrabrant and assorted other AI safety people, whose posts are too technical for me to read but whom I support wholly). I’m guessing that the graphics helped this piece, or at least got more people to look at it.
I do wish I’d included more examples of spaghetti towers, but I knew that before posting it, and this was an instance of “getting something out is better than making it perfect.”
I’ve planned to do follow-ups in the same sort of abstract style as this piece, like methods I’ve run into for getting around spaghetti towers (modularization, swailing, documentation). Hopefully I’ll do that some day. If anyone wants to help brainstorm examples, hit me up and I may or may not get back to you.
This is a really touching tribute. I’m so sorry.
I don’t love this thread—your first comment reads like you’re correcting me on something or saying I got something important philosophically wrong, and then you just expand on part of what I wrote with fancier language. The actual “correction”, if there is one, is down the thread and about a single word used in a minor part of the article, which, by your own findings, I am using in a common way and you are using in an idiosyncratic way. …It seems like a shoehorn for your pet philosophical stance. (I suppose I do at least appreciate you confining the inevitable “What are Women Really” tie-ins to your own thread, because boy howdy, do I not want that here.)
To be clear, the expansion was in fact good; it’s the unsupported framing as a correction that I take issue with. This wouldn’t normally bother me enough to remark on, but it’s by far the top-rated comment, and you know everyone loves a first-comment correction, so I thought I should put it out there.
One cautionary note is that once you invoke this idea, I feel like you’re indicating willingness to pay the person some amount to do the thing, if you can both agree on a reasonable (cheerful or just satisfactory) number.
Like if I’m kind of inclined to bake you a cake for free, and you ask for my cheerful price and I tell you—even if you don’t take up the offer at my cheerful price, I’m definitely not going to make the cake for free now. That would be bad business.
I have taken the survey, please shower me in karma.
You are super right and that is exactly what happened—I checked the numbers and had made one figure three times too large. Thanks for the sanity check and the catch. It turns out this moves the midpoint up to 1432. Lemme fix the other numbers as well.
Update: Actually, it did nothing to the midpoint, which makes sense in retrospect (maybe?) but does change the “fraction of time” thing, as well as some of the Fermi estimates in the middle.
15% of experience has actually been experienced by living people, and 28% since Kane Tanaka’s birth. I’ve updated this here and on my blog.
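For anyone who wants to redo this kind of sanity check themselves, here’s a minimal sketch of the midpoint calculation, using made-up round population figures (very much not my actual dataset, so don’t expect it to reproduce the post’s numbers):

```python
# Minimal sketch of a "midpoint of human experience" Fermi estimate.
# The population figures below are made-up round numbers for illustration
# only -- they are NOT the dataset behind the post.

# (year, rough world population in millions)
benchmarks = [
    (-50000, 1), (-10000, 4), (0, 300), (1000, 310), (1500, 500),
    (1800, 1000), (1900, 1650), (1950, 2500), (2000, 6100), (2018, 7600),
]

def midpoint_year(benchmarks):
    """Year by which half of all person-years in the table had been lived."""
    # Trapezoidal estimate of person-years lived within each interval.
    intervals = [
        (y0, y1, (y1 - y0) * (p0 + p1) / 2)
        for (y0, p0), (y1, p1) in zip(benchmarks, benchmarks[1:])
    ]
    total = sum(py for _, _, py in intervals)
    running = 0.0
    for y0, y1, py in intervals:
        if running + py >= total / 2:
            # Interpolate linearly inside the interval that crosses 50%.
            return y0 + (total / 2 - running) / py * (y1 - y0)
        running += py
    return float(benchmarks[-1][0])

print(round(midpoint_year(benchmarks)))
```

With these toy benchmarks and linear interpolation, the midpoint lands way earlier than 1432, which mostly goes to show how sensitive the answer is to the population model you feed in, and why sanity checks like the one above matter.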
“If many info-hazards have already been openly published, the world may be considered saturated with info-hazards, as a malevolent agent already has access to so much dangerous information. In our world, where genomes of the pandemic flus have been openly published, it is difficult to make the situation worse.”
I strongly disagree that we’re in a world of accessible easy catastrophic information right now.
This is based on a lot of background knowledge, but as a good start, Sonia Ben Ouagrham-Gormley makes a strong case that bioweapons groups have historically had great difficulty creating usable weapons even when they already had a viable pathogen. Having a flu genome online doesn’t solve any of the other problems of weapons creation. While biotechnology has certainly progressed since the major historic programs, and more info and procedures of various kinds are online, I still don’t see the case for lots of highly destructive technology being easily available.
If you do not believe that we’re at that future of plenty of calamitous information easily available online, but believe we could conceivably get there, then the proposed strategy of openly discussing GCR-related infohazards is extremely dangerous, because it pushes us there even faster.
If the reader thinks we’re probably already there, I’d ask how confident they are. Getting it wrong carries a very high cost, and it’s not clear to me that having lots of infohazards publicly available is the correct response, even for moderately high certainty that we’re in “lots of GCR instruction manuals online” world. (For starters, publication has a circuitous path to positive impact at best. You have to get them to the right eyes.)
Other thoughts:
The steps for checking a possibly-dangerous idea before you put it online (running it by multiple wise, knowledgeable people; trying to see if it’s been discovered already; doing the analysis in a way that won’t attract enormous publicity) seem like good heuristics for potentially risky ideas. Although if you think you’ve found something profoundly dangerous, you probably don’t even want to type it into Google.
Re: dangerous-but-simple ideas being easy to find: It seems that, for some reason or other, bioterrorism and bioweapons programs are very rare these days. This suggests to me that there could be a major risk in inadvertently convincing non-bio malicious actors to switch to bio—perhaps by suggesting a new idea that fulfils their goals or is within their means. We as humans are in a bad position to judge whether ideas that are obvious to us are also obvious to everybody else. So while inferential distance is a real and important thing, I’d suggest against being blindly incautious with “obvious” ideas.
(Anyways, this isn’t to say such things shouldn’t be researched or addressed, but there’s a vast difference between “turn off your computer and never speak of this again” and “post widely in public forums; scream from the rooftops”, and many useful actions between the two.)
(Please note that all of this is my own opinion and doesn’t reflect that of my employer or sponsors.)
Here’s something I believe: You should be trying really hard to write your LessWrong posts in such a way that normal people can read them.
By normal, I mean “people who are not immersed in LessWrong culture or jargon.” This is most people. I get that you have to use jargon sometimes. (Technical AI safety people: I do not understand your math, but keep up the good fight.) Or if your post is referring to another post, or is part of a series, then it doesn’t have to stand alone. (But maybe the series should stand alone?)
Obviously if you only want your post to be accessible to LWers, ignore this. But do you really want that?
If your post provides value to many people on LW, it will probably provide value to people off LW. And making it accessible suddenly means it can be linked and referred to in many other contexts.
Your post might be the first time someone new to the site sees particular terms.
Even if the jargon is decipherable or the piece doesn’t rely on the jargon, it still looks weird, and people don’t like reading things where they don’t know the words. It signals “this is not for me” and can make them feel dumb for not getting it.
(Listen, I was once in a conversation with a real live human being who dropped references to obscure classical literature every third sentence or so. This is the most irritating thing in the universe. Do not be that person.)
On a selfish level:
It enables the post to spread beyond the LW memeosphere, potentially bringing you honor and glory.
It helps you think and communicate better to translate useful ideas into and out of the original context they appear in.
If you’re not going to do this, you can at least link jargon to somewhere that explains it.
Thank you for coming to my TED talk.
Quick authorial review: This post has brought me the greatest joy from other sources referring to it, including Marginal Revolution (https://marginalrevolution.com/marginalrevolution/2018/10/funnel-human-experience.html) and the New York Times bestseller “The Uninhabitable Earth”. I was kind of hoping to supply a fact about the world that people could use in many different lights, and they have (see those, and also https://unherd.com/2018/10/why-are-woke-liberals-such-enemies-of-the-past/).
An unintentional takeaway from this attention is solidifying my belief that if you’re describing a new specific concept, you should make up a name too. For most purposes, this is for reasons like the ones described by Malcolm Ocean here (https://malcolmocean.com/2016/02/sparkly-pink-purple-ball-thing/). But also, sometimes, a New York Times bestseller will cite you, and you’ll only find out as you set up Google alerts.
(And then once you make a unique name, set up Google Alerts for it. The book just cites “eukaryote” rather than my name, and this post rather than the one on my blog. Which I guess goes to show you that you can put anything in a book.)
Anyways, I’m actually a little embarrassed because my data on human populations isn’t super accurate—it starts at the year 50,000 BCE, though there were humans well before that. But those populations were small, probably not enough to significantly influence the result. I’m not a historian, and really don’t want to invest the effort needed for more accurate numbers, although if someone would like to, please go ahead.
But it also shows that people are interested in quantification. I’ve written a lot of posts that are me trying to find a set of numbers, and making lots and lots of assumptions along the way. But then you have some plausible numbers. It turns out that you can just do this, and don’t need a qualification in Counting Animals or whatever, just supply your reasoning and attach the appropriate caveats. There are no experts, but you can become the first one.
As an aside, in the intervening years, I’ve become more interested in the everyday life of the past—of all of the earlier chunks that made up so much of the funnel. I read an early 1800s housekeeping book, “The Frugal Housewife”, which advises mothers to teach their children how to knit starting at age 4, and to keep all members of the family knitting in their downtime. And it’s horrifying, but maybe that’s what you have to do to keep your family warm in the northeast US winter. No downtime that isn’t productive. I’ve taken up knitting lately and enjoy it, but at the same time, I love that it’s a hobby and not a requirement. A lot of human experience must have been at the razor’s edge of survival, Darwin’s hounds nipping at our heels. I prefer 2020.
If you want a slight taste of everyday life at the midpoint of human experience, you might be interested in the Society for Creative Anachronism. It features swordfighting and court pageantry but also just a lot of everyday crafts—sewing, knitting, brewing, cooking. If you want to learn about medieval soapmaking or forging, they will help you find out.
I have a proposal.
Nobody affiliated with LessWrong is allowed to use the word “signalling” for the next six months.
If you want to write something about signalling, you have to use the word “communication” instead. You can then use other words to clarify what you mean, as long as none of them are “signalling”.
I think this will lead to more clarity and a better site culture. Thanks for coming to my talk.
Oh, I think you’re over-extrapolating what I meant by arbitrary—like I say toward the end of the essay, trees are definitely a meaningful category. Categories being “a little arbitrary” doesn’t mean they’re not valuable—is there a clear difference between a tree and a shrub? Maybe, but I don’t know what it is if so, and it seems like plausibly not. The fruit example is even clearer—is a grape a berry? Is a pumpkin a fruit? Who cares? Probably lots of people, depending on the context? Most common human categories work like this around the edges if you try and pin them down—hence, a little arbitrary. Seems fine.
I’m standing by “weird.” That’s definitely weird. I don’t think of nature as going in for platonic forms! What’s going on here?! Weird as hell.
Hi, I’m pleased to see that this has been nominated and has made a lasting impact.
Do I have any updates? I think it aged well. I’m not making any particular specific claims here, but I still endorse this and think it’s an important concept.
I’ve done very little further thinking on this. I was quietly hoping that others might pick up the mantle and write more on strategies for caring less, as well as cases where caring less should be advocated. I haven’t seen this, but I’d love to see more of it.
I’ve referred to it myself when talking about values that I think people are over-invested in (see https://eukaryotewritesblog.com/2018/05/27/biodiversity-for-heretics/), but not extensively.
Finally, while I’m generally pleased with this post’s reception, I think nobody appreciated my “why couldn’t we care less” joke enough.
Thanks! Honestly, I’m completely fine filling in whatever content people might expect when looking for “controversial biodiversity opinions on LessWrong” with controversial opinions on actual environmental biodiversity.
FWIW, I thought the ‘Doomsday phishing’ attack was absolutely brilliant. Hey! Sometimes people will deceive you about which things will end the world! May we all stay on our toes.
This is a good post, props for writing up a practical thing that people can refer to! This is potentially really useful information for people outside the community as well—lots of people struggle with SAD.
Two small changes I’d want to see before I show this to friends outside the community:
Take out the word “rationalist” in the first sentence. This sounds like a small nitpick, but I think it’s huge—it’s early and prominent enough that it would likely turn off a casual reader who wasn’t already aware of or fond of the community. (And the person being a rationalist isn’t relevant to the advice.) Replace it with “friend”, perhaps.
Add a picture, even just a crappy cell phone photo. How do you get the hooks to hang a cord from the ceiling?
End-of-2023 author retrospective:
Yeah, this post holds up. I’m proud of it. The Roman dodecahedron and the fox lady still sit proudly on my desk.
I got the oldest known example wrong, but this was addressed in the sequel post: Who invented knitting? The plot thickens. If you haven’t read the sequel, where I go looking for the origins of knitting, you will enjoy it. Yes, even if you’re here for the broad ideas about history rather than specifically knitting. (That investigation ate my life for a few months in there. Please read it. 🥺)

I’m extremely pleased by the reception I got from this. People say “oh, Less Wrong won’t be interested in a post about knitting”. These people were not writing good enough posts about knitting. They probably also said that about tree phylogeny.* If you think something is interesting, you can explain why it’s interesting to other people and maybe they’ll agree with you.
I would say the challenge of writing this was maybe in sort of trusting myself that these freewheeling high-concept connections between alphabetization and knitting and bacterial evolution were worth explicitly relating to each other. On one hand, I often sort of hate reading pieces based around the author holding up distantly-connected things and going “do you get it? do you get it??” …But on the other hand, sometimes they’re insightful, and man, sometimes there is a weird concept that’s really made clear by seeing a few disparate examples. So it was worth trying, and ultimately it is just a blog post and not a scientific paper, so “gesturing vaguely at an idea” is par for the course. Evidently other people thought the connection was something too. Nice!
*Fact check: Nobody has ever said either of these things to me.
That’s a good expanded takeaway of part of it! (Obviously “weird and a little arbitrary” is kind of nebulous, but IME it’s a handy heuristic you’ve neatly formalized in this case.) To be clear, it doesn’t sound like we disagree?
I don’t think there’s much crossover. I hope you know that there are lots and lots of incentives for active deception and responding to deception in various parts of the natural world and evolutionary psychology—if you’re interested in the workings of and responses to deception, definitely read more about it. Like, the argument you make for females being interested in “people over things” could also explain the reverse—males are incentivized to deceive females, which you can do better the better you model people, right? I think you are observing something real about relevant preferences, but if that’s the extent of your understanding, I’d learn more about evolution and alternate explanations e.g. cultural pressure towards taking on emotional labor.
Anyhow, this example is narrow and specific to a human problem. As you say, the concern about AGI is mainly about intelligence significantly past humans, that do not share a basic substrate or set of biological imperatives. Like, even a person who I think might be lying to me can be modeled as fundamentally human—having limited amounts of information, limited physical strength, needing to eat, fearing death, etc. Heck, if I’m looking for a partner and am concerned that the partner is going to try to deceive me to get sex or whatever from me, I’m already aware of the threat!
The current environment you’re asking about people’s experience in is also pretty damn different from the ancestral environment we evolved in—in terms of resource constraints, information availability, and I guess most other things—so I doubt that this example applies much.
I get that we all want understanding in a situation like this but let’s not go after people’s appearances, cripes. Most people look weird in one way or another and are gonna be fine to sit next to on a bus. Come on.