I’m Georgia. I crosspost some of my writings from eukaryotewritesblog.com.
eukaryote (Georgia Ray)
There’s no such thing as a tree (phylogenetically)
Spaghetti Towers
Naked mole-rats: A case study in biological weirdness
The funnel of human experience
Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”
Caring less
A point of clarification on infohazard terminology
Tiddlywiki for organizing notes and research
Global insect declines: Why aren’t we all dead yet?
How to make a giant whiteboard for $14 (plus nails)
Missing dog reasoning
A brief authorial take—I think this post has aged well, although as with Caring Less (https://www.lesswrong.com/posts/dPLSxceMtnQN2mCxL/caring-less), this was an abstract piece and I didn’t make any particular claims here.
I’m so glad that A) this was popular and B) I wasn’t making up a new word for a concept that most people already know by a different name, which I think would send you to at least the first layer of Discourse Hell on its own.
I’ve met at least one person in the community who said they knew and thought about this post a lot, well before they’d met me, which was cool.
I think this website doesn’t recognize the value of bad hand-drawn graphics for communicating abstract concepts (except for Garrabrant and assorted other AI safety people, whose posts are too technical for me to read but who I support wholly). I’m guessing that the graphics helped this piece, or at least got more people to look at it.
I do wish I’d included more examples of spaghetti towers, but I knew that before posting it, and this was an instance of “getting something out is better than making it perfect.”
I’ve planned on doing followups in the same sort of abstract style as this piece, like methods I’ve run into for getting around spaghetti towers. (Modularization, swailing, documentation.) I hopefully will do that some day. If anyone wants to help brainstorm examples, hit me up and I may or may not get back to you.
I have taken the survey; please shower me in karma.
I don’t love this thread—your first comment reads like you’re correcting me on something or saying I got something important philosophically wrong, and then you just expand on part of what I wrote with fancier language. The actual “correction”, if there is one, is down the thread and about a single word used in a minor part of the article, which, by your own findings, I am using in a common way and you are using in an idiosyncratic way. …It seems like a shoehorn for your pet philosophical stance. (I suppose I do at least appreciate you confining the inevitable “What are Women Really” tie-ins to your own thread, because boy howdy, do I not want that here.)
To be clear, the expansion was in fact good; it’s the unsupported framing as a correction that I take issue with. This wouldn’t normally bother me enough to remark on, but it’s by far the top-rated comment, and you know everyone loves a first-comment correction, so I thought I should put it out there.
You are super right, and that is exactly what happened: I checked the numbers and had made the estimate three times too large. Thanks for the sanity checks and the catch. It turns out this moves the midpoint up to 1432. Lemme fix the other numbers as well.
Update: Actually, it did nothing to the midpoint, which makes sense in retrospect (maybe?) but does change the “fraction of time” thing, as well as some of the Fermi estimates in the middle.
15% of experience has actually been experienced by living people, and 28% since Kane Tanaka’s birth. I’ve updated this here and on my blog.
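For anyone who wants to poke at the mechanics themselves, here’s a rough sketch of the person-years bookkeeping behind this kind of midpoint estimate. The population figures below are crude placeholders made up for illustration, not the dataset the post actually uses, and approximating “experience” as population × time is itself a simplification, so don’t expect it to reproduce 1432 or the 15%/28% figures; it just shows how summing person-years and locating the halfway year works.

```python
# Sketch of a "midpoint of human experience" calculation.
# The population figures are placeholder guesses for illustration only --
# swap in real historical estimates to get serious numbers.

# (year, rough world population) -- placeholder values, not a real dataset
POP_ESTIMATES = [
    (-50000, 1e6), (-10000, 4e6), (-1000, 50e6), (1, 200e6),
    (1000, 300e6), (1500, 450e6), (1800, 1e9), (1900, 1.6e9),
    (1950, 2.5e9), (2000, 6.1e9), (2020, 7.8e9),
]

def person_years_by_interval(estimates):
    """Approximate person-years lived in each interval as average population x length."""
    return [
        (y0, y1, (p0 + p1) / 2 * (y1 - y0))
        for (y0, p0), (y1, p1) in zip(estimates, estimates[1:])
    ]

def midpoint_year(estimates):
    """Year by which half of all person-years had been lived (linear within an interval)."""
    intervals = person_years_by_interval(estimates)
    half = sum(py for _, _, py in intervals) / 2
    running = 0.0
    for y0, y1, py in intervals:
        if running + py >= half:
            return y0 + (half - running) / py * (y1 - y0)
        running += py
    return estimates[-1][0]

def share_since(estimates, cutoff):
    """Fraction of all person-years lived after a cutoff year (e.g. a birth year).
    Note: the share experienced by *currently living* people would also need
    cohort survival data, which this sketch doesn't attempt."""
    intervals = person_years_by_interval(estimates)
    total = sum(py for _, _, py in intervals)
    after = sum(
        py * (y1 - max(y0, cutoff)) / (y1 - y0)
        for y0, y1, py in intervals
        if y1 > cutoff
    )
    return after / total

print(f"Midpoint year: ~{midpoint_year(POP_ESTIMATES):.0f}")
print(f"Share of experience since 1903: {share_since(POP_ESTIMATES, 1903):.0%}")
```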
"If many info-hazards have already been openly published, the world may be considered saturated with info-hazards, as a malevolent agent already has access to so much dangerous information. In our world, where genomes of the pandemic flus have been openly published, it is difficult to make the situation worse."
I strongly disagree that we’re in a world of easily accessible catastrophic information right now.
This is based on a lot of background knowledge, but as a good start, Sonia Ben Ouagrham-Gormley makes a strong case that bioweapons groups historically have had a very difficult time creating usable weapons even when they already had a viable pathogen. Having a flu genome online doesn’t solve any of the other problems of weapons creation. While biotechnology has certainly progressed since the major historic programs, and more info and procedures of various kinds are online, I still don’t see the case for lots of highly destructive technology being easily available.
If you do not believe that we’re in that future where plenty of calamitous information is easily available online, but believe we could conceivably get there, then the proposed strategy of openly discussing GCR-related infohazards is extremely dangerous, because it pushes us there even faster.
If the reader thinks we’re probably already there, I’d ask how confident they are. Getting it wrong carries a very high cost, and it’s not clear to me that having lots of infohazards publicly available is the correct response, even given moderately high certainty that we’re in the “lots of GCR instruction manuals online” world. (For starters, publication has a circuitous path to positive impact at best. You have to get the ideas to the right eyes.)
Other thoughts:
The steps for checking a possibly-dangerous idea before you put it online, including running it by multiple wise, knowledgeable people, trying to see if it’s been discovered already, and doing analysis in a way that won’t get enormous publicity, seem like good heuristics for potentially risky ideas. Although if you think you’ve found something profoundly dangerous, you probably don’t even want to type it into Google.
Re: dangerous-but-simple ideas being easy to find: It seems that for some reason or other, bioterrorism and bioweapons programs are very rare these days. This suggests to me that there could be a major risk in the form of inadvertently convincing non-bio malicious actors to switch to bio—by perhaps suggesting a new idea that fulfils their goals or is within their means. We as humans are in a bad place to competently judge whether ideas that are obvious to us are also obvious to everybody else. So while inferential distance is a real and important thing, I’d suggest against being blindly incautious with “obvious” ideas.
(Anyways, this isn’t to say such things shouldn’t be researched or addressed, but there’s a vast difference between “turn off your computer and never speak of this again” and “post widely in public forums; scream from the rooftops”, and many useful actions between the two.)
(Please note that all of this is my own opinion and doesn’t reflect that of my employer or sponsors.)
Here’s something I believe: You should be trying really hard to write your LessWrong posts in such a way that normal people can read them.
By normal, I mean “people who are not immersed in LessWrong culture or jargon.” This is most people. I get that you have to use jargon sometimes. (Technical AI safety people: I do not understand your math, but keep up the good fight.) Or if your post is referring to another post, or is part of a series, then it doesn’t have to stand alone. (But maybe the series should stand alone?)
Obviously if you only want your post to be accessible to LWers, ignore this. But do you really want that?
If your post provides value to many people on LW, it will probably provide value to people off LW. And making it accessible suddenly means it can be linked and referred to in many other contexts.
Your post might be the first time someone new to the site sees particular terms.
Even if the jargon is decipherable or the piece doesn’t rely on the jargon, it still looks weird, and people don’t like reading things where they don’t know the words. It signals “this is not for me” and can make them feel dumb for not getting it.
(Listen, I was once in a conversation with a real live human being who dropped references to obscure classical literature every third sentence or so. This is the most irritating thing in the universe. Do not be that person.)
On a selfish level:
It enables the post to spread beyond the LW memeosphere, potentially bringing you honor and glory.
Translating useful ideas into and out of the context they originally appear in helps you think and communicate better.
If you’re not going to do this, you can at least link jargon to somewhere that explains it.
Thank you for coming to my TED talk.
Quick authorial review: This post has brought me the greatest joy from other sources referring to it, including Marginal Revolution (https://marginalrevolution.com/marginalrevolution/2018/10/funnel-human-experience.html) and the New York Times bestseller “The Uninhabitable Earth”. I was kind of hoping to supply a fact about the world that people could use in many different lights, and they have (see those, and also e.g. https://unherd.com/2018/10/why-are-woke-liberals-such-enemies-of-the-past/).
An unintentional takeaway from this attention is solidifying my belief that if you’re describing a new specific concept, you should make up a name for it too. For most purposes, this is for reasons like the ones described by Malcolm Ocean here (https://malcolmocean.com/2016/02/sparkly-pink-purple-ball-thing/). But also, sometimes, a New York Times bestseller will cite you, and you’ll only find out as you set up Google Alerts.
(And then once you make a unique name, set up Google Alerts for it. The book just cites “eukaryote” rather than my name, and this post rather than the one on my blog. Which I guess goes to show you that you can put anything in a book.)
Anyways, I’m actually a little embarrassed because my data on human populations isn’t super accurate: it starts at the year 50,000 BCE, even though there were humans well before that. But those populations were small, probably not enough to significantly influence the result. I’m not a historian, and really don’t want to invest the effort needed for more accurate numbers, although if someone would like to, please go ahead.
But it also shows that people are interested in quantification. I’ve written a lot of posts that are me trying to find a set of numbers, making lots and lots of assumptions along the way. But then you have some plausible numbers. It turns out that you can just do this; you don’t need a qualification in Counting Animals or whatever, just supply your reasoning and attach the appropriate caveats. There are no experts, but you can become the first one.
As an aside, in the intervening years, I’ve become more interested in the everyday life of the past: all of the earlier chunks that made up so much of the funnel. I read an early-1800s housekeeping book, “The Frugal Housewife”, which advises mothers to teach their children how to knit starting at age 4, and to keep all members of the family knitting in their downtime. And it’s horrifying, but maybe that’s what you have to do to keep your family warm in the northeast US winter. No downtime that isn’t productive. I’ve taken up knitting lately and enjoy it, but at the same time, I love that it’s a hobby and not a requirement. A lot of human experience must have been at the razor’s edge of survival, Darwin’s hounds nipping at our heels. I prefer 2020.
If you want a slight taste of everyday life at the midpoint of human experience, you might be interested in the Society for Creative Anachronism. It features swordfighting and court pageantry, but also just a lot of everyday crafts: sewing, knitting, brewing, cooking. If you want to learn about medieval soapmaking or forging, they will help you find out.
I have a proposal.
Nobody affiliated with LessWrong is allowed to use the word “signalling” for the next six months.
If you want to write something about signalling, you have to use the word “communication” instead. You can then use other words to clarify what you mean, as long as none of them are “signalling”.
I think this will lead to more clarity and a better site culture. Thanks for coming to my talk.
This is a really touching tribute. I’m so sorry.