Philosophy PhD student. Interested in ethics, metaethics, AI, EA, disagreement/erisology. Former username Ikaxas
Vaughn Papenhausen
This is pretty similar in concept to the conlang toki pona, a language explicitly designed to be as simple as possible. It has fewer than 150 words. (“toki pona” means something like “good language” or “good speech” in toki pona.)
Quoting a recent conversation between Aryeh Englander and Eliezer Yudkowsky
Out of curiosity, is this conversation publicly posted anywhere? I didn’t see a link.
Putting RamblinDash’s point another way: when Eliezer says “unlimited retries”, he’s not talking about a Groundhog Day style reset. He’s just talking about the mundane thing where, when you’re trying to fix a car engine or something, you try one fix, and if it doesn’t start, you try another fix, and if it still doesn’t start, you try another fix, and so on. So the scenario Eliezer is imagining is this: we have 50 years. Year 1, we build an AI, and it kills 1 million people. We shut it off. Year 2, we fix the AI. We turn it back on, it kills another million people. We shut it off, fix it, turn it back on. Etc, until it stops killing people when we turn it on. Eliezer is saying, if we had 50 years to do that, we could align an AI. The problem is, in reality, the first time we turn it on, it doesn’t kill 1 million people, it kills everyone. We only get one try.
Am I the only one who, upon reading the title, pictured 5 people sitting behind OP all at the same time?
The group version of this already exists, in a couple of different versions:
Yeah, that is definitely fair
My model of gears to ascension, based on their first 2 posts, is that they’re not complaining about the length for their own sake, but for the sake of people they link this post to, who then bounce off because it looks too long. A basics post shouldn’t have the property that someone with zero context is likely to bounce off it. I think gears to ascension is saying that the nominal length (reflected in the “43 minutes”) is likely to make people who get linked to this post bounce off it, even though the length for practical purposes is much shorter.
Pinker has a book about writing called The Sense of Style
There seems to be a conflict between putting “self-displays on social media” in the ritual box, and putting “all social signalling” outside it. Surely the former is a subset of the latter.
My understanding was that the point was this: not all social signalling is ritual. Some of it is, some of it isn’t. The point was: someone might think OP is claiming that all social signalling is ritual, and OP wanted to dispel that impression. This is consistent with some social signalling counting as ritual.
I think the idea is to be able to transform this:
- item 1
- item 2
- item 3
into this:
- item 3
- item 1
- item 2
I.e. it would treat bulleted lists like trees, and allow you to move entire sub-branches of trees around as single units.
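A minimal sketch of how such a feature might work under the hood (the parsing approach and function names here are my own illustration, not anything from an actual editor): treat each top-level bullet together with its indented children as a single node, and move nodes whole.

```python
# Illustrative sketch: treat an indented bullet list as a tree and move
# a whole sub-branch (a bullet plus its indented children) as one unit.

def parse_items(lines):
    """Group each top-level bullet with the indented lines under it."""
    items = []
    for line in lines:
        if line.startswith("- "):   # a top-level bullet starts a new item
            items.append([line])
        else:                       # an indented line belongs to the last item
            items[-1].append(line)
    return items

def move_item(lines, src, dst):
    """Move the top-level item at index src to index dst, children included."""
    items = parse_items(lines)
    items.insert(dst, items.pop(src))
    return [line for item in items for line in item]

text = [
    "- item 1",
    "  - sub-item 1a",
    "- item 2",
    "- item 3",
]
print("\n".join(move_item(text, 2, 0)))
# item 3 moves to the front; item 1 keeps its sub-item attached
```

A real implementation would need to handle arbitrary nesting depth (moving a sub-sub-branch, not just top-level items), but the idea is the same: operate on the tree, then re-serialize.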
This isn’t necessarily a criticism, but “exploration & recombination” and “tetrising” seem in tension with each other. E&R is all about allowing yourself to explore broadly, not limiting yourself to spending your time only on the narrow thing you’re “trying to work on.” Tetrising, on the other hand, is precisely about spending your time only on that narrow thing.
As I said, this isn’t a criticism; this post is about a grab bag of techniques that might work at different times for different people, not a single unified strategy, but it’s still interesting to point out the tension here.
Cool, thanks!
I think the point was that it’s a cause you don’t have to be a longtermist in order to care about. Saying it’s a “longtermist cause” can be interpreted either as saying that there are strong reasons for caring about it if you’re a longtermist, or that there are not strong reasons for caring about it if you’re not a longtermist. OP is disagreeing with the second of these (i.e. OP thinks there are strong reasons for caring about AI risk completely apart from longtermism).
Not a programmer, but I think one other reason for this is that in at least some languages, a name has to be defined before the point where it’s actually used. In an interpreted language like Python, the code is executed top-down rather than compiled first, so the interpreter can’t just look later in the file to figure out what you mean: a call only works if the definition has already been executed by the time the call runs. So

```
def brushTeeth():
    putToothpasteOnToothbrush()
    ...

brushTeeth()

def putToothpasteOnToothbrush():
    ...
```

wouldn’t work, because brushTeeth() runs (and so tries to look up putToothpasteOnToothbrush()) before putToothpasteOnToothbrush() has been defined.
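To make the point concrete, here is a runnable sketch (flossTeeth is a made-up helper added for illustration): Python resolves a name inside a function body only when the call actually runs, so what matters is whether the definition has executed by then.

```python
# Names inside a function body are looked up at call time, not at def time.

def brushTeeth():
    return putToothpasteOnToothbrush()  # resolved only when brushTeeth() runs

def putToothpasteOnToothbrush():
    return "toothpaste applied"

print(brushTeeth())  # works: both defs have executed by the time this call runs

# By contrast, calling a function before its def has executed fails:
try:
    flossTeeth()
except NameError:
    print("NameError: flossTeeth is not defined yet")

def flossTeeth():
    return "flossed"
```

So the habit of defining helpers before they’re used is partly about when the calls execute, not just about the textual order of the definitions.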
Fyi, the link to your site is broken for those viewing on greaterwrong.com; it’s interpreting “—a” as part of the link.
Maybe have a special “announcements” section on the frontpage?
The way I like to think about this is that the set of all possible thoughts is like a space that can be carved up into little territories and each of those territories marked with a word to give it a name.
Probably better to say something like “set of all possible concepts.” Words denote concepts, complete sentences denote thoughts.
I’m curious if you’re explicitly influenced by Quine for the final section, or if the resemblance is just coincidental.
Also, about that final section, you say that “words are grounded in our direct experience of what happens when we say a word.” While I was reading I kept wondering what you would say about the following alternative (though not mutually exclusive) hypothesis: “words are grounded in our experience of what happens when others say those words in our presence.” Why think the only thing that matters is what happens when we ourselves say a word?
Master: Now, is Foucault’s work the content you’re looking for, or merely a pointer?
Student: What… does that mean?
Master: Do you think that the value of Foucault for you comes from the specific ideas he had, or from using him to even consider these two topics?
This put words to a feeling I’ve had a lot. Often I have some ideas, and use thinkers as a kind of handle to point to the ideas in my head (especially when I haven’t actually read the thinkers yet). The problem is that this fools me into thinking that the ideas are developed, either by me or by the thinkers. I like this idea of using the thinkers to notice topics, but then developing on the topics yourself, at least if the thinkers don’t take those topics in the direction you had in mind to take them.
On a different note, if you’re interested in Foucault’s methodology, some search terms would be “genealogy” and “conceptual engineering.” Here is a LW post on conceptual engineering, and here is a review of a recent book on the topic (which I believe engages with Foucault as well as Nietzsche, Hume, Bernard Williams, and maybe others; I haven’t actually read the full book yet, just this review). The book seems to be pretty directly about what you’re looking for: “history for finding out where our concepts and values come from, in order to question them.”
Yep, check out the Republic; I believe this is in book 5, or if not there, then book 6.
I suspect this is getting downvoted because it is so short and underdeveloped. I think the fundamental point here is worth making though. I’ve used the existence proof argument in the past, and I think there is something to it, but I think the point being made here is basically right. It might be worth writing another post about this that goes into a bit more detail.