I can’t wait to see the Cooperate/Defect ratio. I, for one, chose to cooperate.
lalaithion
The common justification trotted out (that it’s necessary to include the theoretically-possible transman who somehow can get pregnant and apparently suffers no dysphoria from carrying a fetus to term) is completely daft.
This is, as far as I can tell, completely false. Plenty of trans men carry fetuses to term. Plenty of trans men carried fetuses to term before they came out as trans men. Plenty of trans men decide to carry fetuses to term after they come out as trans men. A couple of facts I believe about the world that may help you make sense of this:
Not everyone experiences dysphoria the same way and in the same amount. Someone may experience pregnancy as an extreme negative, but have no feelings around facial hair. Someone may desire facial hair very strongly, but have no strong opinions on pregnancy at all.
Some people want to have their own children very strongly, and are willing to suffer considerably to achieve that, even if it means feeling dysphoric for 9 months.
This is the general feeling I get from a lot of this post: it represents a good understanding of the anti-trans side of the debate, and a good understanding of the rationalist interpretation of semantics applied to the trans debate, but it lacks understanding of the experiences of trans people, and it also lacks awareness that it is missing that understanding.
If anyone identifies to me as a woman, the same question and more: What am I supposed to do with this information? What new information has this communicated? Why should I care? Why does it matter?
The most basic piece of information that is being communicated here is that, assuming you speak English, the person would like you to use female-gendered terms (she/her/hers, actress instead of actor, etc.) for her. You touch on the rest with
Perhaps the theory here is there is an expectation that the word woman will (intentionally or not) dredge up in people’s minds everything else tangentially associated with the concept.
and I’m not sure why you discard this as worthless or deceptive. Maybe a better way of framing this is to translate “I identify as a woman” to “I believe you will do a better job of modeling my personality, desires, actions, and other ways of interacting with you if you use predictions from the ‘woman’ category you have in your mind instead of the ‘man’ category in your mind.”
Maybe you disagree that anyone in the world could be better modeled as a gender that was not their assigned gender at birth.
Likewise for nonbinary people. If someone tells you that they are nonbinary, they are telling you, “I would prefer for you to use gender-neutral terms to refer to me. If you associate me with your internal ‘man’ category or your internal ‘woman’ category, I believe you will make worse predictions of my actions than if you attempt to associate me with both or neither categories.”
This isn’t nearly as useless as telling someone your favorite shampoo brand. In case you were wondering, I prefer the most basic Pantene shampoo. Now you are able to predict things about how I buy shampoo better.
I am also nonbinary. Now you are able to predict things about how I interact with gender better.
If you use cut (or awk or sed for cutting), try https://github.com/sstadick/hck
If you use less or cat for source files, try https://github.com/sharkdp/bat
If you can’t ever remember the syntax for xargs (sorry, don’t have a 2nd other program), try https://www.gnu.org/software/parallel/
If you’re using standard command-line tools for munging CSVs (like cut, grep, sed, etc.), try https://github.com/BurntSushi/xsv
If you use grep (or ag), try https://github.com/BurntSushi/ripgrep
Each of these programs only improves quality of life a little, but together they make doing simple things without leaving the shell so much easier.
Yeah, this post makes me wonder if there are non-abusive employers in EA who are nevertheless enabling abusers by normalizing behavior that makes abuse possible. Employers who pay their employees months late without clarity on why and what the plan is to get people paid eventually. Employers who employ people without writing things down, like how much people will get paid and when. Employers who try to enforce non-disclosure of work culture and pay.
None of the things above are necessarily dealbreakers in the right context or environment, but when an employer does those things they are making it difficult to distinguish themself from an abusive employer, and also enabling abusive employers because they’re not obviously doing something nonstandard. This is highlighted by:
I relatedly think that the EA ecosystem doesn’t have reliable defenses against such predators.
If EAs want to have defenses against these predators, they have to act in such a way that the early red flags here (not being paid on time, no contracts, only verbal agreements) are actually serious red flags: non-abusive employers must categorically not engage in these practices, and more established EA employees should react in horror if they hear about them happening.
That may have, in fact, been the point. I doubt many people bothered to check.
However, true randomness is really hard to get by and computers usually use routines that produce numbers that look random, but aren’t really.
People commonly repeat this, but it isn’t really true. It’s really easy to build a random number generator that gets its randomness from unpredictable physical processes; just take a digital camera with its lens cap on, and use the shot noise in the video output.
Additionally, in a post about computationally limited agents, it’s worth noting that cryptographically secure PRNGs (CSPRNGs) pass every statistical test that runs in time polynomial in the seed length.
In practice, this means that real random numbers are extremely easy to come by, which is part of why the entire modern digital world works.
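To make the CSPRNG point concrete, here is a toy counter-mode construction built from SHA-256. This is a sketch for illustration only, not a production RNG (in real code you should draw from the OS entropy pool via os.urandom or the secrets module):

```python
import hashlib
import struct

def csprng_stream(seed: bytes, n_blocks: int):
    """Yield pseudorandom 32-byte blocks: SHA-256 applied in counter mode."""
    for counter in range(n_blocks):
        yield hashlib.sha256(seed + struct.pack(">Q", counter)).digest()

def random_floats(seed: bytes, n: int):
    """Turn the block stream into floats in [0, 1)."""
    out = []
    for block in csprng_stream(seed, n):
        # Interpret the first 8 bytes of each block as a 64-bit integer.
        out.append(struct.unpack(">Q", block[:8])[0] / 2**64)
    return out
```

Given the same seed the stream is fully deterministic, yet no polynomial-time test distinguishes its output from true randomness unless SHA-256 is broken.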
Orthogonality in design states that we can construct an AGI which optimizes for any goal. Orthogonality at runtime would be an AGI design that would consist of an AGI which can switch between arbitrary goals while operating. Here, we are only really talking about the latter orthogonality
This should not be relegated to a footnote. I’ve always thought that design-time orthogonality is the core of the orthogonality thesis, and I was very confused by this post until I read the footnote.
There was a critical moment in 2006(?) where Hinton and Salakhutdinov(?) proposed training Restricted Boltzmann machines unsupervised in layers, and then ‘unrolling’ the RBMs to initialize the weights in the network, and then you could do further gradient descent updates from there, because the activations and gradients wouldn’t explode or die out given that initialization. That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention on the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less.
Does anyone have a good summary of the pre-AlexNet history of neural nets? This comment and others about ReLUs contradict what I was taught in master’s-level CS AI/ML classes (in 2018), and what Ngo seems to assume in his model: that neural nets were mostly hardware-limited throughout their winter.
Turing completeness is definitely the wrong metric for determining whether a method is a path to AGI. My learning algorithm of “generate a random Turing machine, test it on the data, keep it if it does a better job than every other Turing machine I’ve generated so far, and repeat” is clearly Turing complete, and will eventually learn any computable process, but it’s very inefficient, and we shouldn’t expect AGI to be generated by that algorithm anytime in the near future.
Similarly, neural networks with one hidden layer are universal function approximators, and yet modern methods use very deep neural networks with lots of internal structure (convolutions, recurrences) because they learn faster, even though a single hidden layer is enough in theory to achieve the same tasks.
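To make the inefficiency concrete, here is a toy version of that blind-search learner. It samples random compositions of a few primitive functions (a deliberate simplification standing in for random Turing machines) and keeps the best fit so far:

```python
import random

# Tiny stand-in for "all Turing machines": compositions of primitives.
PRIMITIVES = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]

def random_program(max_len=4):
    """Sample a random program as a sequence of primitive functions."""
    return [random.choice(PRIMITIVES) for _ in range(random.randint(1, max_len))]

def run(program, x):
    for f in program:
        x = f(x)
    return x

def blind_search(examples, iters=20000):
    """Sample random programs; keep whichever fits the data best so far."""
    best, best_err = None, float("inf")
    for _ in range(iters):
        prog = random_program()
        err = sum(abs(run(prog, x) - y) for x, y in examples)
        if err < best_err:
            best, best_err = prog, err
        if best_err == 0:
            break
    return best, best_err
```

On a trivial target like f(x) = 2(x + 1) this stumbles onto a perfect program within a few thousand samples; anything less trivial blows up combinatorially, which is exactly the point.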
It’s worth noting that many of the people involved in AI risk have directly disagreed with this viewpoint, saying that their analysis yields probabilities of AGI-related X-risk that are far from negligible.
PPLs (probabilistic programming languages) are a tool to bring complicated statistical modeling to the masses. Computers are capable of much more advanced statistical modeling than what appears in most non-statistics papers, but most people don’t have the expertise to build those models. PPLs let you write complicated statistical models and then evaluate them with state-of-the-art inference methods without having to build everything from scratch.
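To see what a PPL automates, here is the machinery written out by hand for a toy model (coin bias with a uniform prior): a random-walk Metropolis sampler. In a PPL you would just declare the model in a few lines and get this inference, or something far better, for free:

```python
import math
import random

def log_posterior(p, heads, flips):
    """Uniform prior on p, binomial likelihood (up to an additive constant)."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

def metropolis(heads, flips, n_samples=20000, step=0.1):
    """Random-walk Metropolis: the generic inference engine a PPL hides."""
    p, samples = 0.5, []
    for _ in range(n_samples):
        proposal = p + random.gauss(0.0, step)
        log_accept = log_posterior(proposal, heads, flips) - log_posterior(p, heads, flips)
        if random.random() < math.exp(min(0.0, log_accept)):
            p = proposal
        samples.append(p)
    return samples
```

For 8 heads in 10 flips the chain concentrates around the analytic posterior mean of 0.75 (the posterior is Beta(9, 3)); the value of a PPL is that you never have to write, tune, or debug this loop yourself.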
I think that one thing you’re missing is that lots of people… use gender as a very strong feature of navigating the world. They treat “male” and “female” as natural categories, and make lots of judgements based on whether someone “is” male or female.
You don’t seem to do that, which puts you pretty far along the spectrum towards gender abolition, and you’re right, from a gender abolition perspective there’s no reason to be trans (or to be worried about people using the restroom they prefer or wearing the clothes they prefer or taking hormones to alter their body in ways they prefer).
But I think you’re expecting that most people act this way, and they don’t! For example, there are lots of people who would be uncomfortable doing X with/to/around a feminine gay man, but wouldn’t be uncomfortable doing X with/to/around a trans woman, even if the two hypothetical people look very similar.
Some examples of X that I have seen include:
Women sleeping in the same room or tent as this person
Muslim women not wearing a headscarf in their presence
Women going to a bathroom or changing room together
Straight men or lesbian women being attracted to this person
I don’t really know how to explain this any more than I already have. To lay it out simply:
Here is this thing, gender.
Lots of people care about gender a lot
It’s a valid position to say “I don’t care about this thing and don’t understand why anyone else does”
Nevertheless, understanding that people do care will help you better understand why a lot of stuff around gender happens.
Note: I am not trying to convince you to care about gender! I am merely trying to explain some of the ways other people, both trans and cis, care about gender.
“Republicans (30%) are approximately three times as likely as Democrats (11%) to agree with the statement, “Because things have gotten so far off track, true American patriots may have to resort to violence in order to save our country.” Agreement with this statement rises to 40% among Republicans who most trust far-right news sources and is 32% among those who most trust Fox News. One in four white evangelicals (26%) also agree that political violence may be necessary to save the country”
It’s more than 1-5%; it’s a sizable minority.
Hmm, I disagree with the “one intuition” way of looking at finances. Yes, you can’t drop your expenses by more than 100%, and you can increase your income by more than 100%, but what you really care about is increasing the ratio of income to expenses. In this context, halving your expenses is equivalent to doubling your salary, and if you drop your expenses to zero, that’s equivalent to increasing your income to infinity.
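A two-line check of the ratio framing (the income and expense figures are made up):

```python
income, expenses = 60_000.0, 40_000.0

# What matters is income / expenses. Doubling income and halving
# expenses move that ratio by exactly the same factor:
assert (2 * income) / expenses == income / (expenses / 2) == 3.0
```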
It would be better if it were an organization that merely had contradictory goals (maybe a degrowth anarcho-socialist group? A hardcore anti-science christian organization?) but wasn’t organized around the dislike of our group specifically.
I suppose one of my recent complaints about the rationalist community, on LessWrong and elsewhere, is that I feel like I am having to adjust my translation matrix away from the identity. There are certain subjects that I keep seeing people being wrong about, in the sense that they make more mistakes in a given direction, and then when they are called out on those mistakes, they acknowledge them in a begrudging way, with the tone of “I was wrong, but it’s unfair to call me out on being wrong for this issue”.
I’m purposefully avoiding mentioning any specific examples here, but I will note that in this essay, there were at least two times when I felt like what you were saying was outright false or implied something that was false, in the same manner as the statements in the Lincoln and Marx article. Usually I’m willing to overlook these moments in your writing, using the translation matrix I have built up, but it felt like carelessness to do that in an article condemning people for doing that.
I understand that without giving examples, I don’t open the door to very much useful discussion. That’s because I don’t expect the comment section here to be able to discuss this issue without devolving into object-level discussion, and because I don’t expect to have the conversational bandwidth to respond to comments to this very much.
There are tactics I have available to me which are not oriented towards truthseeking, but instead oriented towards “raising my status at the expense of yours”. I would like to not use those tactics, because I think that they destroy the commons. I view “collaborative truth seeking” as a commitment between interlocutors to avoid those tactics which are good at status games or preaching to the choir, and focus on tactics which are good at convincing.
Additionally,
I can just … listen to the counterarguments and judge them on their merits, without getting distracted by the irrelevancy of whether the person seems “collaborative” with me
I do not have this skill. When I perceive my partner in discourse as non-collaborative, I have a harder time honestly judging their counterarguments, and I have a harder time generating good counterarguments. This means discourse with someone who is not being collaborative takes more effort, and I am less inclined to do it. When I say “this should be a norm in this space”, I am partially saying “it will be easier for you to convince me if you adopt this norm”.
I think this is covered by preamble item −1: “None of this is about anything being impossible in principle.”
As other commenters have said, approximating integer ratios is important.
1:2 is the octave
2:3 is the perfect fifth
3:4 is the perfect fourth
4:5 is the major third
5:6 is the minor third
and it just so happens that these ratios are close to powers of the 12th root of 2.
2^(12/12) is the octave
2^(7/12) is the perfect fifth
2^(5/12) is the perfect fourth
2^(4/12) is the major third
2^(3/12) is the minor third
You can do the math and verify those numbers are relatively close.
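Written as frequency ratios (the larger frequency over the smaller), doing the math looks like this:

```python
just = {"fifth": 3/2, "fourth": 4/3, "major third": 5/4, "minor third": 6/5}
equal = {"fifth": 2**(7/12), "fourth": 2**(5/12),
         "major third": 2**(4/12), "minor third": 2**(3/12)}

for name, ratio in just.items():
    # Each equal-tempered interval lands within about 1% of the just ratio.
    print(f"{name:12s} just={ratio:.4f} equal={equal[name]:.4f}")
```

The fifth and fourth are off by about 0.1%, the thirds by a little under 1%, which is why the thirds are the intervals people complain about in equal temperament.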
It’s important to recognize that this correspondence was discovered relatively recently; it was developed independently in China in 1584 and in Europe in 1605, and coexisted with other schemes for approximating those ratios for hundreds of years, and there are still people who think that this system sucks and we should use a different one, because of minor differences in pitch. (Also, the Chinese system actually used 24th roots of 2, not 12th roots.) This system is called “Equal Temperament”, and there are many other tuning systems that make slightly different choices.
Why not just use the exact integer ratios instead of the approximate ones? Well, if you’re playing on a violin or singing, you can use exact integer ratios. But if you’re using a fixed-note instrument, like a guitar (with frets) or a piano, then you have to deal with the issue that if you go up 1 octave and 1 minor third from a note, and also go up three perfect fourths, you get two notes that are almost identical, but different enough to go out of tune. (This is called the Syntonic comma.) So which one do you put on the piano? If you choose one, the other will sound a little wrong. Or, you could choose the average, and they’ll both sound a little wrong.
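The syntonic comma mismatch can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

octave_plus_minor_third = Fraction(2, 1) * Fraction(6, 5)  # = 12/5
three_perfect_fourths = Fraction(4, 3) ** 3                # = 64/27

# The two paths to "the same" note miss each other by exactly 81/80,
# the syntonic comma (about 21.5 cents, easily audible).
assert octave_plus_minor_third / three_perfect_fourths == Fraction(81, 80)
```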
Did it! I’m shocked that my digit ratio is so high. Like, I figured that it was pretty high, being a bisexual genderfluid “man” (assigned at birth, that is), but I didn’t expect it to be greater than 1. Also, it was much shorter than I expected.