performance gap of trans women over women
The post is about the performance gap of trans women over men, not women.
I don’t know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps?). It’s much easier to infer that some third factor is likely at work than to know exactly what that factor is. I actually think most of the evidence in this very post supports the third-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; the claim that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; and the claim that most of the ostensible damage occurs before adulthood seems in tension with your smarter friends transitioning after high school.
I buy that trans women are smart, but I doubt “testosterone makes you dumber” is the explanation; more likely some third factor raises IQ and lowers testosterone.
I think using the universal prior again is more natural. It’s simpler to use the same complexity metric for everything; it’s more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be approximately 2^-(K(world) + K(claw)), i.e. their Kolmogorov complexities add; and the universal prior dominates the inverse square measure but the converse doesn’t hold.
If you want to pick out locations within some particular computation, you can just use the universal prior again, applied to indices to parts of the computation.
If you’re running on the non-time-penalized solomonoff prior[...]a bunch of things break including anthropic probabilities and expected utility calculations
This isn’t true; you can get perfectly fine probabilities and expected utilities from ordinary Solomonoff induction (barring computability issues, ofc). The key here is that SI is defined in terms of a UTM whose set of valid programs forms a prefix-free code, which by the Kraft inequality automatically guarantees that the program weights sum to at most 1, etc. This point is often glossed over in popular accounts.
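To make the prefix-free point concrete, here’s a toy sketch of the Kraft inequality (not Solomonoff induction itself, which is uncomputable; the “programs” here are just made-up binary strings):

```python
# Toy illustration: prefix-freeness is what makes the 2^-length
# weights a well-defined (sub)probability distribution.

def kraft_sum(codewords):
    """Sum of 2^-len(w) over a set of binary codewords."""
    return sum(2.0 ** -len(w) for w in codewords)

def is_prefix_free(codewords):
    """Check that no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

# A prefix-free set of "programs":
programs = ["0", "10", "110", "111"]
assert is_prefix_free(programs)
assert kraft_sum(programs) <= 1.0  # Kraft inequality holds

# A non-prefix-free set can exceed 1, so naive 2^-length
# weights would not normalize into probabilities:
bad = ["0", "00", "01", "1"]
assert not is_prefix_free(bad)
assert kraft_sum(bad) > 1.0
```

This is the sense in which “probabilities adding up to less than 1” comes for free once the machine’s valid programs form a prefix-free code.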
certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved
You can add the Born probabilities in with minimal additional Kolmogorov complexity: simply stipulate that worlds with a given amplitude occur with the probability given by the Born rule (this does admittedly weaken the “randomness emerges from indexical uncertainty” aspect...)
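The stipulation itself is a one-liner. Here’s a minimal sketch, with made-up branch amplitudes purely for illustration:

```python
# Given branch amplitudes, stipulate each branch gets probability
# |amplitude|^2, normalized. The amplitudes below are hypothetical.

def born_probabilities(amplitudes):
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# An equal-weight two-branch superposition:
amps = [complex(1, 0) / 2 ** 0.5, complex(0, 1) / 2 ** 0.5]
probs = born_probabilities(amps)
assert all(abs(p - 0.5) < 1e-12 for p in probs)
```

The Kolmogorov-complexity cost of this rule is small precisely because it’s this short to state.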
Having briefly looked into complexity science myself, I came to similar conclusions: it’s mostly a random hodgepodge of various fields arranged in a sort of impressionistic tableau, plus an unsystematic attempt at studying questions of agency and self-reference.
That is, I think humans generally (though not always) attempt to avoid death when credibly threatened, even when they’re involved in a secret conspiracy to overthrow the government.
This seems like a misleading comparison, because human conspiracies usually don’t try to convince the government that they’re perfectly obedient slaves even unto death; everyone already knows that humans aren’t actually like that. If we imagine a human conspiracy that did maintain some such widespread deception, it seems more plausible that they would keep up the deception even in the face of death (like, maybe, uh, some group of people pretending to be fervently religious and to have no fear of death, or something)
Statements can be epistemically legit or not. Statements have content; they aren’t just levers for influencing the world.
I mean it’s epistemically legitimate for him to bring them up. They are in fact evidence that Scott holds hereditarian views.
Now, regarding the “overall” legitimacy of calling attention to someone’s controversial views: it probably does have a chilling effect, and it threatens Scott’s livelihood, which I don’t like. But I think that continuing to be mad at Metz for his sloppy inference doesn’t really make sense here. Sure, maybe at the time it was tactically smart to feign outrage that Metz would dare to imply Scott was a hereditarian, but now that we have direct documentation of Scott admitting exactly that, it’s just silly. If you’re still worried about Scott getting canceled (seems unlikely at this stage tbh), it’s better to just move on and stop drawing attention to the issue by bringing it up over and over.
But was Metz acting as a “prosecutor” here? He didn’t say “this proves Scott is a hereditarian” or whatever; he just brought up two instances where Scott said things in a way that might lead people to make certain inferences... correct inferences, as it turns out. Like yeah, maybe it would have been more epistemically scrupulous to say “these articles represent two instances of a larger pattern which is strong Bayesian evidence even though they are not highly convincing on their own,” but I hardly think this warrants remaining outraged years after the fact.
How is Metz’s behavior here worse than Scott’s own behavior defending himself? After all, Metz doesn’t explicitly say that Scott believes in racial IQ differences; he just mentions Scott’s endorsement of Murray in one post and his account of Murray’s beliefs in another, in a way that suggests a connection. Similarly, Scott doesn’t explicitly deny believing in racial IQ differences in his response post; he just lays out the context of the posts in a way that suggests the accusation is baseless. (Perhaps you think Scott’s behavior is locally better? But he was following a strategy of covertly communicating his true beliefs while making any individual instance look plausibly deniable, so he was optimizing against “locally good behavior” tracking truth, and it seems perverse to give him credit for this.)
“For my friends, charitability—for my enemies, Bayes Rule”
ZMD: Looking at “Silicon Valley’s Safe Space”, I don’t think it was a good article. Specifically, you wrote,
In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.” In another, he pointed out that Mr. Murray believes Black people “are genetically less intelligent than white people.”
End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.
So on the one hand, this particular paragraph does seem to misleadingly imply that Scott endorsed views on race/IQ similar to Murray’s, even though, based on the quoted passages alone, there is little reason to think that. On the other hand, it’s totally true that Scott was running a strategy of bringing up or “arguing” with hereditarians with the goal of broadly promoting those views in the rationalist community, without directly being seen to endorse them. So I think it’s actually pretty legitimate for Metz to bring up incidents like this or the Xenosystems link in the blogroll. Scott was communicating his views in a plausibly deniable way: saying many little things, each of which is more likely if he was a secret hereditarian, but no one of which is so damning on its own. So I feel it’s total BS to then complain about how tenuous the individual instances Metz brought up are; he’s using them as examples of a larger trend, which is inevitable given the strategy Scott was using.
(This is not to say that I think Scott should be “canceled” for these views or whatever, not at all, but at this stage the threat of cancelation seems to have passed and we can at least be honest about what actually happened)
This seems significantly overstated. Most subjects are not taught in school to most people, but they don’t thereby degrade into nonsense.
Why should Michael Burry have assumed that he had more insight about Avant! Corporation than the people trading with him?
Because he did a lot of research and “knew more about the Avant! Corporation than any man on earth”? If you have good reason to think you’re the one with an information advantage, trades like this can be rational. Of course it’s always possible to be wrong about that, but there are enough irrational traders out there that it’s not ruled out. Also note that your counterparties don’t actually need to be irrational on average; it’s enough that there are irrational traders somewhere in the broader ecosystem, since they can “subsidize” moderately informed trading by others (an advantage you can capture in individual cases)
An amended slogan that more accurately captures the phenomenon the post is trying to point to would be “Conditional on your trade seemingly not creating value for your counterparty, your trade likely wasn’t all that good”.
Tangentially related: some advanced meditators report that their sense that perception has a center vanishes at a certain point along the meditative path, and this is associated with a reduction in suffering.