Homepage: https://dynomight.net
Twitter: https://twitter.com/dynomight7
I might not have described the original debate very clearly. My claim was that if Monty chose the “leftmost non-car door”, you still get the car 2⁄3 of the time by always switching and 1⁄3 by never switching. Your conditional probabilities look correct to me. The only thing you might be “missing” is that (A) occurs 2⁄3 of the time and (B) occurs only 1⁄3 of the time. So if you always switch, your chance of getting the car is still (chance of A)*(prob of car given A) + (chance of B)*(prob of car given B) = (2/3)*(1/2) + (1/3)*(1) = 2/3.
One difference (outside the bounds of the original debate) is that if Monty behaves this way there are other strategies that also give you the car 2⁄3 of the time. For example, you could switch only in scenario B and not in scenario A. There doesn’t appear to be any way to exploit Monty’s behavior and do better than 2⁄3 though.
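In case it's useful, here's a quick simulation sketch of this “leftmost non-car door” Monty (my own illustration, not something from the original debate; the contestant arbitrarily picks door 0):

import random

def play(switch):
    car = random.randrange(3)
    pick = 0  # contestant always picks door 0 (by symmetry this loses nothing)
    # Monty opens the leftmost door that isn't the pick and doesn't hide the car
    opened = min(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100000
print(sum(play(True) for _ in range(n)) / n)   # ~2/3 when always switching
print(sum(play(False) for _ in range(n)) / n)  # ~1/3 when never switching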
Just to be clear, when talking about how people behave in forums, I mean more “general purpose” places like Reddit. In particular, I was not thinking about Less Wrong where in my experience, people have always bent over backwards to be reasonable!
I have two thoughts related to this:
First, there’s a dual problem: given a piece of writing that’s along the Pareto frontier, how do you make it easy for readers whose utility functions might be aligned with the piece to find it?
Related to this, for many people and many pieces of writing, a large part of the utility they get comes from the comments. I think this leads to dynamics where a piece with less-than-optimal writing can get popular and then end up at a point on the frontier that’s hard to beat.
Done!
I loved this book. The most surprising thing to me was the answer that people who were there in the heyday give when asked what made Bell Labs so successful: they always say it was the problem, i.e. having an entire organization oriented towards the goal of “make communication reliable and practical between any two places on earth”. When Shannon left the Labs for MIT, people who were there immediately predicted he wouldn’t do anything of the same significance because he’d lose that “compass”. Shannon was obviously a genius, and he still did more afterwards than most people ever accomplish, but nothing as significant as what he did while at the Labs.
I thought this was fantastic, very thought-provoking. One possibly easy thing that I think would be great: links to a few posts that you think have used this strategy with success.
Thanks, I clarified the noise issue. Regarding factor analysis, could you check if I understand everything correctly? Here’s what I think is the situation:
We can write a factor analysis model (with a single factor) as

$x = wz + \epsilon$

where:

$x$ is observed data
$z$ is a random latent variable
$w$ is some vector (a parameter)
$\epsilon$ is a random noise variable
$\Sigma$ is the covariance of the noise (a parameter)

It always holds (assuming $z$ and $\epsilon$ are independent) that

$\mathrm{Cov}(x) = w w^\top + \Sigma.$

In the simplest variant of factor analysis (in the current post) we use $\Sigma = \sigma^2 I$, in which case you get that

$\mathrm{Cov}(x) = w w^\top + \sigma^2 I.$

You can check if this model fits by (1) checking that $x$ is Normal and (2) checking if the covariance of $x$ can be decomposed as in the above equation. (Which is equivalent to the covariance having all singular values the same except one.)

The next slightly-less-simple variant of factor analysis (which I think you’re suggesting) would be to use $\Sigma = \mathrm{diag}(\sigma^2)$ where $\sigma^2$ is a vector, in which case you get that

$\mathrm{Cov}(x) = w w^\top + \mathrm{diag}(\sigma^2).$

You can again check if this model fits by (1) checking that $x$ is Normal and (2) checking if the covariance of $x$ can be decomposed as in the above equation. (The difference is, now this doesn’t reduce to some simple singular value condition.)
Do I have all that right?
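For what it's worth, here's a rough numerical sketch (my own illustration, not anything from the post) of what check (2) looks like in the isotropic-noise case:

import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5, 200000, 0.3
w = rng.normal(size=d)                                # loading vector (parameter)
z = rng.normal(size=n)                                # latent variable, one per sample
x = np.outer(z, w) + sigma * rng.normal(size=(n, d))  # x = w*z + isotropic noise

eigvals = np.sort(np.linalg.eigvalsh(np.cov(x, rowvar=False)))[::-1]
print(eigvals)  # one eigenvalue near ||w||^2 + sigma^2, the rest near sigma^2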
Thanks for pointing out those papers, which I agree can get at issues that simple correlations can’t. Still, to avoid scope-creep, I’ve taken the less courageous approach of (1) mentioning that the “breadth” of the effects of genes is an active research topic and (2) editing the original paragraph you linked to so that it’s more modest, talking about “does the above data imply” rather than “is it true that”. (I’d rather avoid directly addressing 3 and 4, since I think doing those claims justice would require more work than I can put in here.) Anyway, thanks again for your comments; it’s useful for me to think of this spectrum of different “notions of g”.
Thanks, very clear! I guess the position I want to take is just that the data in the post gives reasonable evidence for g being at least the convenient summary statistic in 2 (and doesn’t preclude 3 or 4).
What I was really trying to get at in the original quote is that some people seem to consider this to be the canonical position on g:
Factor analysis provides rigorous statistical proof that there is some single underlying event that produces all the correlations between mental tests.
There are lots of articles that (while not explicitly stating the above position) refute it at length, and get passed around as proof that g is a myth. It’s certainly true that position 5 is false (in multiple ways), but I just wanted to say that this doesn’t mean anything for the evidence we have for 2.
Can I check if I understand your point correctly? I suggested we know that g has many causes since so many genes are relevant, and thus if you opened up a brain, you wouldn’t be able to “find” g in any particular place. It’s the product of a whole bunch of different genes, each of which is just coding for some protein, and they all interact in complex ways. If I understand you correctly, you’re pointing out that there could be a sort of “causal bottleneck”. For example, maybe all the different genes have complex effects, but all that really matters is how they affect neuronal calcium channel efficiency or something. Thus, if you opened up a brain, you could just check how efficient the calcium channels are and you’d be done. Is that right?
If this is right, I do agree that I seem to be over-claiming a bit here. There’s nothing that precludes the possibility of a “bottleneck” as far as I know (though it seems sorta implausible in my not-at-all-informed opinion).
I used python/matplotlib. The basic idea is to create a 3d plot like so:
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')  # 3D axes
Then you can add dots with something like this:
# X, Y, Z are arrays with your data coordinates
ax.scatter(X, Y, Z, alpha=.5, s=20, color='navy', marker='o', linewidth=0)
Then you save it to a movie with something like this:
import numpy as np
from matplotlib.animation import FuncAnimation

def update(i, fig, ax):
    # Rotate the camera by one degree of azimuth per frame
    ax.view_init(elev=20., azim=i)
    return fig, ax

frames = np.arange(0, 360, 1)
anim = FuncAnimation(fig, update, frames=frames, repeat=True, fargs=(fig, ax))
anim.save(fname, dpi=80, writer='ffmpeg', fps=30)  # fname is your output path, e.g. some .mp4
I’m sure this won’t actually run, but it gives you the basic idea. (The full code is a complete nightmare.)
Thanks for the reply. I certainly agree that “factor analysis” often doesn’t make that assumption, though it was my impression that it’s commonly made in this context. I suppose the degree of misleading-ness here depends on how often people assume isotropic noise when looking at this kind of data?
In any case, I’ll try to think about how to clarify this without getting too technical. (I actually had some more details about this at one point but was persuaded to remove them for the sake of being more accessible.)
if a trait is 80% heritable and you want to guess whether or not Bob has that trait then you’ll be 80% more accurate if you know whether or not Bob’s parents have the trait than if you didn’t have that information.
I think this is more or less correct for narrow-sense heritability (most commonly used when breeding animals) but not quite right for broad-sense heritability (most commonly used with humans). If you’re talking about broad-sense heritability, the problem is that you’d need to know not just whether the parents have the trait, but also which genes Bob actually got from each parent, as well as the effects of dominance, epistatic interactions, etc.
Assuming you’re talking about broad-sense heritability, I think a better way of looking at it would be to say that you’ll be 80% more accurate if Bob has an identical twin raised by a random family and you know if that twin had the trait. This isn’t quite right either, but I think it’s valid if you assume that phenotypic traits are the sum of genetic effects and environmental effects and also that genetic effects are independent of environmental effects.
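To illustrate the idealized version of that claim, here's a rough sketch (my own, and crudely reading “accuracy” off as a correlation) under exactly those assumptions, i.e. phenotypes are the sum of independent genetic and environmental effects with 80% of the variance being genetic; the correlation between identical twins raised apart then comes out to about 0.8:

import numpy as np

rng = np.random.default_rng(0)
n, h2 = 1000000, 0.8                             # h2: fraction of variance that's genetic
G  = rng.normal(scale=np.sqrt(h2), size=n)       # genetic effect, shared by both twins
E1 = rng.normal(scale=np.sqrt(1 - h2), size=n)   # environment of twin 1
E2 = rng.normal(scale=np.sqrt(1 - h2), size=n)   # environment of twin 2, raised apart
print(np.corrcoef(G + E1, G + E2)[0, 1])         # ~0.8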
Of course, few people have identical twins raised by random families, and most phenotypes probably aren’t additive in genetic and environmental effects, and those effects probably aren’t independent! Which… is a lot of caveats if you want to know practical applications of heritability numbers.
On the other hand, there is some non-applied scientific value in heritability. For example, though religiosity is heritable, the specific religion people join appears to be almost totally un-heritable. I think it’s OK to read this in the straightforward way, i.e. as “genes don’t predispose us to be Christian / Muslim / Shinto / whatever”. I don’t have any particular application for that fact, but it’s certainly interesting.
Similarly, schizophrenia has sky-high heritability (like 80%) meaning that current environments don’t have a huge impact on where schizophrenia appears. That’s also interesting even if not immediately useful.
If you’re worried about computational complexity, that’s OK. I didn’t mention it because (surprisingly enough...) it isn’t something that any of the doctors discussed. If you like, let’s call that a “valid cost”, just like the medical risks and financial/time costs of doing tests. The central issue is whether it’s valid to worry about information causing harmful downstream medical decisions.