Perhaps raising your own superbabies will soon be the most Effective Altruism. You’re creating the next best thing to friendly AGI, in a context where the rest of the world is neglecting this low-hanging fruit. You could shape the first generation of smarter-than-unedited-human intelligence.
I think writing one of the best-selling books of your century is extraordinary evidence you’ve understood something deep about human nature.
Why do you think that? Many best-selling books are worthless for rationally understanding human nature. Best-seller status positively correlating with good rationality would be an unlikely coincidence unless you have some good explanation for it.
For example, LW or SSC have been way more enlightening to me than Tolkien’s stereotypical conservatism wrapped in rationalized justifications. But sure, that could just be me; you do you.
At this point, looking up to Tolkien seems like a likely result of the Halo Effect. His quotes make him seem way too overconfident in his philosophically dubious worldviews. I have a lot more respect for any random rationalist blogger than for someone that sure of himself yet obviously wrong in examples that are clear to us.
This theory also generates the prediction that being ungenerous to one’s own kin should be attractive.
Isn’t that what makes “Romeo and Juliet”-like stories “romantic”? When one forsakes one’s own genetic clan to elope with one’s mate, it signals extreme devotion to the mate and is therefore attractive.
In this comment, I want to focus narrowly on the use of the Challenger disaster reaction as an example of market capabilities. I think it’s a bad example.
I think it’s basically a very interesting story, the kind you’d tell at a party or in a TED talk to introduce the idea of prediction markets with an obviously extreme example.
But it is a mistake to confuse good storytelling with good empirical evidence. I don’t think this is valuable empirical evidence that markets can make these kinds of predictions.
As a quick, naive approximation: with 4 contractors on the mission, there’s already a 25% chance that the market would coincidentally single out the correct one as most at fault. That’s a very low bar; it shouldn’t cause you to update much in terms of Bayesian evidence.
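To make that concrete, here’s a minimal sketch of the Bayesian update. The 1/4 base rate comes from the four contractors; the 10% prior and the assumption that a genuinely capable market always fingers the right contractor are hypothetical numbers chosen purely for illustration:

```python
# Minimal sketch of the base-rate argument above. The 1/4 chance follows
# from the 4 contractors; the prior and the "skilled market" likelihood
# are hypothetical numbers, not claims from the original discussion.

p_correct_if_chance = 1 / 4   # random singling-out of 1 of 4 contractors
p_correct_if_skill = 1.0      # generously: a truly capable market never misses

likelihood_ratio = p_correct_if_skill / p_correct_if_chance  # = 4

prior = 0.10                  # hypothetical prior that the market has this capability
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.0f}")   # 4
print(f"posterior probability: {posterior:.2f}")     # ~0.31: a modest update
```

Even granting the market perfect accuracy under the “real capability” hypothesis, a single correct identification multiplies your odds by at most 4.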
Even after extensive research following the event, analysts could not identify the mechanism by which the market produced this result, if we assume it was not coincidence. The sources linked in this article quote the researchers thus:
“a perplexing situation that while markets appear to work in practice, we are not sure how they work in theory.”
Which is totally consistent with the null hypothesis of “win 2 coinflips”. As far as we can tell, it’s either insider info or coincidence, because there is no causal mechanism that derives this result from public information!

It is especially concerning that this seems like an appeal to the mysticism of markets. It reads as handwaving away the lack of an identifiable causal mechanism and saying “markets work because they’re basically magic, look at this magical-seeming example we found”.
If markets really are that efficient and we simply can’t figure out the mechanism because we’re too stupid, we would at least see tons of other examples. Can we get, like, another 100 examples like this one? I find it suspicious that we’re giving so much hype to an example from 1986. Surely there have been other flashy examples since then? If so, it would be more convincing to list all of them instead of focusing on this single example.
I realize that’s asking for a lot more effort, but that’s what it would take to actually make this convincing, because picking one example from 1986 that could have happened by winning 2 coinflips is close to useless for convincing me. Worse, it uses the Dark Art of storytelling, which may convince people when they shouldn’t be convinced.
I dunno, it’s not obviously the case, because I expect the average LWer to be more competent at this than the aggregate of an organization made up of the supposedly smartest and most capable medical scientists in the US. To be clear, it’s not that individuals in that organization are stupider than the average LWer, but that the organization as a whole might not be nimble enough to thread the needle on this kind of policy-making.