It’s safer to read the text of a person’s blog than to engage in higher-bandwidth interactions that shape your personal life. The examples in the OP suggest you should be extra conservative about things like starting a business venture or sharing personal secrets. Those are not concerns if you are only reading EY’s posts on the internet, so that is safe: his ability to impact you is limited.
Ashin
wtf is this
if this is fiction, it’s bad fiction: it’s uninteresting and obtuse, so the reader skips ahead before wasting more time, and before understanding anything. Try writing better.
Never thought I’d leave generic internet hate comments like this on LW, but here we are with curated “fiction”-slop on LW.
Boo
I wouldn’t normally make replies like this but bad posts and bad curation deserve feedback.
If the costly signal hypothesis is true, it is really a self-sabotaging memeplex for those religious nuts, since the larger society should probably consider cost-benefit ratios, and funding medical practitioners’ training in an obscure language meant to be costly[1] is probably not the way to buy the most QALYs. You don’t even have to be anti-religious; you just have to be EA to see that, if you’re egalitarian, helping people who make themselves cheaper to help is more effective.
[1] So if it ever becomes cheap, as with the LLM suggestion, then it no longer works.
I dunno, it’s not obviously the case, because I expect the average LWer to be more competent at this than the aggregate of an organization made up of the supposedly smartest and most capable medical scientists in the US. To be clear, it’s not that individuals in that organization are stupider than the average LWer, but that the organization as a whole might not be nimble enough to thread the needle on this kind of policy-making.
Perhaps it will soon be the most Effective Altruism to raise your own superbabies. You’re creating the next best thing to friendly AGI, in a context where the rest of the world is neglecting this low-hanging fruit. You could shape the first generation of smarter-than-unedited-human intelligence.
I think writing one of the best-selling books of your century is extraordinary evidence that you’ve understood something deep about human nature.
Why do you think that? Many best-selling books are worthless for rationally understanding human nature. Best-seller status positively correlating with good rationality would be an unlikely coincidence unless you have some good explanation for it.
For example, LW or SSC have been way more enlightening to me than Tolkien’s stereotypical conservatism wrapped in rationalized justifications. But sure, that could just be me, you do you.
At this point, looking up to Tolkien seems like a likely result of the halo effect. His quotes make him seem far too overconfident in his philosophically dubious worldviews. I have a lot more respect for any random rationalist blogger than for someone that sure of himself yet obviously wrong in some cases that are clear to us.
This theory also generates the prediction that being ungenerous to one’s own kin should be attractive.
Isn’t that what makes “Romeo and Juliet”-like stories “romantic”? When one forsakes one’s own genetic clan to elope with a mate, it signals extreme devotion to that mate and is therefore attractive.
In this comment, I merely want to focus on the use of the Challenger disaster market reaction as an example of market capabilities. I think it’s a bad example.
I think it’s basically a very interesting story that would be good to tell people at a party or in a TED talk to introduce people to the idea of prediction markets by using an obviously extreme example.
I think it is a mistake to confuse good storytelling with good empirical evidence. I don’t think it’s valuable empirical evidence about the market being able to make these kinds of predictions.
As a quick naive approximation: with 4 contractors on the mission, there’s already a 25% chance that one of them would be coincidentally, correctly identified as the one most at fault. That’s a very low bar; it shouldn’t cause you to update much in terms of Bayesian evidence.
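That base rate is easy to sanity-check with a quick simulation, under the assumption that the market’s apparent “pick” is just a uniform random choice among the four contractors (a hypothetical null model for illustration, not anything claimed by the sources):

```python
import random

random.seed(0)
TRIALS = 100_000

# Null model: the market singles out one of 4 contractors uniformly at
# random; contractor 0 is (arbitrarily) the one truly most at fault.
hits = sum(random.randrange(4) == 0 for _ in range(TRIALS))

print(hits / TRIALS)  # hovers around 0.25, i.e. winning 2 coinflips
```

So under pure chance, one flashy “correct identification” is exactly as surprising as calling two coinflips in a row.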
Even after extensive research following the event, analysts could not identify the mechanism by which the market produced this result, if we assume it was not a coincidence. The sources linked in this article quote the researchers thus:
“a perplexing situation that while markets appear to work in practice, we are not sure how they work in theory.”
Which is totally consistent with the null hypothesis of “win 2 coinflips”. As far as we can tell, it’s either insider info or coincidence, because there is no causal mechanism to derive this result from public information! It is especially concerning that this seems like an appeal to the mysticism of markets. One is handwaving away the lack of identifiable causal mechanisms and saying “markets work because they’re basically magic, look at this magical-seeming example we found”.
If markets really are that efficient and we simply can’t figure out the mechanism because we’re too stupid, we would at least see tons of other examples. Can we get like another 100 examples like this one? I just find it suspicious that we’re giving so much hype to an example from 1986. Surely there have been some other flashy examples since then? If so, it would be more convincing to list all of them instead of focusing on this single one.
I realize that’s asking for a lot more effort, but that’s what it would take to actually make this convincing, because picking one example from 1986 that could have happened by winning 2 coinflips is close to useless in convincing me. Worse, it uses the Dark Art of storytelling, which may convince people when they shouldn’t be convinced.
I think you’re probably correct about the general strategy, but what about this:
Since AI safety is currently non-partisan, if one safety professional picks a political team, the audience that turns away from them is still not turned away from AI safety; there are other safety professionals who will pick the other political team. If there are safety professionals on both teams that you see tweets from no matter which sports team bubble you’re in, doesn’t that still bode well for AI safety?