Hahahaha that sounds like the worst value-for-money intervention I could possibly do to become sexier. I’ve heard the surgery is super painful and debilitating.
This is helpful. I started taking creatine but got lazy about it, I’ll get back on it.
As far as strength training, I started getting great female attention before I put on much muscle. I’ve become much more time-constrained since I work like 55 hours a week, so I only work out once or twice a week. Thanks for the recommendation on the youtube channel.
Exactly, this is the thing everyone else is optimizing for, so it’s tough to gain ground here relative to the other interventions.
Bumble, Hinge and Tinder.
I averaged that last time I was single. Should be able to get back there.
There is a failure mode here of overinvesting in status signals and underinvesting in being a pillar of your friend group.
I already have a good “status” so it’s not a priority anyway, relative to the other areas.
That’s helpful, thank you.
Do you know a trustworthy and concise source on how to do keto? The time it takes to find a non-terrible guide via Google sucks.
Haha yeah status is sexy!
The main reason is just that status is ambiguous between a “trait” and a “proof”. Status is attractive partly because mentally healthy, socially intelligent men rise in status faster. But there’s also an element of status being intrinsically useful, because it’s a resource for providing for a family.
The most efficient status-increasing interventions are all about presentation. Like, I could get a White House job to increase my status, but that would be super hard work. Earning the respect of my friends and advertising my career successes would also increase my status and would be way easier. So I’ll address it in the “proofs” post.
This is an interesting essay and seems compelling to me. Because I am insufferable, I will pick the world’s smallest nit.
The Wright Brothers took 4 years to build their first successful prototype. It took another 23 years for the first mass-manufactured airplane to appear, for a total of 27 years of R&D.
That’s true, but artisanal airplanes were produced in the hundreds of thousands before mass manufacture: roughly 200k airplanes served in WW1, just 15 years in. So call it 15 years of R&D.
Apologies if this has been said, but the reading level of this essay is stunningly high. I’ve read Rationality: A-Z and I can barely follow some passages. For example:
This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions. This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.
I think what Yud means here is that our genes had a base objective of reproducing themselves. The genes “wanted” their humans to make babies that were also reproductively fit. But the “real-world bounded optimization process” produced humans that sought different things, like sexual pleasure, food, and alliances with powerful peers. In the ancestral environment that worked, because sex led to babies, food led to healthy babies, and alliances led to protection for the babies. But once we built civilization, we started having sex with birth control as an end in itself, even letting it distract us from the baby-making objective. So the genes had this goal, but the mesa-optimizer (humans) was only aligned in one environment; when the environment changed, it lost alignment. We can expect the same to happen with our AI.
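For what it’s worth, here’s a toy sketch in Python of my reading (my own illustration, not anything from the post; the names like `sex_drive` and `babies_per_sex_act` are made up). It shows how a proxy objective can score perfectly during selection and then come apart from the base objective after an environment shift:

```python
import random

random.seed(0)

def base_objective(agent, env):
    # The genes' goal: actual babies. Reproduction happens only via sex,
    # scaled by how often a sex act yields a baby in this environment.
    return agent["sex_drive"] * env["babies_per_sex_act"]

def mesa_objective(agent):
    # What the evolved agent actually pursues: the pleasure proxy.
    return agent["sex_drive"]

# "Training": selection scores agents on the base objective in the
# ancestral environment, where sex reliably makes babies.
ancestral = {"babies_per_sex_act": 1.0}
population = [{"sex_drive": random.random()} for _ in range(1000)]
population.sort(key=lambda a: base_objective(a, ancestral), reverse=True)
survivors = population[:100]

avg = lambda f: sum(f(a) for a in survivors) / len(survivors)
print(avg(lambda a: base_objective(a, ancestral)))  # high: proxy and goal agree

# "Deployment": birth control sends babies_per_sex_act to zero.
modern = {"babies_per_sex_act": 0.0}
print(avg(lambda a: base_objective(a, modern)))  # ~0: base objective unmet
print(avg(mesa_objective))                       # still high: proxy pursued anyway
```

The selection process never “sees” the difference between the two objectives, because inside the training environment they are perfectly correlated; the misalignment only shows up after the shift.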
Okay, I think I get it. But there are so few people on the planet who can parse this passage.
Has someone written a more accessible version of this yet?
Okay, let’s do that backwards planning exercise.
In the long run, I want to do my research but live a low-stress and financially comfortable lifestyle. The traditional academic path won’t achieve that, because I would end up doing my research while leading a high-stress and financially fraught lifestyle. There are three possible solutions to the problem, in rough order of preference:

A. Pick a research agenda that is lucrative, so that I can supplement my income with consulting gigs and have a strong exit option

B. Learn to code and get a data science job, then do my research as a hobby

C. Get a government job related to my field (intelligence or aid)
Path A seems like the best one for both personal and EA reasons. Right now I split my time between writing on foreign investment and on cabinet formation. But only the foreign investment work might pay the bills; the cabinet work ends with me in the brutal academic rat race. However, the foreign investment research might or might not succeed depending on contextual factors like competition, my ability to build a brand, and the value of academic prestige in the field. So I should first try to figure out whether the investment-academia path is satisfying.
I want to find out if that works over the next 6 months or so while in my academic program.
If the returns are too small and the competition too stressful, I should pivot toward a programming career. It’s a well-paid industry with 40-hour weeks, and I could do my research as a hobby for 8 hours a week. That sounds like a lovely life too. So if I pick that path, I would deemphasize my research and focus on coding skills for interviews and on building career capital there.
I’m satisfied with that plan. The next question is, how do I stick to it? More on this later.
Just got Jason Brennan’s book. It’s very helpful!
That’s a good question, Barry.
Yes, I could do a 3-paper dissertation very easily. I just finished a first article on expropriation and succession crises; it has a shot at a top journal. I’m working on the next one, on succession crises and appointments. My professors tend to say that this isn’t enough, that I need a special, incredible dissertation where everything is laser-focused on one topic and tightly linked. They also say that 90% of students take more than 5 years. I’m honestly confused.
Thanks for sending the link. I go to Dr. Brennan’s school, so I can read the book then talk to him. Good idea!
> They’re almost as horrified as people who’ve tweeted for years about sex and astrology and pineal glands are to discover that half their mutuals are actually LessWrongers.
I cracked up at that
Thanks! An error in my markdown was causing most paragraph breaks not to appear. Fixed.
Jesus Christ. If I made that kind of money I could literally retire in a decade and then do whatever I want.
I learned to code in R pretty well during my PhD, and I do enjoy it. It’s usually relaxing, and solving a problem feels good when you get it. I’m better than my colleagues at debugging and troubleshooting our code (data engineering, mainly).
To be clear, you are talking about the salary for software engineers. Is that a better ladder than data science or data engineering? (My skills are currently closer to those fields.)
I did quite a bit of research on this afterward. It turns out there really isn’t good data; the best is from the APSA, but it’s full of holes. I did a tweet thread on it a while back.
I do have more publications than my competitors. Unfortunately, I have been repeatedly told in my program that publications do not matter and only the dissertation matters. Kind of sucks, but what can you do. Publishing is definitely a signal of value, so at least I have the skills to do a good dissertation. It just sucks that what I like doing (papers) isn’t rewarded.
The real kicker here is that even if I get the tenure-track job, it’s just not that great. For tenure track, the average pay is $75k ($60k for non-tenure). More importantly, the tenure process takes 6-8 years and is very stressful. So I would be on the treadmill of competition from 27 (now) to 38. I doubt I want that level of stress for that long.
So probably not my best option but we’ll see.
Also, it’s not that bad. I just finished a master’s for free, and I learned the classic causal inference methods. I can apply for sweet government jobs or government consulting, or learn to code.
Honestly, I wouldn’t choose being a professor over my other options even if I could skip there right now. The low salary and location suck. I feel kind of stupid for not realizing this earlier, but I was idealistic at the start.
Does anyone have a good piece on hedging investments for AI risk? Would love a read, thanks!