https://mentalengineering.info/
Trans rights! End all suffering!
Apparently the left-leaning stuff I wrote on here got censored and only the shit I now disagree with remains.
I have taken the survey. :)
I recently prevented myself from taking on at least 10 micromorts of risk (and an increased copay) by noticing a medical error before undergoing a procedure.
While I’d rather not go into the exact details of my medical history, I’ll say that the procedure I was supposed to (and did) undergo didn’t require anesthesia, and was less invasive than the procedure the hospital staff would otherwise have performed, which would have required anesthesia. I’m using 10 micromorts as an estimate of the risk I avoided because a quick Google search suggests that this is the average (though ostensibly age-independent) risk incurred by undergoing anesthesia.
The surgeon/diagnostician who was in charge of my case later claimed that he would have noticed the error before he began working on me. Regardless, taking an active role in my medical care felt nice.
One time, Yvain mentioned on SlateStarCodex that it was surprising that he didn’t have more conservative acquaintances. The reason for that was that people who are similar along certain axes tend to cluster together. So, when I say that you’re a beautiful person, you can be sure that that’s true, because everyone with a connection to this site holds a resilient spark within themselves that sings of hope for the future of humankind.
Awesomeness clusters here.
You are deserving of friendship and love. Do you know how uncaring the world is? The world has not praised you for being good at changing your mind. It has valued neither your intelligence, nor your ability to have an impact, to the degree to which these things ought to be valued. You are valuable. Remember this.
It is plausible that a few LessWrong readers have information which would let them create a portfolio which would, on average, perform better than the market. For the large majority of us, though, knowing about overconfidence bias and the law of large numbers should be enough to convince us that putting most of our savings in an index fund is a good idea.
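As a toy illustration of why the law of large numbers favors index funds, here is a small simulation sketch. All of the numbers (independent stocks, a 7% mean annual return, a 30% standard deviation) are made-up assumptions of mine, not claims about real markets; the point is only that an equal-weight basket keeps the expected return while shrinking the spread of outcomes roughly as 1/√N.

```python
import random
import statistics

random.seed(0)

# Assumed toy model: each stock's annual return is an independent
# draw with mean 7% and standard deviation 30%.
MEAN, SD, N_STOCKS, N_TRIALS = 0.07, 0.30, 100, 10_000

def stock_return():
    return random.gauss(MEAN, SD)

# Outcome distributions: holding one stock vs. an equal-weight
# "index" of 100 such stocks.
single = [stock_return() for _ in range(N_TRIALS)]
index = [statistics.mean(stock_return() for _ in range(N_STOCKS))
         for _ in range(N_TRIALS)]

print(f"single stock: mean {statistics.mean(single):+.3f}, sd {statistics.stdev(single):.3f}")
print(f"index fund:   mean {statistics.mean(index):+.3f}, sd {statistics.stdev(index):.3f}")
```

Both portfolios have the same expected return, but the index’s standard deviation is about a tenth of the single stock’s, which is the whole case for diversifying unless you genuinely have an informational edge.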
I started keeping a diary about a month ago. The two initial reasons I had for adopting this habit were that, first of all, I thought that I would enjoy writing, and second of all, I wanted to have something relaxing to do for half an hour before my bedtime every evening, because I often have trouble getting to sleep at night.
I have found that I generally end up writing about my day-to-day social interactions in my journal. One really nice benefit of journaling that I hadn’t expected was that writing has helped me weakly precommit to actions that make me more sociable. For example, a few weeks back, there were a couple of nights where I wrote about feeling bad that a new transfer student at my school didn’t seem to know anyone in the class we had together. A couple of days later, I asked him to hang out with me, which is something I normally would have been too shy to do.
Another thing I learned is that writing about your problems can help you digest them in useful ways. On a meta level, I think that writing about my social interactions has helped me realize that I want to spend more time with my friends, at the expense of time spent reading, e.g., posts on Reddit. Looking back, it is painfully obvious that spending time with my friends is much better than spending time on random internet sites, though I hadn’t explicitly realized I was failing to do so until I wrote about it.
Actually, even before I started journaling, I knew that thinking through problems by writing about them or making diagrams is generally helpful; after all, plenty of people benefit from drawing pictures when stuck on, say, math problems. What wasn’t obvious to me was that problems other than math and science problems can also be analyzed by writing about them or diagramming them. In short, I found a way, previously unknown to me, to identify and solve problems in my life.
Regardless of whether or not advancedatheist has been abusing the voting system, I’d like him to stop posting about involuntary celibacy (incel) entirely on LW. Though I sympathize with his plight—people don’t ever deserve to be in a state of mental strife, or experience anything that feels like suffering—his posts on incel mostly don’t attract quality replies, and probably scare people off. Moreover, he hasn’t stopped posting about this despite having been consistently downvoted.
Are there any appropriate forums where he might be able to post about incel to a more receptive audience? Don’t neoreactionaries tend to be sympathetic to incel folks?
To answer your questions:
SENS has a page that might help answer the first question you posed above.
You could email Aubrey de Grey and ask for ideas. (The page linked above suggests that he is quite open to emails from intelligent people interested in doing anti-aging research, so don’t let his internet fame stop you from sending him a note.)
In response to 2, I would say that it seems like you are already highly skilled, such that you could dive in and tackle any problem(s) you decide to start working on immediately. People gain skills by working on hard problems, so it doesn’t seem necessary for you to take additional time to explicitly hone your skill set before starting on any project(s) that you want to work on.
I support the idea of having a recurring ‘Instrumental Rationality Questions’ thread.
GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach’s scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program.
This seems particularly horrifying. Everyone already knows that you’re incentivized to play up the effectiveness of the charities you recommend. So deciding not to check back on a charity you’ve recommended, explicitly because you know you’re unable to show that something went well when you predicted it would, is a very bad sign; it should be a reason to do the exact opposite, i.e., to go back and publish an after-the-fact retrospective of long-run results. If anyone was looking for more evidence on whether to take GiveWell’s recommendations seriously, well, here it is.
Wow, thanks!
Maybe this means that I can emigrate to Equestria someday. Yay!
Do you have any papers or other resources on why freezing one’s cells would be a good idea for transhumanists? I think that we’d be interested in hearing you elaborate on why you think any given method of freezing cells would be worthwhile, which isn’t something that you’ve discussed in the above post.
To be fair, your readers can Google things, too—but in general, it is really nice when people who make posts give readers a bit of background knowledge on the topic the post is about, especially when the topic (freezing cells) is something that isn’t commonly discussed on LW.
I think that there can be a difference between being Frodo’s Sam, and being a real-life hero’s personal assistant/sidekick/support. In the former case, Sam is fighting orcs, hiking through treacherous mountain passes, dealing with Sméagol, etc., which is quite similar to what Frodo is doing; in the latter case, the job of the secretary/personal assistant would be much different from the job of the real-life hero. I would be happy to be Frodo’s Sam, but lukewarm about being, say, Bostrom’s personal assistant.
I agree that SENS is likely the best place to send donations to promote longevity research.
Actually, it’s a shame that longevity research doesn’t get mentioned by the Effective Altruism movement very often. I’m casually wondering whether there might be enough value in a GiveWell-like nonprofit evaluator focused on longevity research to justify creating one. Note that Animal Charity Evaluators is a GiveWell-like evaluator for animal charities, which means that this sort of thing has been done before.
This having been said, Aubrey de Grey already seems incentivized to fund the most cost-effective anti-aging research first, so directly funding SENS might be everyone’s best bet.
I think that Merlin and Alicorn should be praised for Merlin’s good behavior. :)
I was happy with the Berkeley event overall.
Next year, I suspect it would be easier for someone to talk to the guardian of a misbehaving child if a specific person were tasked with doing so. This could be one of the main event organizers, or someone they directly designate. Diffusion of responsibility is a strong force.
How much money would it take to engineer biological immortality for at least half of the world’s population, within 20 years, with 99% confidence?
More than the entire world’s GDP.
Really, though, 99% confidence is too strong a claim to be throwing around for problems this hard to solve. Things like how the money would be spent matter a lot, too.
If LW was archived without a proper replacement, I’d either move my posts to a website like FiMFiction (which wouldn’t be a very good alternative), or, more likely, stop posting and commenting on rationality-related stuff entirely.
There’s actually a noteworthy passage on how prediction markets can fail in one of Dominic’s other recent blog posts, one that I’ve been wanting to get a second opinion on for a while:
NB. Something to ponder: a) hedge funds were betting heavily on the basis of private polling [for Brexit] and b) I know at least two ‘quant’ funds had accurate data (they had said throughout the last fortnight their data showed it between 50-50 and 52-48 for Leave and their last polls were just a point off), and therefore c) they, and others in a similar position, had a strong incentive to game betting markets to increase their chances of large gains from inside knowledge. If you know the probability of X happening is much higher than markets are pricing, partly because financial markets are looking at betting markets, then there is a strong incentive to use betting markets to send false signals and give competitors an inaccurate picture. I have no idea if this happened, and nobody even hinted to me that it had, but it is worth asking: given the huge rewards to be made and the relatively trivial amounts of money needed to distort betting markets, why would intelligent well-resourced agents not do this, and therefore how much confidence should we have in betting markets as accurate signals about political events with big effects on financial markets?
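To make the incentive he describes concrete, here is a back-of-envelope sketch. Every number in it (the stake sizes, the probabilities, the 5% edge) is an illustrative assumption of mine, not a figure from the post; the point is only that the expected loss from distorting a small betting market can be tiny next to the expected gain on a large financial position informed by the true probability.

```python
# Hypothetical scenario: a fund privately believes P(Leave) = 0.50
# while the betting market implies only 0.30.
p_true = 0.50
p_market = 0.30

# Cost of distortion: bet on Remain at unfavourable odds to push the
# price further from the truth. Buying Remain at an implied 0.70 when
# its true probability is 0.50 loses (0.70 - 0.50) / 0.70 per pound.
distortion_stake = 2e6  # assumed £2m stake
distortion_expected_loss = distortion_stake * (p_true - p_market) / (1 - p_market)

# Payoff: a much larger financial-market position (e.g. short GBP)
# that profits if the mispricing persists until the result.
financial_position = 500e6  # assumed £500m notional
edge_per_pound = 0.05       # assumed extra profit per £ if Leave wins

expected_gain = financial_position * edge_per_pound * p_true
net = expected_gain - distortion_expected_loss
print(f"expected loss from distorting bets:    £{distortion_expected_loss:,.0f}")
print(f"expected gain on financial position:   £{expected_gain:,.0f}")
print(f"net expected value of the scheme:      £{net:,.0f}")
```

Under these made-up numbers, the distortion costs well under £1m in expectation against a double-digit-million expected gain, which is exactly the asymmetry that makes Cummings’s question worth pondering.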
I completed the survey, huzzah!