Comply’s foam tips: they replace the more common tips for in-ear earphones and isolate you from the outside world much more than noise canceling does. It’s basically earphones plus earplugs, thanks to the foam. If you live in a noisy environment they may radically change your life for the better. You need to learn the correct procedure to fit them properly (it’s easy; you can find videos on how to do it for earplugs, and the procedure is the same). I recommend them for watching movies or reading/studying while listening to nature sounds.
A gaming console: I bought a PS4 a couple of years ago and it has been one of my best decisions of the last four years. This holds only if you already plan to allocate some time to gaming, and if you manage not to get addicted to it.
A beach chair, or a big chair with a similar inclination, to read more comfortably without falling asleep (the problem with reading in bed). I can’t recommend a specific chair, because I don’t know exactly where you can find mine.
Tablet: much better for reading anything on the internet. I find it strains my eyes much less. I can’t recommend a particular one… I owned two and both were great.
A Kindle e-reader: reading becomes much cheaper and easier, thanks to the lower weight and having many books in one place. As a result you will probably read more.
A big height-adjustable desk paired with a big height-adjustable chair.
I have probably left something out.
I really like this post. I think it is probably also relevant from an Effective Altruism standpoint (you identify a tractable and neglected approach which might have a big impact). I think you should probably crosspost this to the EA Forum, and consider whether your other articles on the topic are also suitable for publication there. What do you think?
If you read my profile both here and on the EA Forum you’ll find many articles in which I try to evaluate aging research. I’m making this suggestion because I think you are adding useful pieces.
Absolutely no one had thought of that in the YouTube comment section under his interview with JRE
He would probably say that he doesn’t care (he works for others, not for himself) and that alcohol doesn’t affect him; people have already raised these points, and those were his answers. To be honest, this whole thing isn’t that interesting to me, and I would classify it as weak evidence about what he believes or doesn’t believe. Mostly it’s just gossip.
Wow, ok, thank you. This is useful information. To be honest, I didn’t take your ADHD/ADD hypothesis seriously, but now that you’ve specified the nature of the diagnostic test, it makes much more sense. I will research it further and get tested.
No, my experience comes from the gameplay videos I have seen. From what I’ve seen it seems very easy to communicate (via voice chat) and interact with environments, which are also very customizable. I don’t know anything beyond this.
Thank you, I think I will try to pay attention to whether some “flickering” happens. It is a possibility.
It’s uncanny how sometimes we all arrive at the same conclusions privately
Regarding “If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%”: one similar survey has already been done. The result depends on whether you specify that an unlimited lifespan would be spent in good health rather than in increasing frailty. If you do, more than 40% of respondents opt for an unlimited lifespan; otherwise, only 1% do. https://www.frontiersin.org/articles/10.3389/fgene.2015.00353/full
Would it be possible and cost-effective to release video courses at a much lower cost?
I know this conversation is very old and Holden has since matured his outlook on the subject (see Open Philanthropy’s grants to aging research and its analysis of aging research, although it is still dismissive of SENS), but I still want to point out what I think were the mistakes he made here.
Holden didn’t seem to grasp how different in scope SENS’ plan is from the kind of research a single brilliant researcher can carry forward in the traditional way. SENS requires a plethora of different therapies whose development would need an entire NIA to itself… and even that would cover only the first phases of research, not clinical trials. I don’t understand how he could be confused about this. Quoting Holden:
You [Aubrey] state that you have a high-expected-value plan that the academic world can’t recognize the value of because of shortcomings such as “balkanisation” and risk aversion. I believe it may be true that the academic world has such problems to a degree; however, I also believe that there are a lot of extremely talented people in academia and that they often (though not necessarily always) find ways to move forward on promising work.
Also, I’m confused about why Holden put so much weight on Dario Amodei’s opinion over Aubrey’s. Dario is an AI researcher.
[...] And as my summary of our conversation shows, he [Dario] acknowledges that the world of biomedical research may have certain suboptimal incentives, but didn’t seem to think that these issues are leaving specific, visible outstanding research programs on the table the way that your email implies. [...]
Thankfully, the present-day Open Phil Holden obviously no longer thinks this is the case.
Regarding life extension, see the SENS Research Foundation as an example of a specific org very focused on the moonshot, if you don’t already know it.
Submission (for low bandwidth Oracle)
Any question such that a correct answer to it should very clearly benefit both humanity and the Oracle. Even if the Oracle has preferences we can’t fully guess, we can probably still say that such questions could concern the survival of both humanity and the Oracle, or the survival of the Oracle (or its values) alone. This is because, even if we don’t know exactly what the Oracle is optimising for, we can guess that for the vast majority of its possible preferences it will not want to destroy itself. So its answers will give humanity more power to protect both, or at least the Oracle.
Example 1: let’s say we discover the location of an alien civilisation, and we want to minimise the chances of it destroying our planet. Then we must decide what actions to take. Let’s say the Oracle can only answer “yes” or “no”. Then we can submit questions about whether we should take a particular action. I suspect this kind of situation falls within the more general case of “use the Oracle to avoid a threat to the entire planet, Oracle included”, inside which questions should be safe.
Example 2: let’s say we want to minimise the chance that the Oracle breaks down due to accidents. We can ask it what is the best course of action to take, given a set of ideas we come up with. In this case we should make sure beforehand that nothing on the list makes the Oracle impossible or too difficult for humans to shut down.
Example 3: let’s say we become practically sure that the Oracle is aligned with us. Then we could ask it to choose the best course of action among a list of strategies devised to make sure it doesn’t become misaligned. In this case the answer benefits both us and the Oracle, because the Oracle should have incentives not to change its own values. I think this one is sketchier and possibly dangerous, because of the premise: the Oracle could obviously pretend to be aligned. But granting the premise, it should be a good question, although I don’t know how useful it is as a submission under this post (maybe it’s too obvious, or too unrealistic given the premise).
The definition of LEV I used in the previous post is: “Longevity Escape Velocity (LEV) is the minimum rate of medical progress such that individual life expectancy is raised by at least one year per year if medical interventions are used”. So it doesn’t lead to an unbounded life expectancy. In fact, with a simplified calculation, in the first post I estimated life expectancy after LEV to be approximately 1000 years. 1000 years is what comes out of the same idea as your hydra example (risk of death held flat at that of a young person), but in reality it should be slightly less, because the calculation leaves out the period when the risk of death is still falling just after hitting LEV. We are not dealing with infinite utilities.
The main measure of impact I gave in the post comes from these three values and some corrections:
1000 QALYs: life expectancy of a person after hitting LEV
36,500,000 deaths/year due to aging
Expected number of years LEV is made closer by (by a given project examined)
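The 1000-QALY figure above can be sketched numerically. A minimal example, assuming (as in the hydra analogy) a constant annual risk of death held flat at roughly that of a young adult; the 0.001/year rate is an illustrative assumption, not a figure from the post:

```python
# Life expectancy under a constant ("hydra-like") annual risk of death.
# With a constant annual death probability p, remaining lifespan follows
# a geometric distribution, whose mean is 1/p.

young_adult_mortality = 0.001  # assumed flat annual risk of death post-LEV

life_expectancy = 1 / young_adult_mortality
print(life_expectancy)  # -> 1000.0 (years)
```

A flat 0.1% yearly risk thus yields an expected ~1000 further years of life, which is why the post treats post-LEV life expectancy as large but finite rather than as an infinite utility.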