The audio is much appreciated; I ended up listening instead of reading.
emanuele ascani
Thanks a lot for writing this.
These disagreements mainly concern the relative power of future AIs, the polarity of takeoff, takeoff speed, and, in general, the shape of future AIs. Do you also have detailed disagreements about the difficulty of alignment? If anything, the fact that the future unfolds differently in your view should impact future alignment efforts (but you also might have other considerations informing your view on alignment).
You partially answer this in the last point, saying: “But, equally, one could view these theses pessimistically.” But what do you personally think? Are you more pessimistic, more optimistic, or equally pessimistic about humanity’s chances of surviving AI progress? And why?
I really like this post. I think it is probably also relevant from an Effective Altruism standpoint (you identify a tractable and neglected approach which might have a big impact). I think you should crosspost it to the EA Forum, and consider whether your other articles on the topic are also suited to being published there. What do you think?
If you read my profile both here and on the EA Forum you’ll find a lot of articles in which I’m trying to evaluate aging research. I’m making this suggestion because I think you are adding useful pieces.
This is utterly deranged and I’m not sure if it was meant as a joke or not, but fuck I enjoyed it, and holy shit that WebMD link is absolutely crazy. Thanks for posting.
In all seriousness: I suspect we should explore such crazy ideas at least intellectually, just because we never know where the mind could turn after having considered them.
This reminds me of the sentiment Eliezer expresses here:
When someone politely presents themselves with a careful argument, does your cultural software tell you that you’re supposed to listen and make a careful response, or make fun of the other person and then laugh about how they’re upset? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e. debate consequences of paths and hew hard to the best one), or is everything you see just people competing to be loud in their identification?
I know this conversation is very old and Holden has matured his outlook on the subject (see Open Philanthropy’s grants to aging research, and Open Philanthropy’s analysis of aging research, although still dismissive of SENS), but I still want to point out what I think were the mistakes he made here.
Holden didn’t seem to grasp how different in scope SENS’s plan is from the kind of research a single brilliant researcher can advance in the traditional way. SENS needs a plethora of different therapies whose development would require an entire NIA all to itself… and even that would cover only the first phases of research, not clinical trials. I don’t get how he could be confused about this. Quoting Holden:
You [Aubrey] state that you have a high-expected-value plan that the academic world can’t recognize the value of because of shortcomings such as “balkanisation” and risk aversion. I believe it may be true that the academic world has such problems to a degree; however, I also believe that there are a lot of extremely talented people in academia and that they often (though not necessarily always) find ways to move forward on promising work.
Also, I’m confused about why Holden put so much weight on Dario Amodei’s opinion over Aubrey’s. Dario is an AI researcher.
[...] And as my summary of our conversation shows, he [Dario] acknowledges that the world of biomedical research may have certain suboptimal incentives, but didn’t seem to think that these issues are leaving specific, visible outstanding research programs on the table the way that your email implies. [...]
Thankfully, the Open Phil Holden obviously doesn’t think this is the case.
Thanks for your service, Mingyuan. 10⁄10.
True. Thanks for the good tip. I might actually implement it now that the weather and temperature are nicer.
Berkeley people have it good. At least they are doing this together. Imagine being a Berkeley person at heart and being in a completely anti-Berkeley environment.
He would probably say that he doesn’t care (he works for others, not for himself) and that alcohol doesn’t affect him, since people have already noted this and those were his answers. But tbh, this whole thing is not that interesting to me, and I would classify it as weak evidence about what he believes or not. It’s mostly just gossip.
Terence Tao even talked about this on his Google+ profile.
Quote from Second Comment: “In his first TED talk in 2005 Aubrey’s message was that we have 90% chance for robust mouse rejuvenation in 10 years if $100 million per year would be invested philanthropically. We’re now in 2021, 16 years from his talk, funding overall is much greater than $100 million per year, although it’s not just philanthropic.
“Although it’s not just philanthropic”.
You can’t say that Aubrey de Grey’s prediction is wrong by invalidating a piece of the antecedent of the implication. Also: he meant $100M per year to SENS. Currently, SENS gets about a twentieth of that.
I’m disappointed by the downvotes and by the answers. I don’t see any problem with this question, and the concept it points at is useful to think about.
Why not S&P500?
Another way to view this is: “coming up with ideas is compulsory if you want to optimize literally everything”. Bonus: when you practice holding off on proposing solutions, the ideas are usually better.
This is an interesting comment, I think you bring up good points.
One reason I didn’t focus much on crowdfunding is that the money that goes into it isn’t really LEAF’s, and crowdfunding is just one of their many focuses. If an EA gives money to LEAF (through the recurring campaign or through a grant, for example), that money will probably not go to a crowdfunding campaign, and will probably not affect much how they decide whom to crowdfund; it would go to their other projects. When you donate to a campaign, you donate to the specific org that benefits from the campaign’s project, not to LEAF. Unlike orgs such as Open Phil, LEAF doesn’t make grants directly; it only organizes campaigns so that people can bring money to a project.
You probably already knew everything in the paragraph above, so: I think your point is correct. Where exactly they direct money by choosing whom to finance matters for ascertaining whether the research that wouldn’t otherwise have happened is actually making an impact (or any impact at all, given the characteristics of this field). A plus from my POV is that they seem internally sympathetic to SENS’s approach (it’s obvious from reading their introductory articles), although they have also financed different approaches (one campaign is for a project involving NMN supplementation led by David Sinclair, and a couple of others are on biomarkers...). But I admit that’s not much, and a more detailed look would be ideal. For now, if you care more about the science than about YouTube/internet advocacy, policy influencing, etc., it is probably best to donate directly to orgs doing specific scientific research. Since I couldn’t evaluate much by looking at crowdfunding alone, I followed the methodology of trying to gauge the ratio of donations to money brought into the field, which I’ve seen used a lot for evaluating advocacy charities within EA.
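The evaluation heuristic above (comparing money donated to an advocacy org against money it moves into the field) can be sketched as a simple leverage calculation. This is a minimal illustration with hypothetical placeholder figures, not LEAF’s actual numbers:

```python
def leverage_ratio(donations_received: float, money_moved_to_field: float) -> float:
    """Dollars brought into the field per dollar donated to the org itself."""
    if donations_received <= 0:
        raise ValueError("donations_received must be positive")
    return money_moved_to_field / donations_received

# Hypothetical example: an org that received $200k in donations and
# crowdfunded $500k into research projects has a leverage of 2.5x.
ratio = leverage_ratio(200_000, 500_000)
print(ratio)  # 2.5
```

A ratio above 1 suggests the org moves more money into research than it consumes, though that alone doesn’t tell you whether the funded research is counterfactually impactful.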
Maybe we’ll be able to ascertain their decision-making regarding crowdfunding better (although probably not a lot better) after the interview, since the first question is about that.
Do you keep up with news of any kind? If so, how? Aren’t you afraid of missing something important that you should act upon (good news, bad news, or not even news but simply information)? Not necessarily politics or general news, of course.
My risk should be between 19% and 82% over the next six months, and that’s if I always stay in the house. To avoid that, I would have to put my life on hold and take a full-time job I dislike. And people, both IRL and online, say I’m overreacting and call me crazy. Long-term consequences of Covid are what worry me most. Idk how to deal with this, tbh. Genuinely asking.
I really, really love this initiative. Reading LW in book form is just better for me. Online I get distracted and read things as procrastination rather than deliberate effort. I’ve read the first two books of the Sequences and HPMOR on Kindle, and the experience isn’t even comparable to reading in a browser.