I have wavered a bit about whether to post this comment, or maybe make it a DM, or maybe not post at all. I hope this does not feel like some kind of personal attack. But tbh (as someone else pretty young who feels quite adrift right now) I find this post somewhat baffling. It is of course much easier to feel like “you will be okay” when you are a professor at Harvard who also has a well-paid job at one of the companies riding the peak of the AI wave. You probably have more savings right now than I would accumulate with a decade more of “things as normal”, and you’re also attached to organisations that either already have a lot of institutional power or stand to gain much more by leading the development and deployment of a radically transformative technology.
By choosing not to work for AI capabilities labs (if we have the capability to get hired there, which I do not claim to be true for me), people who have relatively little career or financial capital are not only losing out on prestige or fame. They are also losing out on security and power in a terrible job market and a world that seems increasingly both politically and socially dysfunctional. In this position and on this forum, for someone who has instead accepted that bargain and the accompanying risk of harm (whether you think your contribution is net-positive or not) to then tell us that “we will be fine” feels like being told by someone on a hill that we will be fine as a tsunami bears down on our seaside village. Perhaps the tsunami will be stronger than expected and drown everyone on the hill as well. But either way I would not want to be on the beach right now.
P.S. I do however endorse not acting based on panic, nihilism, or despair, and cultivating an attitude towards chance/randomness that allows for unexpected good outcomes as well as unexpected bad ones. Also, I understand why people would decide to work for a lab, given the circumstances surrounding capital, the emerging mythology around the technology being crafted, and the clearly important and irreplaceable role powerful AI systems already play in our information ecosystem. Still, that doesn’t change my analysis regarding the feelings of powerlessness and helplessness.
Thank you for writing this and I do not feel attacked at all. You are right that I am in a position of material comfort right now.
I would say that if your main focus is existential risk, then the analogy would be more like someone who is standing on a two-inch mound of sand on the beach saying that we will be fine. I don’t think there is any “hill” for true existential risk.
If you are talking about impact on the job market, then I agree that while it’s always been the case that 51-year-old tenured professors (or formerly tenured, I just gave up on tenure) are more settled than young students, the level of uncertainty is much higher these days. If that is the risk you are most worried about, I am not sure why you would choose to forgo working in an AI capabilities lab, but I respect that choice.
I did not talk about these other risks in this piece mostly because I felt that they are not what most LessWrong people are worried about, but see also this tweet https://x.com/boazbaraktcs/status/2006768877129302399?s=20
Thank you for the reply and for your sincerity. I think my response as to “why would you not work at a capabilities lab” is something like “I worry about both the pragmatic and the existential risks quite a lot”, but that is more of a personal thought.