I don’t think it’s obvious that even if you count those, capacity-building has been strongly net-negative to date, but I do think it’s pretty plausible.
Like, if you were to count the costs as broadly as “all the labs are downstream of capacity-building work,” then you also need to count the benefits as broadly. A broadly known public track record of having been concerned about these problems for a long time, of being motivated by altruism, and of having tried to solve the problem for a long time, plus being one of the few memetic centers in the world that people draw on to figure out what to do about this whole AI situation, is quite valuable, possibly more valuable than the acceleration effects of things like DeepMind, OpenAI and Anthropic.
(That said, my actual take here is that the biggest issue with most capacity-building work is that it actively undermines the things that other capacity-building work has been highly successful at, so that ultimately some capacity-building work is predictably extremely good for the world, and some is predictably extremely bad for the world.)
the biggest issue with most capacity-building work is that it actively undermines the things that other capacity-building work has been highly successful at, so that ultimately some capacity building work is predictably extremely good for the world, and some is predictably extremely bad for the world
Wait, what does this mean? Is there some kind of dichotomy I’m not aware of?
Maybe? I am not saying the dichotomy is common knowledge, but I feel pretty confident predicting which capacity-building work will be quite bad in expectation and which will be quite good. (This doesn’t mean there isn’t variance within those categories, with many orgs or people having sign-flipped impact relative to their reference class, but I am happy to register predictions at the class level with reasonably high confidence.)
I would then like to know which is which. (DM is okay if you feel that would be somewhat controversial; it’s also alright if you want to keep your opinions to yourself.)
Sorry, I am not saying there is a classifier here that is one sentence long. At a high level, the best short classifier I have is “is it largely funneling people into places where the incentives will point towards building more powerful AI systems and/or becoming personally more powerful, or is it putting people into positions where their primary incentive is to help other people make sense of what is going on, with some grounding in the accuracy of their beliefs?” But I didn’t intend to communicate that there is some super-short description of the classifier!
No worries, thanks for elaborating