The thing I maybe should have commented on was that it seemed to model things as: there are different types of people, and different types of people work on different things.
I do think people are in fact importantly different here. I think there exist unhealthy and inaccurate ways to think about it, but you need to contend with it somehow.
The way I normally think of this is: people have talent coefficients, which determine the rate at which they improve at various skills. You might have a basketball talent coefficient of 0.1, a badminton talent coefficient of 0.5, and a drawing coefficient of 1 (this happens to be roughly true for me personally). So, an hour spent deliberately practicing drawing will result in 10x as much skill gain as an hour practicing basketball.
This is further complicated by the fact that learning is lumpy: the first 20 hours spent learning a thing typically have more low-hanging fruit than hours 21-50. (But you can also jump around between related skill sets, gaining different types of related skills.)
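The toy model above can be sketched numerically. (The diminishing-returns curve and the specific coefficients here are illustrative assumptions for the sketch, not anything the comment commits to.)

```python
import math

def skill_gain(hours: float, talent_coefficient: float) -> float:
    """Toy model: total skill from practice, scaled by a talent coefficient,
    with diminishing returns so early hours count more ('learning is lumpy')."""
    return talent_coefficient * math.sqrt(hours)

# Same 100 hours of deliberate practice, different coefficients:
basketball = skill_gain(100, 0.1)  # coefficient 0.1
drawing = skill_gain(100, 1.0)     # coefficient 1.0
print(drawing / basketball)        # drawing gains 10x as fast in this model

# Lumpiness: the first 20 hours beat hours 21-50, despite being fewer hours.
first_20 = skill_gain(20, 1.0)
hours_21_to_50 = skill_gain(50, 1.0) - skill_gain(20, 1.0)
print(first_20 > hours_21_to_50)
```

The square-root curve is just one convenient shape for "early hours count more"; any concave function would make the same qualitative point.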
Also, the biggest determining factor (related to, but not quite the same as, talent coefficients) is "how much do you enjoy various things?". If you really like programming, you might be more motivated to spend hundreds of hours on it even if your rate of skill gain is low, and that may result in you becoming quite competent.
The problem with doing original research is that feedback loops are often kinda bad, which makes it hard to improve.
This is all to say: different people are going to be differently suited to different things. The exact math of how that shakes out is somewhat complex. If you are excited to put a lot of hours in, it may be worth it even if you don't seem initially great at it. But there are absolutely some people who will struggle so long and hard with something that it just really doesn't make sense to make a career out of it (especially when there are alternative careers worth pursuing).
Obviously different people are better or worse at doing and learning different things, but the implication that one is supposed to make a decision that’s like “work on this, or work on that” seems wrong. Some sort of “make a career out of it” decision is maybe an unfortunate necessity in some ways for legibility and interoperability, but one can do things on the side.
I don't think the kind of work we're talking about here is really possible without something close to "making a career of it" - at least a sustained, serious hobby lasting years.
How do you know that? How would anyone know that without testing it?
My beliefs here are based on hearing from various researchers over the years what timescale good research takes. I’ve specifically heard that it’s hard to evaluate research output for less than 6 months of work, and that 1-2 years is honestly more realistic.
John Wentworth claims, after a fair amount of attempting to train researchers and seeing how various research careers have gone, that people have about 5 years' worth of bad ideas they need to get through before they start producing actually-possibly-good ideas. I've heard secondhand from another leading researcher that a wave of concentrated effort they oversaw from the community didn't produce any actually novel results. My understanding is Eliezer thinks there's basically been no progress on the important problems.
My own epistemic status here is secondhand, and there may be other people who disagree with the above take, but my sense is that there's been a lot of "try various ways of recruiting and training researchers" over the years, and that it's at least nontrivial to get meaningful work done.
How does that imply that one has to "pick a career"? If anything, that sounds like a five-year hobby is better than a two-year "career".
It's hard but not impossible to put 10k hours of deliberate practice into a hobby.
I think the amount of investment into a serious hobby is basically similar to a career change, so I don’t really draw a distinction. It’s enough investment, and has enough of a track-record of burnout, that I think it’s totally worth strategizing about based on your own aptitudes.
(To be clear, I think "try it out for a month and see if it feels good to you" is a fine thing to do; my comments here are mostly targeted at people who are pushing themselves to do it out of consequentialist reasoning/obligation.)
I think we agree that pushing oneself is very fraught. And we agree that one is at least fairly unlikely to push the boundaries of knowledge about AI alignment without "a lot" of effort. (Though maybe I think this a bit less than you? I don't think it's been adequately tested to take brilliant minds from very distinct disciplines and have them think seriously about alignment. How many psychologists, how many top-notch philosophers, how many cognitive scientists, how many animal behaviorists have seriously thought about alignment? Might there be relatively low-hanging fruit from the perspective of those bodies of knowledge?)
What I'm saying here is that career boundaries are things to be minimized, and the referenced post seemed to be career-boundary-maxing. One doesn't know what would happen if one made even a small hobby of AI alignment; maybe it would become fun, interesting, and productive, and grow into a large hobby. And even if the way one is going to contribute is not by solving the technical problem, understanding the technical problem still helps quite a lot with other methods of contributing. So in any case, cutting off that exploration because one is the wrong type of guy is stupid, and advocating for doing that is stupid.