For a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of, “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he’s limited here by his unusually low stamina, but I can think of a bunch of medical tests that you haven’t run, are you an idiot or something?” And I was like, “Yes, definitely, please run them, do you need money”.
I’ve always wondered about things in this general area. Higher levels of action that improve the productivity of alignment researchers (well, not just researchers, anyone in the field) seem like a very promising avenue to explore.
For example, I know that for me personally, “dealing with dinner” often takes way longer than I hope, consumes a lot of my time, and makes me less productive. That’s a problem that could easily be solved with money (which I’m working towards). Do alignment researchers also face that problem? If so it seems worth solving.
Continuing that thought, some people find cooking to be relaxing and restorative, but what about things like cleaning, paperwork, and taxes? Most people find those somewhat stressful, right? And reducing stress helps with productivity, right? So maybe some sort of personal assistant à la The 4-Hour Workweek for alignment researchers would make sense.
And for medical stuff, some sort of white glove membership like what Tim Urban describes + resurrecting something like MetaMed to be available as a service for higher-impact people like Eliezer also sounds like it’d make sense.
Or basically anything else that can improve productivity. I was gonna say “at a +ROI” or something, but I feel like it almost always will be. Improved productivity is so valuable, and things like personal assistants are relatively so cheap. It reminds me of something I heard once about rich businesspeople needing private yachts: if the yacht leads to just one more closed deal at the margin then it paid for itself and so is easily worth it. Maybe alignment researchers should be a little more “greedy” in that way.
A different way to improve productivity would be through better pedagogy. Something I always think back to is that in dath ilan “One hour of instruction on a widely-used subject got the same kind of attention that an hour of prime-time TV gets on Earth”. I don’t get the sense that AI safety material is anywhere close to that level. Bringing it to that point would mean that researchers—senior, junior, prospective—would have an easier time going through the material, which would improve their productivity.
I’m not sure how impactful it would be to attract new researchers vs empowering existing ones, but if attracting new researchers is something that would be helpful I suspect that career guidance sorts of things would really yield a lot of new researchers.
Well, I had “smart SWE at Google who is interested in doing alignment research” in mind here. Another angle is recruiting top mathematicians and academics like Terry Tao. I know that’s been discussed before and perhaps pursued lightly, but I don’t get the sense that it’s been pursued heavily. Being able to recruit people like Terry seems incredibly high-impact, though. At the very least it seems worth exploring the playbooks of people in related fields, like executive recruiters, and looking for anything actionable.
Probably more though. If you try to recruit an individual like Terry there’s an X% chance of having a Y% impact. OTOH, if you come across a technique regarding such recruitment more generally, then it’s an X% chance of finding a technique that has a Y% chance of working on Z different people. Multiplying by Z seems kinda big, and so learning how to “do recruitment” seems pretty worthwhile.
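That expected-value intuition can be made concrete with a toy calculation. Every number below is an invented placeholder standing in for the X, Y, and Z above, not an estimate of anything real:

```python
# Toy comparison: expected value of one-off recruitment vs. learning a
# reusable recruitment technique. All numbers are invented placeholders.

p_recruit = 0.05    # X%: chance a single recruitment attempt succeeds
impact = 1.0        # Y: impact (arbitrary units) of one successful recruit
n_people = 50       # Z: people a general technique could be applied to
p_find = 0.05       # chance of discovering a reusable technique at all

# Recruiting one individual directly:
ev_individual = p_recruit * impact

# Finding a technique, then applying it to Z candidates:
ev_technique = p_find * (p_recruit * impact) * n_people

# Even after the extra discount factor p_find, multiplying by Z
# can dominate the one-off attempt.
print(ev_individual, ev_technique)
```

With these made-up numbers the technique route comes out ahead despite the added chance of never finding a technique at all, which is the sense in which “multiplying by Z seems kinda big.”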
A lot of this stuff requires money. Probably a lot of it. But that’s a very tractable problem, I think. And maybe establishing that ~$X would yield ~Y% more progress would inspire donors. Has that sort of thing been done before in other fields?
Or maybe funding is something that is already in abundance? I recall hearing that this is the case and that the limitation is ideas. That never made sense to me though. I see a lot of things like those white glove medical memberships that seem obviously worthwhile. Are all the alignment researchers in NYC already members of The Lanby for $325/month? Do they have someone to clean their apartments? If not and if funding truly is abundant, then I “feel shocked that everyone’s dropping the ball”.
Funding is not truly abundant.

There are people who have an above-zero chance of helping who don’t get upskilling grants or research grants.

There are several AI safety orgs that are for-profit in order to get investment money, and/or to be self-sufficient, because given their particular networks it was easier to get money that way (I don’t know the details of their reasoning).

I would be more efficient if I had some more money and did not need to worry about budgeting in my personal life.

I don’t know to what extent this is because the money doesn’t exist, or because grant evaluation is hard and there are reasons not to give out money too easily.
Cooking is a great example. People eat every day; even small costs (both time and money) are multiplied by 365. Rationalists in the Bay Area are likely to either live together or work together, so distribution could also be trivial: bring your lunch box to work. So if you are bad at research but good at cooking, you could contribute indirectly by preparing tasty and healthy meals for the researchers.

(Possible complications: some people would want vegan meals, or paleo meals, could have food allergies, etc. Still, if you cooked for 80% of them, that could make them more productive.)

Or generally, thinking about things and removing trivial inconveniences. Are people more likely to exercise during a break if you bring them some weights?

Sometimes money alone is not enough, because you still have the principal-agent problem.
> Another angle is recruiting top mathematicians and academics like Terry Tao. I know that’s been discussed before and perhaps pursued lightly, but I don’t get the sense that it’s been pursued heavily.
Yeah, the important thing, if he was approached and refused, would be to know why. Then maybe we can do something about it, and maybe we can’t. But if we approach 10 people, hopefully we will be able to make at least one of them happy somehow.
> Or generally, thinking about things, and removing trivial inconveniences. Are people more likely to exercise during a break, if you bring them some weights?
Ah, great point. That makes a lot of sense. I was thinking about things that are known to be important, like exercise and sleep, but wasn’t really seeing ways to help people with those. Trivial inconveniences, though, seem like a problem people actually have and one that is worth solving. I’d think the first step would be either a) looking at existing research/findings for what these trivial inconveniences are likely to be, or b) user interviews.
> Yeah, the important thing, if he was approached and refused, would be to know why. Then maybe we can do something about it, and maybe we can’t.
Yes, absolutely. It reminds me a little bit of Salesforce. Have a list of leads; talk to them; for the ones that don’t work out, add notes discussing why; over time, go through the notes and look for any learnings or insights. (I’m not actually sure if salespeople do this currently.)
> I “feel shocked that everyone’s dropping the ball”.
Maybe not everyone:

The Productivity Fund (nonlinear.org)

Although this project has been “Coming soon!” for several months now. If you want to help with the non-dropping of this ball, you could check in with them to see if they could use some help.