I thought some about AI Safety Camp for the LTFF. I mostly evaluated the research leads they listed and the resulting teams directly, for the upcoming program (which I think was the virtual one in 2023).
I felt unexcited about almost all the research directions and research leads, and the camp seemed to be aspiring to be more focused on the research lead structure than past camps, which increased the weight I assigned to my evaluation of those research directions. I considered for a while funding just the research lead teams I was excited about, but they were only a quite small fraction, so I recommended against funding the camp.
It did seem to me that the quality of research leads was markedly worse by my lights than in past years, so I didn’t feel comfortable just taking an outside view on the impact of past camps (as the ARB report seems to do). I feel pretty good about the LTFF’s past grants to the camp, but looking at the inputs and plans, my expectations for post-2021 camps were substantially worse than for earlier camps, so my expectation of the camp’s value changed substantially.
Good to have more details on your views here. That’s useful.
Before, the only thing we could personally go on, and share with donors, was the following:
“His guess, he replied, was that he was not currently super interested in most of the projects we found RLs for, and not super interested in the “do not build uncontrollable AI” area.” [or “AI non-safety” stream, as we called it at the time]
That was still better than nothing. And overall, I appreciate the honesty and openness with which you have shared your views over the years.