In that case, I think your original statement is very misleading (suggesting as it does that OP/CG funded, and actively chose to fund, Mechanize) and you should probably edit it. It doesn’t seem material to the point you were trying to make anyway—it seems enough to argue that Mechanize had used bad arguments in the past, regardless of the purpose for (allegedly) doing so.
I absolutely don’t want it to sound like OpenPhil knowingly funded Mechanize; that’s not correct, and I will edit. But I do assign high probability that OpenPhil funds materially ended up supporting early work on their startup through their salaries. It seems they were researching the theoretical impacts of job automation at Epoch AI, but I couldn’t find anything showing they were directly doing job automation research there. That would look like building a benchmark of human knowledge labor for AI, but I couldn’t find evidence they were definitely doing something like this. This post from their time at Epoch AI discusses job automation: https://epochai.substack.com/p/most-ai-value-will-come-from-broad

I still think OpenPhil is a little on the hook for funding organizations that largely produce benchmarks of underexplored AI capability frontiers (e.g. FrontierMath). Identifying niches in which humans outperform AI and then designing a benchmark is going to accelerate capabilities and push the labs to hillclimb on those evals.