A few weeks back, under a post from ControlAI about what they could do with $50 million per year, I commented that I could probably make a substantial contribution to alignment theory for less than 1% of that. It got heavily downvoted, which wasn’t surprising, but I found it ironic to think that, if true, it might be the most important comment I ever made here.
Today I was reminded of this upon reading a LinkedIn post by a Stanford undergraduate (Zachary Hsu) who says: “The field [of AI safety] is swamped with billions of undeployed dollars because there isn’t enough talent to absorb it.” (He goes on to list a few reasons for this.)
So now I can repeat my claim that I could do something worthwhile, this time by using less than 0.001% of what’s said to be available… I have started a series of posts here, in which I have begun to sketch how I would approach the biggest problem there is, the alignment of superintelligence. I’d work on those topics all day every day, but existing pressures are such that several weeks passed before I could find time to write one of those posts, and even then it had to be done in a rush when I should have been sleeping.
I just find it remarkable how many good things that could easily happen never get to happen. It’s so pervasive that last month I wrote of a “Swiss-cheese model of deficiencies in the world’s institutions and cultures, that leads to lost opportunities at every level”. It would be interesting to have a general model of the causes.
The field [of AI safety] is swamped with billions of undeployed dollars
This is simply not true, and it’s a harmful myth; there are several hundred million dollars’ worth of high-value projects that could start within a few months if the money were available.
Thanks for speaking up! This is his post. As you can see, he presents a slide by @Ryan Kidd as evidence. I could just say that someone from Anthropic says this is a myth. But I’d like to know more! Does the claim of a funding overhang for AI safety depend on a particular contentious interpretation of certain data?