$4M/yr is what I wildly guess a community spending on the order of $100M/yr should be putting into agent village.
4% of alignment spending on this seems clearly way too much.
-
The main hoped-for benefits are “teach the scientific community new things” and “plausibly going viral repeatedly”.
For the first one, it seems like one more exploration among many others, on par with @janus’s explorations, for instance.
For the second one, as you put it, “more hype” is not what is missing in AI. People see Midjourney getting better over the years, people see ChatGPT getting better over the years, and companies are optimising quite hard for flashy demonstrations.
-
I guess I just dislike this type of “theory of change that relies on too many unwarranted assumptions to be meaningful, but somehow still manages to push capabilities and AI hype, and makes grabs for attention and money” in general.
This quote from the “Late 2027” ideal success illustrates what I mean by “too many unwarranted assumptions”:
The AI Village has also grown to 100+ concurrent agents by now, with three paid human employees and hundreds of cult followers and a million fans. Ironically they are basically doing grassroots advocacy for an AI development pause or slowdown, on the grounds that an uncontrolled or power-centralized intelligence explosion would be bad for both humans and currently-existing AIs such as themselves.
If that’s where the value is in the best case, I would just put the money into “grassroots advocacy for an AI development pause or slowdown” directly.
If the pitch is “let the AIs do the grassroots advocacy by themselves because it’s more efficient”, then I would suggest instead doing it directly and thinking of where AIs could help in an aligned way.
If the pitch is “let’s invest money into AI hype, and then leverage it for grassroots advocacy for an AI development pause or slowdown”, I would not recommend it, because optimising for hype by default depletes many commons. And were I to recommend it, I would suggest doing it more directly, and likely checking first with people who have run successful hype campaigns.
-
I would recommend asking people doing grassroots advocacy how much they think a fun agency demo would help, and more seriously how much they’d be willing to pay (either in $$ or in time).
I am thinking of ControlAI (where I advise), but also PauseAI, or possibly even MIRI now, with their book tour and more public appearances.
-
There’s another thing that I dislike, but is harder to articulate. Two quotes that make it more salient:
“So, just a couple percent.” (to justify the $4M spend)
“I’m not too worried about adding to the hype; it seems like AI companies have plenty of hype already” (to justify why it’s ok to do more AI hype)
This looks to me like how we die: by a thousand cuts, a lack of focus, and a lack of coordination.
From my point of view, there should be a high threshold before the alignment community seriously considers a project that is not fully focused on the core problems: alignment itself (as opposed to evals, “AGI but safe”, or “AGI but for safety”), extinction-risk awareness (like the CAIS statement or AI2027), or pause advocacy (as opposed to job loss, the meta-crisis, etc.).
We should certainly have a couple of meta-projects; that is close to an NGO’s ops budget, something like 10-15% of the budget on coordination tools (LW, regranters, etc.). But in bulk, we should do the obvious thing.
So, would you say the same thing about METR then? Would you say it shouldn’t get as much funding as it does?
I don’t feel strongly about the $4M figure. I do feel like I expect to learn about as much from agent village over the next few years as I’ll learn from METR, and that’s very high praise because METR builds the most important benchmarks for evaluating progress towards AGI imho.
I must admit, I’m biased here and it’s possible that I’ve made a big mistake / gotten distracted from doing the obvious things as you suggest.
Re: the specific thing about AI agents advocating for a pause: I did not say that’s where most of the value was. Just a funny thing that might happen and might be important, among many such possible things.