If many reasonable people knew that humanity had six months until superintelligence as if by fiat (as if the task becomes magically easier each month for anyone, perhaps because the relevant research ideas are now in many, many people’s heads), and this were well understood by nearly everyone, I feel like human coordination issues open up and lots of new possibilities emerge that were foreclosed in the past for lack of a well understood payoff structure.
Think of it like negotiating with reality, on behalf of the human species, to get the best possible outcome.
If some chunk of reality makes humanity an offer we don’t like, it seems reasonable to want to be able to check other parts of reality for other (hopefully better) offers.
Also, if there’s only like one or two “chunks” of reality where such an offer can be found, and both the chunks of reality know that humanity has very few other options, they can give humanity pretty mediocre choices and smirk at us as we begin to realize how little mobility we have.
So if you know that you’re going to get at least one success in the next six months, and you think that a crazy six-month crash development program by a team of fewer than 10 people could be the cause of that outcome, then it sorta makes sense to fund maybe 10k programmers to go off to various isolated cabins in the woods.
Each little team gets some computer hardware and a very filtered internet connection that is monitored by the people whose job it is to negotiate with reality in general.
Organize the teams a bit, so they cannot become a unified bloc, but with enough (monitored) internal communication that teams that make a lot of progress in the first month or two (who want to merge and develop a complementary system after that) can somehow find each other and go faster during months 3 and 4...
If promising results are coming from 20 different cabins after 4 months, and some of the teams are asking for way more electricity and GPUs (or whatever), then the people organizing the larger project can hope to have enough options (with enough variety on enough dimensions) that they could maybe slow some teams down, speed others up, study all the options, and generally try to gain some room to maneuver… to negotiate simultaneously with the 20 incipient godlings, plus the cognitive parents of those godlings, while those entities are all still “babies” with intelligible architectures (because they were created by small human teams operating according to theories intelligible within at least the team).
I would be more hopeful for humanity’s chances in that kind of development/negotiating context than the context that will probably actually happen.
I’m not fond of your phrasing—humanity negotiating with reality—but the idea you sketch has interesting features.
I am not fond of my phrasing here either. If I had more time I’d have written something shorter and better.
It seems worth calling attention to the importance of causes of urgency. The options humanity has are different if the causes of the urgency are things like “ambient research memes are good enough now” versus “finally some company owns enough data centers”.