You appear to be correct. I will withdraw my comment.
GPT-4 is expected to have about 10^14 parameters and to be ready in a few years. And we already know that GPT-3 can write code. The following all seem (to me at least) like very reasonable conjectures:
(i) Writing code is one of the tasks at which GPT-4 will have (at least) human level capability.
(ii) Clones of GPT-4 will be produced fairly rapidly after GPT-4, say 1-3 years.
(iii) GPT-4 and its clones will have a significant impact on society. This will show up in the real economy.
(iv) GPT-4 will be enough to shock governments into paying attention. (But as we have seen with climate change, governments can pay attention to an issue for a long time without effectively doing anything about it.)
(v) Someone is going to ask GPT-4 (or a clone) to produce code that generates AGI. (Implicitly, if not explicitly.)
I have absolutely no idea whether GPT-4 will succeed at this endeavor. But if not, GPT-5 should be available a few years later...
(And, of course, this is just one pathway.)
“That’s one lesson you could take away. Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.”
Governments cooperate a lot more than Eliezer seemed to be suggesting they do. One example is the ban on CFCs in response to the ozone hole. There was also significant cooperation between central banks in mitigating certain consequences of the 2008 financial system crash.
However, I would tend to agree that there is virtually zero chance of governments banning dangerous AGI research because:
(i) The technology is incredibly militarily significant; and
(ii) Cheating is very easy.
(Parenthetically, this also has a number of other implications which make limiting AGI to friendly or aligned AGI highly unlikely, even if it is possible to do that in a timely fashion.)
In addition, as computing power increases, the ease of conducting such research increases, so the number of people and situations the ban has to cover grows over time. This means an effective ban would require a degree of surveillance and control that is incompatible with how at least some societies are organized, and beyond the capacity of others.
(The above assumes that governments are focused and competent enough to understand the risks of AGI and react to them in a timely manner. I do not think this is likely.)
Interesting. We are in somewhat the same boat. Fully vaccinated adults with a two year old. I think where we come out is as follows.
(1) The risks to kids from COVID over the short term are clearly lower than for adults. Over the long term, they are presently unknown.
(2) It is highly likely (>90%) that we will be able to vaccinate young children by next year, so any risk reducing measures we take will be temporary. (Also, see (5).)
(3) The risks from outdoor activities and from vaccinated people are very low. Therefore, we are fine with outdoor activities, masked or not, and with socializing with fully vaccinated people.
(4) There are limited gains from indoor activities with unvaccinated people, so we will not bring our daughter indoors with unmasked, unvaccinated people, or unnecessarily indoors with people whose vaccine status is unknown.
(5) COVID prevalence here is dropping, whether for reasons of increased vaccination or otherwise. If, due to increased vaccination, those rates stay down, we can relax these restrictions.
The more interesting question is where else do we see something similar occurring?
For example, income in retirement was historically discussed in terms of expected value. More recently, we've begun to see discussions of retirement focus instead on the probability of running out of money. Are there other areas where people focus on expected outcomes as opposed to the probability of X occurring?
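To make the contrast concrete, here is a minimal Monte Carlo sketch in Python. All of the parameters (starting balance, spending, return distribution) are illustrative assumptions of my own, not recommendations: the point is only that a portfolio can have a comfortable *expected* terminal value while still carrying a non-trivial probability of ruin.

```python
import random

def simulate_retirement(n_trials=10_000, years=30, start=1_000_000,
                        annual_spend=50_000, mean_return=0.05, sd_return=0.12):
    """Compare expected terminal wealth with the probability of ruin.

    All parameter values are illustrative assumptions, not advice.
    """
    terminal_wealth = []
    ruined = 0
    for _ in range(n_trials):
        wealth = start
        for _ in range(years):
            # Apply a random annual return, then withdraw living expenses.
            wealth = wealth * (1 + random.gauss(mean_return, sd_return)) - annual_spend
            if wealth <= 0:
                ruined += 1
                wealth = 0
                break
        terminal_wealth.append(wealth)
    expected = sum(terminal_wealth) / n_trials
    p_ruin = ruined / n_trials
    return expected, p_ruin

expected, p_ruin = simulate_retirement()
print(f"Expected terminal wealth: ${expected:,.0f}")
print(f"Probability of running out of money: {p_ruin:.1%}")
```

With these assumed numbers, the expected terminal wealth is large, yet a meaningful fraction of paths hit zero before year 30. An expected-value framing hides exactly the quantity the probability-of-ruin framing surfaces.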
The bigger problem here is that, as noted in the post, (0) it is always faster to do things in a less secure manner. If you assume:
(1) multiple competitors are trying to build AI (and if this is not your assumption, I would like to hear the basis for it);
(2) at least some of them believe that the first AI created will be in a position of unassailable dominance (this appears to be the belief of at least some, including, but not necessarily limited to, those who believe a hard takeoff is highly likely);
(3) some overlap between the groups described in (1) and (2) (again, if you don't think this will be the case, I would like to hear the basis for it); and
(4) varying levels of concern about the potential damage caused by an unfriendly AI (even if you believe that the average and minimum levels of concern will rise as we get closer to developing AI, variance is likely),
then the first AI to be produced is likely to be highly insecure (i.e. with non-robust friendliness).
“If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency.”
The idea quoted above seems wrong in practice. You don't need to conceptually divide our civilization into areas of competency; you need to see what is actually being done in the area in which you want to outperform: in particular, (i) whether your proposed activity/solution has already been tried or assessed; and (ii) the degree to which existing evidence says it will or won't work.
Also, if civilizational competence is intended to cover something beyond an efficient market, it would make sense to use a different example.