When the idea of AGI was first discovered “a few generations” ago, the world government of dath ilan took action to orient its entire civilization around solving the alignment problem, including getting 20% of their researchers to do safety research and slowing the development of multiple major technologies. AGI safety research has been ongoing on dath ilan for generations.
This seems wrong. There’s a paper that notes, roughly: “this problem wasn’t caught by NASA. It was an issue of such low priority that, in order to catch it, we would have had to go through all the issues at that level of priority (and all the higher ones).” That reasoning seems to apply here. Try substituting ‘x-risk’ for ‘AGI’: not researching computers (risk: AI?), not researching biotech* (risk: gain of function?), etc. You might think ‘they have good coordination!’ Magically, yes, so they’re not worried about biowarfare. But what about (flesh-eating) antibiotic-resistant bacteria?
Slowing research across the board in order to deal with unknown unknowns might seem unnecessary, but at what point can you actually put the brakes on?