Many people seem unsure about the size of the effects of political advocacy even in the short run. So this seems a little like saying: here's an NP-complete problem, and for good measure, here's another problem from an even harder complexity class. It makes you want to throw up your hands and run.
My current guess is that AI policy is not a good idea because the ground is shifting underneath us as we walk—by the time a self-sufficient machine civilization got even close, human politics (if we were still human) would already be very different.
Edit: So when we think about the US government right before AI, it's like asking ancient Rome or Han China to set policy for nuclear weapons. By then there might well not be a US government.