Agreed. Particularly on polarization being the big risk. Strategizing on how to avoid polarization when public concern hits seems like the highest priority. I don’t have good ideas, even though I think about polarization a lot.
My nomination for a policy is something vague but intuitive like “no AI that can take a whole job,” since that’s also the type of AI that can solve new problems and ultimately take over.
Of course we’ll be close enough by then that someone will be able to make human-level AI with self-directed learning pretty quickly in a jurisdiction that doesn’t outlaw it (or in secret, or in a government program); but laws would reduce proliferation, which seems useful for shifting the odds somewhat.