It seems very puzzling to me that almost no one is working on increasing AI and/or human philosophical competence in these ways, or even publicly expressing the worry that AIs and/or humans collectively might not be competent enough to solve the important philosophical problems that will arise during and after the AI transition. Why is AI’s moral status (and other object-level problems like decision theory for AIs) considered worthwhile to talk about, but this seemingly more serious “meta” problem isn’t?
FWIW, this sort of thing is totally on my radar and I’m aware of at least a few people working on it.
My sense is that it isn’t super leveraged to work on right now, but nonetheless the current allocation on “improving AI conceptual/philosophical competence” is too low.
Interesting. Who are they and what approaches are they taking? Have they said anything publicly about working on this, and if not, why?