AI doom seems to fit the category “races to the bottom with unintended consequences” (and it isn’t the only existential risk in that category). As such, its desperate urgency is downstream of the metacrisis (closely related to what John Vervaeke calls the meaning crisis). Resolving or mitigating the metacrisis would give much-needed breathing room for studying AI alignment, while exacerbating the metacrisis would seem to increase AI risk further.
I personally happened to fall into studying the metacrisis rather than AI, and my estimate is that the metacrisis is more solvable and has aspects that seem relevant to understanding cognitive agency and intelligence in general. The linkage is such that I believe both problems merit attention and may benefit from cross-pollination.
ETA: As this seems to potentially have more general relevance, I crossposted it to the open thread here. Feel free to reply to whichever seems more on-topic for the direction you’re taking.
What I mean by the metacrisis is the idea that a large fraction of the missteps, unintended consequences, and crises unfolding around the world are downstream of a single nexus of underlying causes. Resolving these problems one by one won’t help us much overall if the underlying causes are not addressed; more will come, some of them as unintended consequences of prior solutions. It’s a crisis of crises in one sense; in another sense, the apparent crises are just symptoms of one larger, hard-to-see, slowly-unfolding crisis.