Problems I’ve Tried to Legibilize
Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I’ve organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI researchers, funders, company leaders, government policymakers, their advisors (including future AI advisors), and the general public.
Philosophical problems
Probability theory
Decision theory
Beyond astronomical waste (possibility of influencing vastly larger universes beyond our own)
Interaction between bargaining and logical uncertainty
Metaethics
Metaphilosophy: 1, 2
Problems with specific philosophical and alignment ideas
Utilitarianism: 1, 2
Solomonoff induction
“Provable” safety
CEV
Corrigibility
IDA (and many scattered comments)
UDASSA
UDT
Human-AI safety (x- and s-risks arising from the interaction between human nature and AI design)
Value differences/conflicts between humans
“Morality is scary” (human morality is often the result of status games amplifying random aspects of human value, with frightening results)
Positional/zero-sum human values, e.g., status
Distributional shifts as a source of human safety problems
Power corrupts (or reveals) (AI-granted power, e.g., over future space colonies or vast virtual environments, corrupting human values, or perhaps revealing a dismaying true nature)
Intentional and unintentional manipulation of / adversarial attacks on humans by AI
Meta / strategy
AI risks being highly disjunctive, potentially implying increasing marginal returns from time in an AI pause/slowdown (in other words, surprisingly low value from short pauses/slowdowns compared to longer ones)
Risks from post-AGI economics/dynamics, specifically high coordination ability leading to increased economies of scale and concentration of resources/power
Difficulty of winning the AI race while being constrained by x-safety considerations
Likely offense dominance devaluing “defense accelerationism”
Human tendency to neglect risks while trying to do good
The necessity of AI philosophical competence for AI-assisted safety research and for avoiding catastrophic post-AGI philosophical errors
The problem of illegible problems
Having written all this down in one place, I find it hard not to feel some hopelessness about making all of these problems legible to the relevant people, even with a maximum plausible effort. Perhaps one source of hope is that they can be made legible to future AI advisors. As many of these problems are philosophical in nature, this seems to come back to the issue of AI philosophical competence that I’ve often talked about recently, which itself seems largely still illegible and hence neglected.
Perhaps it’s worth concluding with a point from a discussion between @WillPetillo and me under the previous post: a potentially more impactful approach (compared to trying to make illegible problems more legible) is to make key decisionmakers realize that important safety problems illegible to them (and even to their advisors) probably exist, and that it is therefore very risky to base highly consequential decisions (such as those about AI development or deployment) only on the status of legible safety problems.