Technical research trajectory, mostly; I see paths through current technical alignment research that might be able to pull a rabbit out of a hat, camp A style. There's also some chance of a slowdown, but most of my success probability comes from possible futures where current technical research hunches pan out and tell us the important attributes of a learning system that let us be sure that running it results in mostly-good outcomes for most minds' preferences, in some CEV-ish sense. Mostly this depends on wizard power, not command power.