I agree with the idea of failure being overdetermined.
But another factor might be that those failures aren’t informative because they concern current AI. Current AI is very different from AGI or superintelligence, which makes both its failures and its successes less informative...
Though I know very little about these examples :/
Edit: I misread; Max H wasn’t trying to say that successes are more important than failures, just that failures aren’t informative.
Yeah, but there’s already a bunch of arguments about whether prosaic ML alignment is useful (which people have mostly already made up their minds about), and the OP is interesting because it’s a fairly separate reason to be skeptical of a class of research.