I think it would be an easier challenge to align 100 small ones (since solutions would quite possibly transfer across them).
I think it would be a bigger victory to align the one big one.
I’m not sure from the wording of your question whether I’m supposed to assume success.
I’m relatively a fan of their approach (although I haven’t spent an enormous amount of time thinking about it). I like starting with problems that are concrete enough to really go at, but which are microcosms of things we might eventually want.
I actually kind of think of truthfulness as sitting somewhere on the spectrum between the problem Redwood are working on right now and alignment. Many of the reasons I like truthfulness as a medium-term problem to work on are similar to the reasons I like Redwood’s current work.