My reading is that Noosphere89 thinks that Lightcone has helped bring in and upskill a number of empirical/​prosaic alignment researchers. In worlds where alignment is relatively easy, this is net positive, as the alignment benefits outweigh the capabilities costs; in worlds where alignment is very hard, we might expect the alignment benefits to be marginal while the capabilities costs remain very real.