It’s been a long time since I read Superintelligence, but I’m pretty certain it never mentioned infinite loss. And wasn’t the part about having to make a probability very close to zero in the context of discussing very long timescales (e.g., the possibility of surviving for billions of years)? In that context, it’s easy to calculate that unless you drive the per-year extinction probability down to almost zero, you’ll go extinct eventually.
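To make the compounding concrete, here is a quick sketch of that calculation. The per-year risk figures and the billion-year horizon are illustrative assumptions, not numbers from the book:

```python
import math

def survival_probability(p_per_year: float, years: int) -> float:
    """Probability of surviving `years` consecutive years, given an
    independent per-year extinction probability `p_per_year`."""
    return (1 - p_per_year) ** years

horizon = 1_000_000_000  # an illustrative billion-year horizon

# Even a one-in-a-million annual risk compounds to near-certain extinction:
# (1 - 1e-6)^1e9 ≈ e^-1000, effectively zero.
print(survival_probability(1e-6, horizon))

# Only when the annual risk is on the order of 1/horizon do the odds
# stay reasonable: (1 - 1e-9)^1e9 ≈ e^-1 ≈ 0.37.
print(survival_probability(1e-9, horizon))
```

The point being that over a billion-year timescale, "almost zero" has to mean something like one-in-a-billion per year before long-run survival becomes plausible.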
I believe the infinite loss here is referring to extinction.