Leaving this comment just so you know that you are not alone in this assessment. A lot of reasoning on the topic, when stripped down to the core, looks like: “there is a nonzero chance of an extinction event with AGI; any nonzero probability multiplied by infinite loss is infinite loss; the only way to survive is to make the probability exactly zero, either with full alignment (whatever that term is supposed to mean exactly) or by just not building AGI.” That is a very bad argument and essentially Pascal’s wager.
And yes, there are a lot of articles here on “why this isn’t Pascal’s wager” that do not really prove their point unless you already agree with it.
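For concreteness, the argument form being criticized above can be written as a one-line expected-value calculation (this is just a paraphrase of the comment, writing p for the probability of the extinction event and L for the loss, not anyone’s actual model):

$$\mathbb{E}[\text{loss}] = p \cdot L, \qquad L = \infty \;\Rightarrow\; \mathbb{E}[\text{loss}] = \infty \ \text{ for every } p > 0,$$

so under this framing the only way to avoid an infinite expected loss is to force p to be exactly zero.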
It’s worth noting that many of the people involved in AI risk have directly disagreed with this viewpoint, saying that their analysis yields much-larger-than-nonzero probabilities of AGI-related X-risk.
much-larger-than-zero
It’s been a long time since I read Superintelligence, but I’m pretty certain it never mentioned infinite loss. And the part about having to make a probability very close to zero, wasn’t this in the context of discussing very long timescales (e.g., the possibility of surviving for billions of years)? In that context, it’s easy to calculate that unless you drive the per-year extinction probability down to almost zero, you’ll go extinct eventually.
I believe the infinite loss here is referring to extinction.
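As a rough sketch of that long-timescale calculation (the per-year probability below is made up purely for illustration): with an independent per-year extinction probability p, survival over N years is

$$P(\text{survive } N \text{ years}) = (1-p)^N \approx e^{-pN},$$

so even $p = 10^{-4}$ gives a survival probability of about $e^{-100\,000}$ (effectively zero) over a billion years; only a per-year probability driven down to nearly zero leaves long-run survival plausible.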
No. The arguments rest on a relatively small amount of ambiguous evidence, but what evidence there is doesn’t look good.
“If I thought the chance of AGI doom was smaller than the chance of asteroid doom, I would be working on asteroid deflection” is a common sentiment. People aren’t claiming tiny probabilities. They are claiming that it’s a default failure mode: something that will happen (or at least is more likely than not to happen) unless specifically prevented.
All these criticisms can be true, and AGI can still be an existential threat.
I am not sure this dichotomy is a helpful one, but we can read Templarrr as saying that there is a theoretical ‘failing’ which need not be mutually exclusive with the pragmatic ‘usefulness’ of a theory. Both of you can be right, and that would still mean it is worthwhile to think about how to ameliorate or solve the theoretical problems posed, without devaluing (or discontinuing) the work being done in the pragmatic domain.
That was what I was also trying to say, in a very pithy way : )
The problem with “Pascal’s wager” is not that the gain/loss at stake is too big, but that the probability is so tiny that without that huge gain/loss no one would care.
If I say “you need this surgery, or there is a 50% chance you will die this year”, that is not Pascal’s wager, even if you value your life extremely highly. If I say “unless you eat this magical pill, you will die this year, and although the probability of the pill actually being magical is less than 1:1000000000, this is the only life you have, so you’d better buy this pill from me”, that would be Pascal’s wager.
People who believe that AGI is a possible extinction-level event believe the probability of that is… uhm, greater than 10%, to put it mildly. So it is outside Pascal’s wager territory.
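A quick back-of-the-envelope version of that distinction, with made-up dollar figures (say the pill costs \$10 and you value your life at \$10 million):

$$\text{surgery: } 0.5 \times \$10^{7} = \$5{,}000{,}000 \text{ of expected loss averted}, \qquad \text{pill: } 10^{-9} \times \$10^{7} = \$0.01.$$

At a \$10 price the pill only looks worth buying if the value placed on a life is inflated toward infinity, which is exactly the Pascal’s-wager move; at a claimed probability above 10%, no such inflation is needed.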
“any nonzero probability multiplied by infinite loss is infinite loss”
That holds for real numbers, but infinity is not a real number. From infinite numbers, infinitesimal numbers may be derived (for example, as their reciprocals), and once there are infinitesimal numbers the statement is no longer true, for an infinite loss times a nonzero infinitesimal probability can be a finite loss.
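A concrete instance of this, taking the hyperreal numbers as one example of a system that contains both infinite and infinitesimal elements: let the loss be an infinite hyperreal H and the probability its reciprocal,

$$L = H, \qquad p = \frac{1}{H} > 0 \;\Rightarrow\; p \cdot L = 1,$$

a perfectly finite expected loss, so “nonzero probability times infinite loss is infinite loss” stops being a theorem once infinitesimals are allowed.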