IMO it’s unclear what kind of person would be influenced by this. It requires the reader to a) be amenable to arguments based on quantitative probabilistic reasoning, but also b) overlook or be unbothered by the non sequitur at the beginning of the letter. (It’s obviously possible for the appropriate ratio of spending on causes A and B not to match the ratio of the magnitudes of the risks they address.)
I also don’t understand where the numbers come from in this sentence:
In order to believe that AI risk is 8000 times less than military risk, you must believe that an AI catastrophe (killing 1 in 10 people) is less than 0.001% likely.
By a very high standard, all kinds of reasonable advice are non sequiturs. E.g. a CEO might explain to me, “if you hire Alice instead of Bob, you must also believe Alice is better for the company than Bob; you can’t just like her more,” and I might think, “well, that’s clearly a non sequitur: hiring Alice instead of Bob doesn’t imply I believe Alice is better for the company than Bob. Maybe Bob is a psychopath who would improve the company’s fortunes by committing crimes and getting away with them, so I hire Alice instead.”
X doesn’t always imply Y, but in the cases where it doesn’t, there has to be an explanation of why not.
In order for the reader to agree that AI risk is far higher than 1/8000th of the military risk, yet still insist that 1/8000th of the military budget is justified, he would need a big explanation, e.g. that spending 10% more on the military reduces military risk by 10%, while spending 10% more on AI risk reduction somehow only reduces AI risk by 0.1%, because AI risk is far more independent of countermeasures.
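To sketch why (the notation here is mine, not the letter’s): in a roughly optimal allocation, the marginal risk reduction per extra dollar should be about equal across causes. If budget B_mil reduces military risk R_mil with elasticity e_mil (a 10% budget increase cuts the risk by about e_mil × 10%), and likewise B_ai, R_ai, e_ai for AI risk, then equalizing marginal value gives roughly

B_ai / B_mil ≈ (R_ai × e_ai) / (R_mil × e_mil).

So if B_ai / B_mil = 1/8000 but R_ai / R_mil is far above 1/8000, the only way the allocation comes out right is for e_ai to be far smaller than e_mil, which is exactly the 10% vs. 0.1% gap in the example above.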
It’s hard to have such a drastic difference, because one would need to be very certain that AI risk is unsolvable. If one is uncertain about the nature of AI risk, and there exist plausible models in which spending a lot reduces the risk a lot, then those plausible models dominate the expected value of risk reduction.
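As a rough illustration (the probabilities and elasticities below are illustrative assumptions of mine, not figures from the letter): suppose the reader puts probability p on a “tractable” model where extra spending reduces AI risk roughly proportionally (elasticity e_1 ≈ 1, like the military case above) and probability 1 − p on an “intractable” model where it does almost nothing (e_2 ≈ 0). Then the expected elasticity is

p × e_1 + (1 − p) × e_2 ≥ p × e_1.

To get the 100-fold gap from the example above (10% vs. 0.1%), one would need p ≤ 0.01, i.e. roughly 99% confidence that spending on AI risk is essentially useless.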
Thank you for pointing out that sentence; I will add a footnote for it.
If we suppose that the military risk for a powerful country (like the US) is lower than the equivalent of an 8% chance of catastrophe (killing 1 in 10 people) by 2100, then 8000 times less would be a 0.001% chance of catastrophe by 2100.
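Spelled out, the arithmetic behind that sentence is simply

8% × (1/8000) = 0.001%,

i.e. at most a 0.001% chance of such a catastrophe by 2100 if the budget ratio is taken as a proxy for the risk ratio.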
I will also add a footnote for the marginal gains.
Thank you, this is a work in progress, as the version number suggests :)