“The evidence isn’t convincing” is a fine and true statement. I agree that IABIED did not convince BB that the title thesis is clearly true. (Arguably that wasn’t the point of the book, and it did convince him that it was worryingly plausible and that AI x-risk was worth spending more attention on, but that’s pure speculation on my part and idk.)
My point is that “the evidence isn’t convincing” is (by default) a claim about the evidence, not the hypothesis. It is not a reason to disbelieve.
I agree[1] that sometimes having little evidence or only weak evidence should be an update against. Those are cases where the hypothesis predicts that you will have compelling evidence. If the hypothesis were “it is obvious that if anyone builds it, everyone dies” then I think the current lack of consensus and inconclusive evidence would be a strong reason to disbelieve. This is why I picked the example with the stars/planets. That hypothesis, I claim, does not predict you’ll have lots of easy evidence on Old Earth, and in that context the lack of compelling evidence is not evidence against the hypothesis.
I’m not sure if there’s a clearer way to state my point.[2] Sorry for not being easier to understand.
Perhaps relevant: MIRI thinks that it’ll be hard to get consensus on AGI before it comes.
As indicated in the final parenthetical paragraph of my comment above:
(There are also cases where the “absence of evidence” is evidence of absence. But these are just null results, not a real absence of evidence. It seems fine to criticize an argument for doom that predicted we’d see all AIs the size of Claude being obviously sociopathic.)
We could try expressing things in math if you want. Like, what does the update on the book being unconvincing look like in terms of Bayesian probability?
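For concreteness, here’s a minimal sketch of what I mean, with H standing for the title thesis and E for “a careful reader finds the book unconvincing” (my notation, not anything from the book):

```latex
% Bayes' rule for the update on H given E (notation mine):
%   H = "if anyone builds it, everyone dies"
%   E = "a careful reader finds the book's evidence unconvincing"
\[
  P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}
         {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% The size of the update is governed by the likelihood ratio
% P(E | H) / P(E | ~H). If H does not predict that compelling,
% consensus-forming evidence is available today, then P(E | H)
% is roughly equal to P(E | ~H), the ratio is near 1, and
% P(H | E) is roughly P(H): the unconvincing book barely moves H.
% Only a hypothesis like "it is OBVIOUS that everyone dies" makes
% P(E | H) small, in which case E really is evidence against it.
```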
>Arguably that wasn’t the point of the book

Why did you title the book “If anyone builds it everyone dies” if the point of the book was not to convince people “If anyone builds it everyone dies”? If this really was some obscure philosophical project that has no bearing on the real question, why not give it some obscure title like “On the Electrodynamics of Moving Bodies” to clearly indicate “this isn’t meant to be persuasive or even comprehensible to 99% of human beings”?
IABIED is a 101-level book written for the general public that was deliberately kept nice and short. I kinda think anyone (who is not an expert) who reads IABIED and comes away with a similar level of pessimism as the authors is making an error. If you read any single book on a wild, controversial topic, you should not wind up extremely confident!
My sense is that the point of the book was to convince people that it’s important to take AI x-risk seriously (as BB does). I don’t really think it was intended to get people to think its title thesis is clearly true.
Some things are hard to judge.