To criticize an idea on the grounds that the evidence for that idea isn’t conclusive is insane — that’s a problem with your body of evidence, not the ideas themselves!
What does this sentence even mean? The problem isn’t the idea, it’s that there’s not enough evidence for it… sounds like the problem is with the idea.
Suppose that, in the years before telescopes, I came to you and said that [wild idea X] was true.[1]
You’d be right to wonder why I think that. Now suppose that I offer some convoluted philosophical argument that is hard to follow (perhaps because it’s invalid). You are not convinced.
If you write down a list of arguments for and against the idea, you could put my wacky argument in the "for" column, or leave it out if you think it's too weak to be worth considering. But what I am claiming would be insane is to list "lack of proof" as an argument against.
Lack of proof is an observation about the list of arguments, not about the idea itself. It's a meta-level argument masquerading as an object-level argument.
Let’s say on priors you think [X] is 1% likely, and your posterior is pretty close to that after hearing my argument. If someone asks you why you don’t believe, I claim that the most precise (and correct) response is “my prior is low,” not “the evidence isn’t convincing,” since the weakness of your body of evidence is not itself a reason to disbelieve the hypothesis.
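To put rough numbers on it (a minimal sketch: write A for "I gave you my convoluted argument," and make up a likelihood ratio of about 1.2 for it):

$$
\underbrace{\frac{P(X \mid A)}{P(\neg X \mid A)}}_{\text{posterior odds}}
= \underbrace{\frac{P(A \mid X)}{P(A \mid \neg X)}}_{\text{likelihood ratio}\,\approx\,1.2}
\times \underbrace{\frac{P(X)}{P(\neg X)}}_{\text{prior odds}\,=\,1:99}
\approx \frac{1.2}{99},
$$

so $P(X \mid A) \approx 1.2\%$. The posterior is tiny almost entirely because the prior odds are 1:99, not because hearing my argument pushed you down.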
Does that make sense?
(Admittedly, I think it’s fine to speak casually and not worry about this point in some contexts. But I don’t think BB’s blog is such a context.)
(There are also cases where the “absence of evidence” is evidence of absence. But these are just null results, not a real absence of evidence. It seems fine to criticize an argument for doom that predicted we’d see all AIs the size of Claude being obviously sociopathic.)
Edit warning! In the original version of this comment X = “the planets are other worlds, like ours, and a bunch of them have moons.” My point does not depend on the specific X.
Suppose that, in the years before telescopes, I came to you and said “the planets are other worlds, like ours, and a bunch of them have moons.”
Suppose you were to believe, without evidence, such a theory, as opposed to one of the many equally plausible but wrong theories going around at the time, such as: "other planets will have different kinds of men on them" or "other planets have vegetation and life on them" or "other planets have rocky surfaces and air on them".
And suppose that, subsequently, evidence were discovered proving that you, and you alone, were correct.
Then you will be lauded throughout the world. People will declare you a thought-leader, an influencer, a visionary of the future. Undoubtedly, wealth and fame will attach themselves to you. History books will sing your praises for centuries to come as "the man who knew other planets had moons."
In one small, dark corner of the internet, however, you will encounter a strange group of people. These people have beliefs like "claims should be based on evidence." And those people will use a different word to describe you: lucky.
Sorry, I think you entirely missed my point. It seems my choice of hypothesis was distracting. I’ve edited my original comment to make that more clear. My point does not depend on the truth of the claim.
“my prior is low,” not “the evidence isn’t convincing,”
I still don’t follow.
You wrote an entire book and it didn’t move Bentham’s priors. If that’s not a clear-cut example of “the evidence [in the book] isn’t convincing,” I don’t know what is.
In fact, if someone wrote an entire book (in which I would assume they would naturally collect the best arguments for a position) and I found no convincing evidence in it, I would actively consider that evidence against the position. Because “I haven’t done much research but the evidence looks poor” is a less definitive conclusion than “I have read the foremost expert’s book on the topic and the evidence looks bad.”
“The evidence isn’t convincing” is a fine and true statement. I agree that IABIED did not convince BB that the title thesis is clearly true. (Arguably that wasn’t the point of the book, and it did convince him that the thesis is worryingly plausible and that AI x-risk is worth more attention, but that’s pure speculation on my part and idk.)
My point is that “the evidence isn’t convincing” is (by default) a claim about the evidence, not the hypothesis. It is not a reason to disbelieve.
I agree[1] that sometimes having little evidence or only weak evidence should be an update against. These are cases where the hypothesis predicts that you will have compelling evidence. If the hypothesis were “it is obvious that if anyone builds it, everyone dies” then I think the current lack of consensus and inconclusive evidence would be a strong reason to disbelieve. This is why I picked the example with the stars/planets. It, I claim, is a hypothesis that does not predict you’ll have lots of easy evidence on Old Earth, and in that context the lack of compelling evidence is not relevant to the hypothesis.
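Schematically, writing E for "we see compelling evidence by now" (this is just the odds form of Bayes' rule, no numbers from anywhere):

$$
\frac{P(H \mid \neg E)}{P(\neg H \mid \neg E)} = \frac{P(\neg E \mid H)}{P(\neg E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}.
$$

If H predicts compelling evidence, then $P(\neg E \mid H) \ll P(\neg E \mid \neg H)$ and the null result is a genuine update against H; if H makes no such prediction, the ratio is roughly 1 and the "lack of compelling evidence" barely moves anything.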
I’m not sure if there’s a clearer way to state my point.[2] Sorry for not being easier to understand.
Perhaps relevant: MIRI thinks that it’ll be hard to get consensus on AGI before it comes.
As indicated in my final parenthetical paragraph in my comment above:
(There are also cases where the “absence of evidence” is evidence of absence. But these are just null results, not a real absence of evidence. It seems fine to criticize an argument for doom that predicted we’d see all AIs the size of Claude being obviously sociopathic.)
We could try expressing things in math if you want. Like, what does the update on the book being unconvincing look like in terms of Bayesian probability?
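For instance, here's a minimal sketch in code, with every number invented purely for illustration (these are nobody's actual credences):

```python
# Sketch: how much should "BB read the book and wasn't convinced" move
# a belief in hypothesis H? All numbers below are invented for illustration.

def update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Bayes' rule: posterior P(H | obs) from a prior and two likelihoods."""
    joint_h = p_obs_given_h * prior
    return joint_h / (joint_h + p_obs_given_not_h * (1 - prior))

prior = 0.25  # a made-up prior on H

# If H doesn't strongly predict that one short popular book convinces a skeptic,
# the observation is nearly as likely either way, and the posterior barely moves.
print(update(prior, p_obs_given_h=0.7, p_obs_given_not_h=0.8))  # ~0.23

# If instead the hypothesis is "it is OBVIOUS that if anyone builds it, everyone dies,"
# it does predict the book would be convincing, so the null result is a real update.
print(update(prior, p_obs_given_h=0.1, p_obs_given_not_h=0.8))  # ~0.04
```

The action is all in the likelihoods, i.e. in how strongly the hypothesis predicts that the book would be convincing.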
>Arguably that wasn’t the point of the book
Why did you title the book “If anyone builds it everyone dies” if the point of the book was not to convince people “If anyone builds it everyone dies”? If this really was some obscure philosophical project that has no bearing on the real question, why not give it some obscure title like “On the Electrodynamics of Moving Bodies” to clearly indicate “this isn’t meant to be persuasive or even comprehensible to 99% of human beings”?
IABIED is a 101-level book written for the general public that was deliberately kept nice and short. I kinda think anyone (who is not an expert) who reads IABIED and comes away with a similar level of pessimism as the authors is making an error. If you read any single book on a wild, controversial topic, you should not wind up extremely confident!
My sense is that the point of the book was to convince people that it’s important to take AI x-risk seriously (as BB does). I don’t really think it was intended to get people to think its title thesis is clearly true.
Some things are hard to judge.