Thinking about my own answer to the question:
If an AI made a factual claim that was known to be false, I would start looking for the bug in the AI. It’s conceivable that we are all deluded about something we think is a known fact, but that is so much less likely than my being deluded about the performance of my AI program that I’m better off just accepting that, if the former is the case, it’s not going to be discovered by this method.
If the claim were about a political matter, I would give it more credence; there’s far more precedent for mass delusion about politics. Suppose the AI claims, say, that communism can work well if implemented correctly. I wouldn’t believe it, but I would at least keep an open mind to the possibility that some part of its reasoning had stumbled onto a useful truth, rather than dismissing the claim out of hand.
You sure have a lot of trust in “known facts”. It wasn’t until after my university education that I found out that the known fact that “people in the Middle Ages thought the world was flat because the Bible says so” was not really true at all. I uncover false “known facts” that I was taught during my formal education every month or so.
“Known facts” are overrated.
Not on the level of the things being discussed in this thread, you don’t!
I mean seriously, look at what’s going on here: apparently rational people are saying they would believe in vampires, talking cows and orbital mind control lasers on the unsupported word of an authority figure. I suppose I shouldn’t be shocked, human nature being what it is, but still.
I’d believe in anything up to orbiting vampire cows, but beyond that I’d be sceptical.
Not “unsupported word”, see the post.
The word of a perfectly rational “authority figure” is strong evidence. See Aumann’s Agreement Theorem.
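The disagreement above comes down to likelihood ratios, and it can be made concrete with a toy Bayesian update. All the numbers below are illustrative assumptions, not anything from the thread: a one-in-a-million prior that the “known fact” is wrong, and a source that asserts true claims 99% of the time but asserts this particular falsehood (say, via a bug) 0.1% of the time.

```python
# Toy sketch: how much a mostly-reliable source's claim shifts the odds
# that a "known fact" is actually false. All numbers are made up for
# illustration.

def posterior_odds(prior_odds, p_claim_if_true, p_claim_if_false):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_claim_if_true / p_claim_if_false)

# Prior odds that the "known fact" is wrong: one in a million.
prior = 1e-6

# Assumed source reliability: asserts the claim with probability 0.99 if it
# is true, and with probability 0.001 if it is false (e.g. due to a bug).
posterior = posterior_odds(prior, p_claim_if_true=0.99, p_claim_if_false=0.001)

print(posterior)  # odds rise by a factor of 990, yet remain around 1 in 1000
```

On these assumed numbers the source’s word really is strong evidence (a 990x update), and yet “look for the bug first” is still the right move, since the posterior odds remain long. Both sides of the thread can be right at once.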