You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
Modelling humans as Bayesian agents seems wrong.
For humans, I think the problem usually isn’t the number of arguments or the number of angles from which you attack the problem, but whether you have hit on that person’s few significant cruxes. This is especially true because humans are quite far from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn’t exist), usually people just have a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, which is why one viable strategy is to keep throwing arguments until one of them works.
(Also, unlike when convincing Bayesian agents, where you can argue for W->X, X->Y, and Y->Z in any order, with humans you sometimes need to make the arguments in the correct order.)
Suppose you identify a single crux A. Now you need to convince them of A. But convincing them of A requires you to convince them of A.1, A.2, and A.3.
Ok, no problem. You get started trying to convince them of A.1. But then you realize that in order to convince them of A.1, you need to first convince them of A.1.1, A.1.2, and A.1.3.
I think this sort of thing is often the case, and is how large inferential distances are “shaped”.
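A small illustrative sketch (my own numbers, not from the comment above): if each crux splits into b sub-cruxes and you have to recurse d levels deep before reaching shared ground, the number of points you must argue grows geometrically with depth, which is one way to picture why inferential distances get so large.

```python
def cruxes_to_argue(b: int, d: int) -> int:
    """Total sub-cruxes in a full b-ary crux tree of depth d,
    excluding the root claim itself."""
    return sum(b ** level for level in range(1, d + 1))

# A.1..A.3, then A.1.1..A.3.3: 3 + 9 = 12 cruxes two levels down.
print(cruxes_to_argue(b=3, d=2))  # -> 12

# One more level and it is already 3 + 9 + 27 = 39.
print(cruxes_to_argue(b=3, d=3))  # -> 39
```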