I suspect that human minds are vast (more like little worlds of our own than clockwork baubles), and that even a superintelligence, without direct microscopic access, would have trouble predicting our outputs accurately from even quite a few conversations, simply as a matter of sample complexity.
Considering the standard rhetoric about boxed AIs, this might have belonged in my list of heresies: https://www.lesswrong.com/posts/kzqZ5FJLfrpasiWNt/heresies-in-the-shadow-of-the-sequences
There is a large body of non-AI literature that already addresses this, for example Gerd Gigerenzer’s research showing that heuristics and “fast and frugal” decision trees often substantially outperform fine-grained analysis, precisely because of the sample-complexity issue you mention.
Pop frameworks that elaborate on this, and on how it may be applied, include David Snowden’s Cynefin framework, which is geared toward governments and organizations, and of course Nassim Nicholas Taleb’s Incerto.
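To make the “fast and frugal” idea concrete, here is a minimal sketch of such a tree, loosely in the spirit of Gigerenzer’s well-known emergency-room example; the cues, threshold, and ordering below are invented for illustration. Each node asks one question and at least one answer exits immediately with a decision, so there are no weights to estimate and almost nothing to overfit:

```python
# A toy "fast and frugal" decision tree: one cue per node, immediate exits.
# Cues and threshold are illustrative, not from any real study.
def fft_classify(patient):
    # Cue 1: the most diagnostic question comes first; "yes" exits at once.
    if patient["chest_pain"]:
        return "high risk"
    # Cue 2: only consulted if cue 1 did not fire.
    if patient["age"] >= 65:
        return "high risk"
    # Cue 3: the last cue; both answers exit.
    return "high risk" if patient["abnormal_ecg"] else "low risk"

# Three cues, zero fitted parameters: on small samples this rigidity is
# exactly what lets such trees beat fine-grained regression models.
print(fft_classify({"chest_pain": False, "age": 70, "abnormal_ecg": False}))
# -> "high risk"
```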
I seem to recall also that the gist of Dunbar’s Number, and the reason why certain parrots and corvids seem to have larger prefrontal-cortex equivalents than non-monogamous birds, is basically that they need an internal model of their mating partner. (This is very interesting to think about in terms of intimate human relationships: what I’d poetically describe as the “telepathy” of wordlessly communicating, intuiting, and predicting a wide range of each other’s complex and specific desires and actions because you’ve spent enough time together.)
The scary thought to me is that a superintelligence would quite simply not need to model us accurately; it would just need to fine-tune its models in a way not dissimilar from the psychographic models utilized by marketers. Of course, those operate at scale, so the margin of error is much greater but also more ‘acceptable’.
Indeed, dumb algorithms already do this very well: think about how ‘addictive’ people claim their TikTok or Facebook feeds are, fed by rudimentary sensationalist clickbait that ensures eyeballs and clicks. A superintelligence doesn’t need accurate modelling. And all of this happens without individual conversations with us; to my knowledge (or rather, in my experience), most social media algorithms are really bad at taking the information on your profile and using things like sentiment and discourse analysis to decide which content to feed you. They rely instead on engagement signals such as shares, likes, watch time, and other rudimentary metrics. Similarly, the content creators are often casting a wide net and using formulas to produce this content.
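A toy sketch of that kind of engagement-only ranking (the signal names and weights below are made up for illustration; I’m not describing any real platform’s internals):

```python
# Rank feed items by a weighted sum of crude behavioral signals.
# No content understanding, no profile analysis -- engagement only.
def engagement_score(item):
    # Arbitrary illustrative weights over watch time, shares, and likes.
    return (0.5 * item["watch_time_s"] / 60
            + 0.3 * item["shares"]
            + 0.2 * item["likes"])

feed = [
    {"id": "calm-essay", "watch_time_s": 40, "shares": 1, "likes": 3},
    {"id": "outrage-clip", "watch_time_s": 300, "shares": 40, "likes": 90},
]

# Sort purely by predicted engagement: nothing about the user's stated
# interests or the content's meaning ever enters the loop.
feed.sort(key=engagement_score, reverse=True)
print([item["id"] for item in feed])  # -> ['outrage-clip', 'calm-essay']
```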
A superintelligence, I wager, would not need accuracy to be capable of individually targeted psychological tactics that the Stasi operators of Zersetzung could only dream of. Marketers must be drooling at the possibility of marketing campaigns orders of magnitude more effective, enough to make one-to-one sales obsolete.
One can showcase very simple examples of data that are easy to generate (a simple data source) yet very hard to predict.
E.g., there is a 2-state generating hidden Markov model whose optimal predictor is an infinite-state hidden Markov model.
I’ve heard it explained as follows: it’s much harder for the fox to predict where the hare is going than it is for the hare to decide where to go to shake off the fox.
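For a concrete feel, here is a minimal sketch of the example I have in mind; I believe it’s what the computational-mechanics literature calls the “simple nonunifilar source”, though the exact transition structure below is my reconstruction:

```python
import random

# Generation is trivial: two hidden states, fair coin flips.
# From A: emit 0 and stay in A, or emit 1 and move to B (prob 1/2 each).
# From B: emit 1 and stay in B, or emit 1 and move to A (prob 1/2 each).
# Both transitions out of B emit 1 -- the nonunifilar part: seeing a 1
# never tells you which hidden state you landed in.
def generate(n, seed=0):
    rng, state, bits = random.Random(seed), "A", []
    for _ in range(n):
        if state == "A":
            if rng.random() < 0.5:
                bits.append(0)
            else:
                state = "B"
                bits.append(1)
        else:
            bits.append(1)
            if rng.random() < 0.5:
                state = "A"
    return bits

print("sample:", "".join(map(str, generate(30))))

# Prediction is not trivial: the optimal predictor must carry P(state = A)
# forward by Bayes' rule after every observed bit.
def belief_update(p, bit):
    if bit == 0:
        return 1.0                # only state A can emit a 0
    return (1 - p) / (2 - p)      # posterior P(A) after observing a 1

# Each additional 1 in a row pushes the belief to a value it has never
# taken before, so no finite-state machine can track it exactly.
p, beliefs = 1.0, []
for _ in range(8):
    p = belief_update(p, 1)
    beliefs.append(round(p, 4))
print(beliefs)  # [0.0, 0.5, 0.3333, 0.4, 0.375, 0.3846, 0.381, 0.3824]
```

If I’ve set this up right, the successive beliefs are ratios of Fibonacci numbers converging toward (3−√5)/2 without ever repeating, which is exactly the sense in which the optimal predictor needs infinitely many states.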