[LINK]: Interview with Daniel Kahneman
Here is a Q&A with Daniel Kahneman, in which he gives a brief answer to a question about heuristics and AI:
Q. With the launch of Siri and a stated aim to be using the data collected to improve the performance of its AI, should we expect these types of quasi-intelligences to develop the same behavioral foibles that we exhibit, or should we expect something completely different? And if something different, would that something be more likely to reflect the old “rational” assumptions of behavior, or some totally other emergent set of biases and quirks based on its own underlying architecture? My money’s on emergent weirdness, but then, I don’t have a Nobel Prize. – Peter Bennett
A. Emergent weirdness is a good bet. Only deduction is certain. Whenever an inductive short-cut is applied, you can search for cases in which it will fail. It is always useful to ask “What relevant factors are not considered?” and “What irrelevant factors affect the conclusions?” By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.
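Kahneman's point that any inductive shortcut has cases where it fails can be made concrete with a toy sketch (my own illustration, not from the interview): a classifier that uses the shortcut "predict whatever label has been most common so far". The `majority_heuristic` function and the spam/ham data below are entirely hypothetical.

```python
from collections import Counter

def majority_heuristic(training_labels):
    """Return a classifier that always predicts the most frequent label.

    This is an inductive shortcut: it ignores the example entirely,
    relying only on base rates seen during training.
    """
    most_common = Counter(training_labels).most_common(1)[0][0]
    return lambda _example: most_common

# Training data is 90% "spam", so the shortcut is right 90% of the time...
train = ["spam"] * 9 + ["ham"]
classify = majority_heuristic(train)

# ...but it is wrong on *every* "ham" example. The errors are not random
# noise; they are a systematic bias baked in by the shortcut itself.
print(classify("a genuine message"))  # always predicts "spam"
```

Asking "what relevant factors are not considered?" immediately exposes the bias: the message content is a relevant factor this heuristic never looks at.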
What about this:
Ah, that was interesting, thanks!
I am reading Kahneman’s recent Thinking, Fast and Slow (TFaS) right now, and I am thoroughly enjoying it. As much as I enjoyed reading Ariely’s books, The Invisible Gorilla, and a bunch of similar books, TFaS tackles the same questions in a much more thorough, more fundamental way, without ever becoming dry or ‘academic’.
IMHO a must-read for LW readers; I propose adding TFaS to the Sequences :-)