Sure, System 2 is your CPU and System 1 your GPU. System 1 can make you Ramanujan, and System 2 can make you Yudkowsky. It is System 2 that shuts up and multiplies, that debugs and steers. If our AIs end up having both, I expect us to die because of our failure to interpret their System 2, not their System 1.
I tend to see System 2 as our search process, and System 1 as our world model. System 2 uses System 1 during its searches to perform “counterfactual updates”: predicting how the world would respond to candidate actions, so that the next action can be picked in advance, and so on. I’ve come to think that most of the individual differences in intelligence lie in how well System 1 works.
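To make the “System 2 searches, System 1 predicts” picture concrete, here is a minimal toy sketch: a depth-limited lookahead planner that repeatedly queries a cheap world model to answer “what happens if I do this?” All names and the toy dynamics are invented for illustration, not taken from the comment.

```python
# Illustrative sketch only: "System 2 as search over a System 1 world model".
# The class and function names, and the toy dynamics, are made up.

class ToyWorldModel:
    """Stand-in for System 1: predicts outcomes and evaluates states.
    Here a 'state' is just a number, and we prefer states near 10."""
    def predict(self, state, action):
        return state + action            # fast, cheap prediction

    def value(self, state):
        return -abs(state - 10)          # higher is better

def plan(state, model, actions, depth=2):
    """Stand-in for System 2: depth-limited search that asks the model
    for a 'counterfactual update' on each candidate action, then picks
    the action leading to the best predicted outcome."""
    if depth == 0:
        return model.value(state), None
    best_value, best_action = float("-inf"), None
    for action in actions:
        predicted = model.predict(state, action)        # query System 1
        value, _ = plan(predicted, model, actions, depth - 1)
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action
```

On this picture, a better System 1 (a more accurate `predict` and `value`) improves the agent even if the System 2 search loop stays exactly the same, which is one way of reading the claim that individual differences live mostly in System 1.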
Please elaborate: ‘the best “simplified” way of looking at human intelligence is how well your “System 1” performs (approximately) Bayesian inference’.
What I meant by “the best simplified” is something along the lines of “the best way of distilling it down to a very short description without losing too much important stuff”. Most people (including myself, previously) are overly concerned with concepts such as “working memory” and “processing speed”—very computer-like things.
Anyhow, the best way of defining intelligence in an abstract sense is probably still Legg and Hutter’s “Universal Intelligence”. You’re giving an agent a score by summing how well it does against all possible environments (Turing Machines), weighted by the Solomonoff prior.