There is a third component to actually knowing a lot about AI, which is having succeeded at learning about AI, which is to say, having “won” in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, then that success indicates a baseline level of rationality.
Have you heard the anecdote about Kahneman and the planning fallacy? It’s from Thinking Fast and Slow, and deals with him creating a curriculum to teach judgment and decision-making in high school. He puts together a team of experts, they meet for a year, and have a solid outline. They’re talking about estimating uncertain quantities, and he gets the bright idea of having everyone estimate how long it will take them to submit a finished draft to the Ministry of Education. He solicits everyone’s probabilities using one of the approved-by-research methods they’re including in the curriculum, and their guesses are tightly centered around two years (ranging from about 1.5 to 2.5).
Then he decides to employ the outside view, and asks the curriculum expert how long it took similar teams in the past. That expert realizes that, in the past, about 40% of similar teams gave up and never finished; of those who finished, none took less than seven years. (Kahneman tries to rescue them by asking about skills and resources, and it turns out that this team is below average, but not by much.)
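The contrast between the two estimates can be made concrete. This is a minimal sketch, using only the numbers from the anecdote; the way the outside-view figures are combined here is an assumption for illustration, not a method from the book.

```python
# Inside view vs. outside view, with the figures from Kahneman's anecdote.
# All numbers come from the story above; the combination rule is an
# illustrative assumption, not Kahneman's own procedure.

inside_view_years = 2.0        # team's tightly clustered estimate (1.5-2.5 years)

# Outside view: the reference class of similar teams in the past.
p_never_finished = 0.40        # ~40% of similar teams gave up entirely
min_years_if_finished = 7.0    # no finishing team took less than seven years

# A naive outside-view point estimate: condition on finishing at all,
# and treat the historical minimum as an optimistic floor.
outside_view_floor = min_years_if_finished

print(f"Inside view:  ~{inside_view_years:.0f} years")
print(f"Outside view: >= {outside_view_floor:.0f} years, "
      f"with a {p_never_finished:.0%} chance of never finishing")
```

The actual project took eight years, close to the outside-view floor and nowhere near the inside-view estimate.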
We should have quit that day. None of us was willing to invest six more years of work in a project with a 40% chance of failure. Although we must have sensed that persevering was not reasonable, the warning did not provide an immediately compelling reason to quit. After a few minutes of desultory debate, we gathered ourselves together and carried on as if nothing had happened. The book was eventually completed eight(!) years later.
It seems to me that if the person who discovered the planning fallacy is unable to make basic use of his knowledge of the planning fallacy when planning projects, then a general sense that experts know what they’re doing and are able to apply their symbolic manipulation skills to their actual lives is dangerously misplaced. If it is a bad idea to publish things about decision theory in academia (because the costs outweigh the benefits, say), then it will only be bad decision-makers who publish on decision theory!
If we live in a world where the discoverer of the planning fallacy can fall victim to it, we live in a world where teachers of rationality fail to improve anyone’s rationality skills.
This conclusion is way too strong. To just give one way: there’s a big space of possibilities where discovering the planning fallacy in fact makes you less susceptible to the planning fallacy, but not immune.
Actually, if CFAR could reliably reduce susceptibility to the planning fallacy, they would be wasting their time with AI safety—they could be making a fortune teaching their methods to the software industry, or to engineers in general.
Wow, I’ve read the story but I didn’t quite realize the irony of it being a textbook (not a curriculum, a textbook, right?) about judgment and decision making.