This conclusion is far too strong. To give just one alternative: there is a large space of possibilities in which learning about the planning fallacy makes you less susceptible to it without making you immune.
Besides, if CFAR could reliably reduce susceptibility to the planning fallacy, it would be wasting its time on AI safety: it could make a fortune teaching its methods to the software industry, or to engineers in general.