...you can’t conjure the right definition out of thin air if your knowledge is not adequate.
You can’t get to the definition of fire if you don’t know about atoms and molecules; you’re better off saying “that orangey-bright thing”. And you do have to be able to talk about that orangey-bright stuff, even if you can’t say exactly what it is, to investigate fire. But these days I would say that all reasoning on that level is something that can’t be trusted—rather, it’s something you do on the way to knowing better; you don’t trust it, you don’t put your weight down on it, you don’t draw firm conclusions from it, no matter how inescapable the informal reasoning seems.
I suppose this statement is qualified later on in the sequences? Otherwise, wouldn’t this contradict what SI is doing with respect to risks associated with artificial general intelligence?