From video dialogues:
How do you know the costs of your irrationality if you’re irrational?
We’re here to talk about rationality, which is the art generated when you want something more than your particular mode of thinking.
Well, if you expect the future to be just like the past, calling that “realism” isn’t going to save you from the fact that you’re guaranteed to be wrong.
...there are specific propositions, right? You can’t just bundle all the propositions together and slay them with one mighty blow that consists of one thing you can do wrong if you believe this bundle of propositions.
Curiosity requires ignorance and the ability to relinquish your ignorance, and I see you attaching a lot of importance to your ignorance here.
This sounds to me more like a mistake you are making in your model of the world than something you could actually do to the world itself.
If you want a precise practical AI, you don’t get there by starting with an imprecise practical AI and going to a precise practical AI, you start with a precise impractical AI and then go to a precise and practical AI.
You can make mistakes even if you think you have a precise theory, but if you don’t even think you have a precise theory you’re completely doomed.