The problem is that questions don’t come with little labels saying whether or not they are answerable. Ban all deep philosophy and you don’t get Francis Bacon or Isaac Newton. We can now say that trying to answer questions like “what is the true nature of god” isn’t going to work. We now know that an alchemist can’t turn lead into gold by rubbing lemons on it. However, it was a reasonable thing to try, given the knowledge of the time, and other alchemical experiments produced useful results like phosphorus.
Celebrating the people who dedicated their lives to building the first steam engine, while mocking people who tried to build perpetual motion machines before conservation of energy was understood, is just pure hindsight, and so can’t be used as a lesson for the future.
Go ahead and mock those who aim for perpetual motion in the modern day: people wasting their time thinking about ill-phrased questions, just replacing “God” with “a simulation” or replacing “repenting for the end times” with “handling AI risk”.
Given current evidence, I suspect that this field is a steam engine, not a perpetual motion machine. I suspect that good answers are possible. We might not be skilled enough to reach them, but we know little enough about how much skill is needed that we can’t be confident of failure. At least a few results, like mesa optimisers, look like successes.
Logical positivism asserted that only statements verifiable through direct observation or logical proof should be considered meaningful. As a philosophical position, it’s self-refuting (if it’s true, it’s meaningless). As a rule of thumb about which questions are likely to reward investigation, it works pretty well.
For example, “AI risk” is incredibly vague. “AI” is a large class of possible devices and there are many forms of “risk”. If a problem can’t be clearly stated then logical proof is not a useful approach, and direct observation only works on things that actually exist. So I’d say that “AI risk” is not likely to be a tractable question, although “the effect of algorithmic trading on US agricultural commodities markets” or “the effect of social media ranking algorithms on the 2020 US elections” probably are.
We can now say that trying to answer questions like “what is the true nature of god” isn’t going to work.
I mean, I don’t think we can do that, and I’m not arguing that we can. I just think that the question itself is mistakenly formulated, the same way “How do we handle AI risk?” is a mistaken formulation (see Jau Molstad’s answer to the post, which seems to address this).
All I am claiming is that certain ill-defined questions exist on which no progress can be made, and that they can to some extent be easily spotted, because they would make no sense if deconstructed, or if an outside observer were to judge your progress on them.
Celebrating the people who dedicated their lives to building the first steam engine, while mocking people who tried to build perpetual motion machines before conservation of energy was understood, is just pure hindsight
Ahm, I mean, Epicurus and Thales would have had pretty strong intuitions against this, and conservation of energy has been postulated in physics since Isaac Newton and even before him, when the whole thing wasn’t even called “physics”.
Nor is there a way to “prove” conservation of energy other than purely philosophically, or empirically by saying: “All our formulas make sense if this is a thing, so let’s assume the world works this way, and if there is some part of the world where it doesn’t hold, we’ll get to it when we find it.”
Also, building a perpetual motion machine (or trying to) is not working on an unanswerable problem/question of the sort I refer to.
As in, working on one will presumably lead you to build better and better engines, and/or to see your failure and give up. There is a “failure state”, and there’s no obvious way of getting into “metaphysics” from trying to research perpetual motion.
Indeed, “Can we build a perpetual motion machine?” is a question I see as entirely valid: not worth pursuing, but at worst harm-neutral, and it has proven so over the last 2,000+ years of people trying to answer it.