I’m glad I actually read the article and didn’t just react based on the abstract. But I still disagree strongly.
As a PhD student, I have personal experience with this: working on something you think is wrong is even more of a trap than it seems.
This is because when you’re working on the wrong thing, it’s often because you think it might be the right thing, and want to get some results so that you can check. But one of the defining characteristics of wrong things is that they don’t tend to produce results, and so people often get stuck doing the wrong thing for much longer than they should. Another key issue is selection bias: when someone is doing the wrong thing, it’s usually because it’s a specific wrong thing that they are unusually blind to. The instant you notice that you’re doing something you think is wrong, you should start thinking that maybe it’s always been wrong, and you didn’t notice because this wrong thing is selected for gaps in your expertise.
Someone who wants to work on something they think is wrong might respond with something about exploration vs. exploitation, or multi-armed bandits, or how if nobody ever did things they thought were wrong, we wouldn’t have scientific progress. Sadly for my past self, this is a false view of scientific progress. Progress is overwhelmingly made by experts who have a good understanding of the area and try as hard as they can to work on the right thing, rather than the wrong thing.
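For readers unfamiliar with the exploration-vs-exploitation framing the hypothetical objector invokes, here is a minimal sketch of an epsilon-greedy agent on a multi-armed bandit. All names and payoff numbers are illustrative assumptions, not anything from the thread; the point is only to show what "sometimes try arms you believe are worse" formally means in that literature.

```python
import random

def epsilon_greedy(true_means, epsilon, steps, seed=0):
    """Simulate an epsilon-greedy agent on a multi-armed bandit.

    true_means: expected payoff of each arm (unknown to the agent).
    epsilon: probability of exploring a random arm instead of
             exploiting the best-looking one so far.
    Returns total reward accumulated over `steps` pulls.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)            # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = rng.gauss(true_means[arm], 1.0)   # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

# Hypothetical "research directions" with different expected payoffs.
arms = [0.1, 0.5, 0.9]
never_explores = epsilon_greedy(arms, epsilon=0.0, steps=1000)
sometimes_explores = epsilon_greedy(arms, epsilon=0.1, steps=1000)
```

The bandit model assumes each pull gives immediate, legible feedback; the comment's objection is precisely that research on the wrong thing tends not to produce results at all, so the analogy breaks down where it matters.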
Yes, I know that philosophy has basically no verification mechanisms and is therefore unable to make progress in the same sense. But I think the general lesson is a pretty important one.
This is tangential to the topic of the OP, but (imo) worth responding to:
Yes, I know that philosophy has basically no verification mechanisms and is therefore unable to make progress in the same sense.
Whatever its faults, philosophy excels in figuring out what questions to ask. Very often, once those questions begin to be answered in a decisive way, then the field of endeavor that results is no longer called “philosophy”, but something else. But clarifying the questions is an extremely valuable service!
Funny how most philosophers misunderstand what their job is about. They try answering questions instead of asking or clarifying them, finding a way to ask a question in a way that is answerable by an actual scientist.
Sturgeon’s law applies to philosophy and philosophers no less than it applies to everything else.
The contemporary philosopher whom, I think, I respect most is Daniel Dennett. It is not a coincidence that much of Dennett’s work may indeed be described as “asking or clarifying [questions], finding a way to ask a question in a way that is answerable by an actual scientist”.
There are often ways to reframe a research question that feels wrong into one which is at least open and answerable, hopefully before one runs out of grad school time. In this case it could be something like “What changes in the laws of the universe would make moral realism a useful model of the world, one that an AGI would be interested in adopting?”
Vocational prescriptivism? :)
Philosophy does have falsification mechanisms, and it may be the case that nothing has verification mechanisms.