Well, you don’t have to answer this now. You’ll probably have a better idea once you’ve promoted a few more people.
To be honest, I’m not happy with my response here. There was also a second simultaneous discussion topic about whether CEA was net positive, and even though I tried to simplify this into a single discussion, it seems that I accidentally mixed in part of the other discussion (the original title of this post in draft was EA vs. rationality).
Update: I’ve now edited this response out.
I also need to add some discussion of the worries about movements, but mobile is playing up for me again.
I didn’t systematically review his work, just clicked on random articles to see how much value I could extract. Feel free to link me to any reasonably accessible articles.
Thanks for writing this; I’ve briefly attempted to look at his ideas, but most of his writing is unreadable. The ideas I could parse seem at least somewhat mystical, which makes me skeptical, but it’s useful to know!
I don’t have a rigorous argument against an infinite chain, but here’s my current set of intuitions: Let’s suppose that we have an infinite chain of reasons. Where does the chain come from? Does it pop out of nowhere? Or is there some intuition or finite collection of intuitions that we can posit as an explanation for the chain? While it’s technically possible that the infinite chain could require infinitely many different intuitions to justify it, this seems rather unlikely to me. What then if we accept that there is an intuition, or that there are intuitions, behind the chain? Well, now we ask why these intuitions are reliable. And if we hit an infinite chain again, we can try the same trick, and so on, until we actually find a cycle.
“The point here is that no matter how we measure complexity, it seems likely that philosophy would have a “high computational complexity class” according to that measure.”—I disagree. The task of philosophy is to figure out how to solve the meta-problem, not to actually solve all the individual problems or even the hardest individual problem.
How strongly do you think improving human meta-philosophy would improve computational meta-philosophy?
Perhaps it’d be useful if there were a group that took more of a dialectical approach, such as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well and help people understand the assumptions under which the project they are considering would be valuable.
Yeah, this is better than my example of food & medicine.
Unfortunately, I didn’t see this comment and now I can’t remember, so I just removed it and we’ll never know.
This comment didn’t age well.
“Noting that there is a certain level of verbal confusion does not imply that there is nothing going on except verbal confusion”—I’m not claiming that verbal confusion is all that is going on, but I will admit that I could have been clearer about what I meant. You are correct that Chalmers’s aim was to highlight something about consciousness, and for many people a discussion of zombies can be a useful way of illustrating how reductive the materialist theory of consciousness is. But from a logical standpoint, it’s not really any different from the argument you’d make if you were discussing consciousness directly. So if the zombie argument is easier for you to grasp, great; otherwise you can ignore it and focus on direct discussion of consciousness instead.
I’ve had at least one or two conversations with materialists who claim that the concept of a philosophical zombie is incoherent, although I can’t promise that this is the majority view among them.
Anyway, if both parties agree on the definition of p-zombies, then they won’t fall into the trap that this post is trying to help people avoid: using the same word “zombie” in different ways and mistaking a linguistic dispute for something more substantial. Indeed, when trying to determine whether we are all zombies or none of us are zombies, we end up trying to determine whether consciousness is substantial or reductive—this moves the question away from zombies to consciousness itself.
How does this apply here?
Is your point that we have no reason to label such processes as consciousness? If so, I’d agree with you, and I actually intend to write a post on this soon.
On the contrary, it is reasonable for people to update in response to this argument, such as if they realise they hold views that are inconsistent. For example, if they identify as a materialist but haven’t actually thought through what the materialist view of consciousness would entail, they might discover that it is not something they actually endorse.