Engineer at CoinList.co. Donor to LW 2.0.
ESRogs
Hmm, maybe it’s worth distinguishing two things that “mental states” might mean:
1. intermediate states in the process of executing some cognitive algorithm, which have some data associated with them
2. phenomenological states of conscious experience
I guess you could believe that a p-zombie could have #1, but not #2.
Consciousness/subjective experience describes something that is fundamentally non-material.
More non-material than “love” or “three”?
It makes sense to me to think of “three” as being “real” in some sense independently from the existence of any collection of three physical objects, and in that sense having a non-material existence. (And maybe you could say the same thing for abstract concepts like “love”.)
And also, three-ness is a pattern that collections of physical things might correspond to.
Do you think of consciousness as being non-material in a similar way? (Where the concept is not fundamentally a material thing, but you can identify it with collections of particles.)
If you just assume that there’s no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.
How is this implausible at all? All kinds of totally real phenomena are emergent. There’s no primitive for temperature, yet it emerges out of the motions of many particles. There’s no primitive for wheel, but round things that roll still exist.
Maybe I’ve misunderstood your point though?
This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter often begins to seem a matter of terminology; the substantive questions have already been settled.
Is this a well-known phenomenon? I think I’ve observed this dynamic before and found it very frustrating. It seems like philosophers keep executing the following procedure:
1. Take a sensible, but perhaps vague, everyday concept (e.g. consciousness, or free will), and give it a precise philosophical definition, but bake some dubious, anti-reductionist assumptions into the definition.
2. Discuss the concept in ways that conflate the everyday concept and the precise philosophical one. (Failing to make clear that the philosophical concept may or may not be the best formalization of the folk concept.)
3. Realize that the anti-reductionist assumptions were false.
4. Claim that the everyday concept is an illusion.
5. Generate confusion (along with full employment for philosophers?).
If you’d just said that the precisely defined philosophical concept was a provisional formalization of the everyday concept in the first place, then you wouldn’t have to claim that the everyday concept was an illusion once you realized that your formalization was wrong!
No one ever thought that phenomenal zombies lacked introspective access to their own mental states
I’m surprised by this. I thought p-zombies were thought not to have mental states.
I thought the idea was that they replicated human input-output behavior while having “no one home”. Which sounds to me like not having mental states.
If they actually have mental states, then what separates them from the rest of us?
This may be a pedantic comment, but I’m a bit confused by how your comment starts:
I’ve done over 200 hours of research on this topic and have read basically all the sources the article cites. That said, I don’t agree with all of the claims.
The “That said, …” part seems to imply that what follows is surprising. As though the reader expects you to agree with all the claims. But isn’t the default presumption that, if you’ve done a whole bunch of research into some controversial question, the evidence is mixed?
In other words, when I hear, “I’ve done over 200 hours of research … and have read … all the sources”, I think, “Of course you don’t agree with all the claims!” And it kind of throws me off that you seem to expect your readers to think that you would agree with all the claims.
Is the presumption that someone would only spend a whole bunch of hours researching these claims if they thought they were highly likely to be true? Or that only an uncritical, conspiracy-theory true believer would put so much time into looking into it?
I used SPX Dec ’22, 2700/3000 (S&P was closer to those prices when I entered the position). And smart routing, I think. Whatever the default is. I didn’t manually choose an exchange.
I’ve been able to get closer to 0.6% on IB. I’ve done that by entering the order at a favorable price and then manually adjusting it by a small amount once a day until it gets filled. There’s probably a better way to do it, but that’s what’s worked for me.
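To make the arithmetic concrete (assuming the 0.6% refers to the implied annualized borrowing rate on the box), here’s a rough sketch. The fill price and time to expiry below are illustrative placeholders, not an actual fill:

```python
# Rough sketch: implied annualized rate on a short SPX box spread.
# The fill price and time to expiry below are illustrative placeholders.

def box_implied_rate(strike_low: float, strike_high: float,
                     fill_price: float, years_to_expiry: float) -> float:
    """Selling the box locks in a payoff of (strike_high - strike_low) at expiry,
    so the fill price implies the annualized borrowing rate returned here."""
    payoff = strike_high - strike_low
    return (payoff / fill_price) ** (1 / years_to_expiry) - 1

# e.g. a 2700/3000 box sold at ~296.4 with ~2 years to expiry
print(f"{box_implied_rate(2700, 3000, 296.4, 2.0):.2%}")  # -> 0.61%
```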
That makes a lot of sense to me. Good points!
It seems to me that there has been enough unanswered criticism of the implications of coherence theorems for making predictions about AGI that it would be quite misleading to include this post in the 2019 review.
If the post is the best articulation of a line of reasoning that has been influential in people’s thinking about alignment, then even if there are strong arguments against it, I don’t see why that means the post is not significant, at least from a historical perspective.
By analogy, I think Searle’s Chinese Room argument is wrong and misleading, but I wouldn’t argue that it shouldn’t be included in a list of important works on philosophy of mind.
Would you (assuming you disagreed with it)? If not, what’s the difference here?
(Put another way, I wouldn’t think of the review as a collection of “correct” posts, but rather as a collection of posts that were important contributions to our thinking. To me this certainly qualifies as that.)
On the review: I don’t think this post should be in the Alignment section of the review, without a significant rewrite / addition clarifying why exactly coherence arguments are useful or important for AI alignment.
Assuming that one accepts the arguments against coherence arguments being important for alignment (as I tentatively do), I don’t see why that means this shouldn’t be included in the Alignment section.
The motivation for this post was its relevance to alignment. People think about it in the context of alignment. If subsequent arguments indicate that it’s misguided, I don’t see why that means it shouldn’t be considered (from a historical perspective) to have been in the alignment stream of work (along with the arguments against it).
(Though, I suppose if there’s another category that seems like a more exact match, that seems like a fine reason to put it in that section rather than the Alignment section.)
Does that make sense? Is your concern that people will see this in the Alignment section, and not see the arguments against the connection, and continue to be misled?
The workflow I’ve imagined is something like:
1. human specifies function in English
2. AI generates several candidate code functions
3. AI generates test cases for its candidate functions, and computes their results
4. AI formally analyzes its candidate functions and looks for simple interesting guarantees it can make about their behavior
5. AI displays its candidate functions to the user, along with a summary of the test results and any guarantees about the input-output behavior, and the user selects the one they want (which they can also edit, as necessary)
In this version, you go straight from English to code, which I think might be easier than going from English to a formal specification, because we have lots of examples of code with comments. (And I’ve seen demos of GPT-3 doing it for simple functions.)
I think some (actually useful) version of the above is probably within reach today, or in the very near future.
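To make the shape of that loop concrete, here’s a minimal sketch in Python. Every AI-facing step is a placeholder stub (none of these functions correspond to a real model or tool; they’re just there to show the structure):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Minimal sketch of the workflow above. Every "AI" step is a placeholder stub;
# the point is only to show the English -> candidates -> tests -> guarantees -> user loop.

@dataclass
class Candidate:
    source: str                                        # generated function source code
    test_results: Dict[str, str] = field(default_factory=dict)
    guarantees: List[str] = field(default_factory=list)

def generate_candidates(spec: str, n: int) -> List[str]:
    """Placeholder for the code-generation model (spec is the English description)."""
    return [f"# candidate {i} for: {spec}\ndef f(x):\n    return x" for i in range(n)]

def generate_and_run_tests(source: str) -> Dict[str, str]:
    """Placeholder: the AI proposes test inputs and records the candidate's outputs."""
    return {"f(0)": "0", "f('abc')": "'abc'"}

def find_guarantees(source: str) -> List[str]:
    """Placeholder for a formal-analysis pass that looks for simple guarantees."""
    return ["output equals input for all inputs (illustrative claim only)"]

def assist(spec: str, n: int = 3) -> List[Candidate]:
    """Produce candidates for the user to inspect, select from, and edit."""
    return [
        Candidate(src, generate_and_run_tests(src), find_guarantees(src))
        for src in generate_candidates(spec, n)
    ]

if __name__ == "__main__":
    for c in assist("return the input unchanged"):
        print(c.source, c.test_results, c.guarantees, sep="\n", end="\n\n")
```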
Mostly it just seems significant in the grand scheme of things. Our mathematics is going to become formally verified.
In terms of actual consequences, it’s maybe not so important on its own. But putting a couple of pieces together (this, Dan Selsam’s work, GPT), it seems like we’re going to get much better AI-driven automated theorem proving, formal verification, code generation, etc. relatively soon.
I’d expect these things to start meaningfully changing how we do programming sometime in the next decade.
One of the most important things going on right now that people aren’t paying attention to: Kevin Buzzard is (with others) formalizing the entire undergraduate mathematics curriculum in Lean. (So that all the proofs will be formally verified.)
See one of his talks here:
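For a sense of what a machine-checked proof looks like, here’s a toy example in Lean 3 syntax. It’s purely illustrative (not taken from Buzzard’s project); the point is that Lean only accepts the file if the proof term really does establish the statement:

```lean
-- Toy example: commutativity of addition on the natural numbers.
-- Lean checks that the term `nat.add_comm a b` really proves `a + b = b + a`.
theorem my_add_comm (a b : nat) : a + b = b + a :=
nat.add_comm a b
```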
FYI it looks like the footnote links are broken. (Linking to “about:blank...”)
I’m not sure whether it’s the standard view in physics, but Sean Carroll has suggested that we should think of locality in space as deriving from entanglement. (With space itself as basically an emergent phenomenon.) And I believe he considers this a driving principle in his quantum gravity work.
Based on what you’ve said, Rt never goes below one
You’re saying nostalgebraist says Rt never goes below 1?
I interpreted “R is always ~1 with noise/oscillations” to mean that it could go below 1 temporarily. And that seems consistent with the current London data. No?
So you’re saying that you think that a more infectious virus will not increase infections by as high a percentage of otherwise expected infections under conditions with more precautions, versus conditions with less precautions? What’s the physical mechanism there?
Wouldn’t “the fractal nature of risk taking” cause this? If some people are taking lots of risk, but they comply with actually strict lockdowns, then those lockdowns would work better than might otherwise be expected. No?
Got it, thanks for the clarification.