I don’t think it is. Your definition appears to be of a purely logical concept, something which lies in the same ‘plane of existence’ as mathematics. While this is certainly objective, and it certainly is relevant to morality, I would not call it the referent to which my statement refers. Consider a universe in which the experiences of all conscious beings were inverted; the same claims would be true about morality as you describe it, but I would no longer consider the same actions in that universe to be morally right and wrong! Yes, this assumes that conscious experience can be separated from the physical and logical structure of the beings experiencing it, which it quite likely (in my opinion) can’t, but I still think it makes sense to imagine that it were. Admittedly I only skimmed over your argument, but I think I read it before and had the same thought.
Despite my intuitive approach to logical thinking being somewhat explosion-proof, I don’t think I can evaluate the counterlogical “the experiences of all conscious beings were inverted” in a way that is meaningful here; in my intuitive representation it seems to be the case that the variable “positive/negative valence of experience of all conscious beings” is causally efficacious, so inverting it would have the effect of making those beings avoid the negative valences. My primary candidate intuitive sketch for what this variable boils down to is “something information-theoretic, possibly literally just any increase in entropy that was trying to be controlled away”. The logical concept I was describing contains all possible minds, and so should depend on the structure of those minds in their origin universes in order to make sense; my claim that it is objective amounts to believing that you can likely generalize across all minds in all universes with compatible basic properties[1] and get something that makes sense. I agree with you that there’s likely an underlying basic valence fact, but I think that that valence fact is causally entangled, and I also believe that it only “matters morally” due to the way it affects minds in the “junior rooms” in the “interdimensional council of cosmopolitanisms”.
(@G Wood see this subthread as answer to your question)
(eg, universes without important conservation laws might be too alien for the same moral properties to apply, or something; generally, there might be a class of sufficiently-similar physics and a broader class of too-different physics, where the sufficiently-similar physics produces minds that, if they “visit the interdimensional council of cosmopolitanisms”, they find themselves unable to translate to and from the views of minds in universes with no conservation laws or halting oracles or something.)
“I don’t think I can evaluate the counterlogical “the experiences of all conscious beings were inverted” in a way that is meaningful here; in my intuitive representation it seems to be the case that the variable “positive/negative valence of experience of all conscious beings” is causally efficacious,”
My wording was confusing, so I should clarify that I don’t think it’s counterlogical; I just think it’s impossible in the same way that violating the laws of physics might be impossible. You might argue that logic dictates that universes with certain laws of physics predominate in the platonic world, but I still think it’s coherent to imagine there being some in which the laws are different; similarly, I think it’s coherent to imagine a brain which, to an outside observer, is identical to one experiencing pleasure, but which experiences pain. In this physical universe, and most like it, I don’t expect such brains to exist.
“I agree with you that there’s likely an underlying basic valence fact, but I think that that valence fact is causally entangled, and I also believe that it only “matters morally” due to the way it affects minds in the “junior rooms” in the “interdimensional council of cosmopolitanisms”.” Can you elaborate on this?
so, like, background: let’s say that the “interdimensional council of cosmopolitanisms” is the space of minds that have cosmopolitan inclinations. I expect this to be a natural group to “flood fill”, because imagining one mind makes you think through what it imagines, which gives you a transitive effect: if you weren’t going to imagine a world you consider to be a hellworld, but a mind you think is in a similar-ish universe to yours does think it’s important to imagine that hellworld, then, as long as your approach to mapping mindspace is sufficiently efficient, you’ll notice that that mind would consider the hellworld, and you’ll think through what goes on in it. That’s a necessary premise, because otherwise you don’t get enough coverage of mindspace. You start from “minds that have cosmopolitan inclinations and that you find natural to imagine”; call that your IDCC entrypoint. That’s already a filter, and it needs to end up being sufficiently inclusive for this idea to work; then it needs to do a second, transitive filter on the remaining minds, so as to pick a moral coalition that actually covers the space and identifies the moral properties on which there is consensus.
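The “flood fill” above is essentially a transitive closure over which minds imagine which other minds. As a toy sketch only (every name here is made up for illustration, not part of the actual proposal):

```python
# Transitive "flood fill" from an entrypoint set of minds: include every
# mind that some already-included mind would imagine, repeatedly.
from collections import deque

def idcc_flood_fill(entrypoint, imagines):
    """entrypoint: iterable of seed minds; imagines: dict mind -> minds it imagines."""
    included = set(entrypoint)
    frontier = deque(entrypoint)
    while frontier:
        mind = frontier.popleft()
        for other in imagines.get(mind, ()):
            if other not in included:
                included.add(other)
                frontier.append(other)
    return included

# Illustrative: I'd never imagine the hellworld directly, but a similar-ish
# mind I do imagine would, so it enters the coalition transitively.
imagines = {
    "me": ["similar_mind"],
    "similar_mind": ["hellworld_mind"],
}
coalition = idcc_flood_fill(["me"], imagines)
```

The point of the sketch is just that coverage depends on the entrypoint plus the transitive step, exactly as described: a narrow seed set can still reach minds no seed would imagine on its own.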
okay, so then there are different sorts of minds in the IDCC. I claim that among those minds are very proto-mind-ish things, like bacteria, or individual neurons, or whatever you think the threshold is; even if your seed minds don’t import them, as long as your IDCC entrypoint includes me, then because my process of “visiting the IDCC” involves thinking through my individual neurons and individual proteins as having mind-ness that aggregates up into my full mind, I end up importing individual neurons and proteins as being things I consider to be intelligent and worthy of having moral behavior with each other, for the reason that doing so seems to be what carries valence for my aggregate mind.
And so when I consider how both the me and the components of me get those negative valence experiences, and I think through the causal path to achieving them, they seem to be fundamentally causally entangled with physics in some way; that is, the negative valence is not merely because, but is made of, the physical state of my neuron being in some way informationally degraded, such that the neuron and the brain it’s in both operate worse until the physical issue is resolved. The “neurons room” of the IDCC, where neurons are considered to be individual minds, has larger minds like myself “enter the room” and ponder the neurons and bacteria and other single cells inside that room, and these larger minds find a structure in the neurons where their negative valences reliably relate to an information theoretic property.
So to invert the valence, my sense is you’d need to invert that property.
But inverting that property seems to break the mind; if the mind isn’t broken, the property is not inverted, because the marginal brokenness is the marginal negative valence, and so inverting the mind so that positive things are negative requires those positive things to be made of noise or something like that; it requires those positive things to be made of brokenness at some relevant scale.
The only way I see to achieve “a mind appears to be me having a good time, but is actually having a bad time” is if you can make a mind which is made of brokenness but is just barely functioning, and that mind is coordinating to become me at a slightly larger scale, without leaking the bad-time-at-the-small-scales into good-time-at-larger-scales. so the mind having a good time is still objectively real, in the same way a wave is objectively real whether it’s carried on water or on a computer running a fluid sim. The wave really does move between the coordinates of the system through the locality of interaction, even if those coordinates are folded up into a ram chip.
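The wave analogy can be made concrete with a toy simulation (a minimal sketch with made-up grid sizes): the same purely local update rule carries the pulse whether the cells are water parcels or entries in a list in RAM.

```python
# Toy 1D wave: a pulse propagating through purely local interactions.
# The wave is the same pattern regardless of substrate.
import math

N = 200
u = [math.exp(-((i - 50) ** 2) / 20.0) for i in range(N)]  # pulse centred at cell 50
u_prev = list(u)  # zero initial velocity
c2 = 0.25  # (c*dt/dx)^2; keep < 1 for stability

for _ in range(60):
    u_next = [0.0] * N
    for i in range(1, N - 1):
        # each cell updates only from its immediate neighbours
        u_next[i] = 2 * u[i] - u_prev[i] + c2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next

# The pulse has split and travelled ~30 cells each way: it "really moved",
# even though nothing exists here but numbers updated by a local rule.
```

The wave really does move between the coordinates of the system, and nothing about that motion depends on whether those coordinates are folded up into a chip.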
I think I could potentially see myself as agreeing with much of this (though I’d have to think about it much more), but I think I’ve identified a point of divergence:
“And so when I consider how both the me and the components of me get those negative valence experiences, and I think through the causal path to achieving them, they seem to be fundamentally causally entangled with physics in some way; ”
(Horosphere agrees)
“that is, the negative valence is not merely because, but is made of”
(Horosphere possibly disagrees)
“, the physical state of my neuron being in some way informationally degraded, such that the neuron and the brain it’s in both operate worse until the physical issue is resolved.” I would say that there are two possibilities. Either consciousness is a phenomenon which is attached to information processing, or it is information processing. It seems you believe the latter, in which case I’m not sure what to make of it. I don’t think it’s impossible, although I am not sure (have no idea) how to think about it. I would assume that the information being processed would have to be incredibly simple, in which case this would be what pleasure, or pain, really was, and morality would consist of ‘working out how it is distributed’. This would, in my opinion, involve a logical decision theory and might lead to Acausal cooperation and Acausal normalcy. However I would not say that’s exactly what you’ve described, partly because your council still excludes a lot of minds.
I agree that you would need to invert properties in a way which would, within the same physical universe, cause the beings to behave differently:
“The only way I see to achieve “a mind appears to be me having a good time, but is actually having a bad time” is if you can make a mind which is made of brokenness but is just barely functioning, and that mind is coordinating to become me at a slightly larger scale, without leaking the bad-time-at-the-small-scales into good-time-at-larger-scales. so the mind having a good time is still objectively real, in the same way a wave is objectively real whether it’s carried on water or on a computer running a fluid sim. The wave really does move between the coordinates of the system through the locality of interaction, even if those coordinates are folded up into a ram chip.” Upon reflection, I think I agree with this paragraph. I don’t understand the ‘leaking process’ fully, though. Would you consider the mind you describe there to be having a good time overall?
I believe that the hard problem of consciousness boils down to “why is there something rather than nothing, from my perspective, right now as I write this or think this?” and that the “okay, but why are things good and bad?” portion is going to turn out to be an unprivileged additional layer imposed by easy-problem-consciousness stuff. I do believe that easy problem stuff is information processing, but I believe it in the sense that there are informational elements—the fundamental building blocks of the universe—and those elements’ informational state is exactly their structural state; and the hard problem of consciousness resides in an unresolvable question of “why should any building block exist locally at all?”. Or in other words, localitypilled something-rather-than-nothing as being the same question as “why does my perspective exist”.
And so I don’t really think the hard problem is terribly relevant. I’m not at all saying it’s easy or doesn’t exist, and I do think people who say that are missing something. But I don’t believe p-zombies can exist in a real universe, because “realness” being missing is the thing that makes something a p-zombie; I think that we are beyond the reach of god, but also have this weird thing where we actually exist. A p-zombie would say the same thing; the math that defines its universe also fully specifies that it would be confused by existing, but since (by definition) it exists in the math sense but not in the actuality sense, it never gets run. In other words, I’m a structural realist who also believes there’s something underneath the structures, but that it’s beyond our reach to know what it is, and that we are doomed to always wonder “why” there is something rather than nothing.
the mind I describe there having a good time overall: I dunno, you could make the host mind pretty huge, and then probably not. It depends on the ratio of how much stuff is happening in the host vs happening in the guest.
The isolation I was talking about is the same kind that happens with virtualization on computers. An example of leaking would be if the external sound driver has buffer underruns and these cause buffer underruns in the guest (not 100% sure this can happen, but I think so), and similar such things. Or even: if the host has faulty RAM, the guest will too. Those would be leaks. If those aren’t happening, then, if the guest is running smoothly as far as it can tell, but the host is actually swapping like mad and the CPU is overwhelmed and RAM is full and the hard disk is taking a long time to do anything, then, as long as the guest’s clock is not realtime, it could in principle be unable to tell anything is wrong. That’d be the isolation at hand.
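As a toy sketch of that isolation (purely illustrative; not a model of any real hypervisor): a guest that measures time only by its own virtual ticks cannot distinguish a smooth host from an overloaded one, so long as no realtime channel leaks through.

```python
# Toy guest/host model: the guest sees only the virtual ticks it is given,
# so a host slowdown is invisible from inside unless it "leaks".

class Guest:
    def __init__(self):
        self.virtual_ticks = 0
        self.work_done = 0

    def run_tick(self):
        self.virtual_ticks += 1
        self.work_done += 1

    def perceived_rate(self):
        # From the inside: one unit of work per virtual tick, always "smooth".
        return self.work_done / self.virtual_ticks

class Host:
    def __init__(self, guest):
        self.guest = guest
        self.wall_clock = 0

    def run(self, wall_steps, slowdown):
        # slowdown > 1 means the host is overloaded: the guest gets only
        # one tick per `slowdown` units of wall time.
        for step in range(wall_steps):
            self.wall_clock += 1
            if step % slowdown == 0:
                self.guest.run_tick()

guest = Guest()
host = Host(guest)
host.run(wall_steps=1000, slowdown=10)  # host struggling: 10x slowdown
```

The guest’s perceived rate stays at exactly one unit of work per tick regardless of the slowdown; only an observer with access to the host’s wall clock can tell anything is wrong.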
“okay, but why are things good and bad?” portion is going to turn out to be an unprivileged additional layer imposed by easy-problem-consciousness stuff. I do believe that easy problem stuff is information processing, but I believe it in the sense that there are informational elements—the fundamental building blocks of the universe—and those elements’ informational state is exactly their structural state; and the hard problem of consciousness resides in an unresolvable question of “why should any building block exist locally at all?”. Or in other words, localitypilled something-rather-than-nothing as being the same question as “why does my perspective exist”.
I would agree with this if what you mean by it is that information is being processed, and it seems as though certain information is relevant to consciousness in a way which could plausibly give rise to notions like good and bad as an emergent property. Having written that, I should say that I would expect them to be about the simplest properties which could emerge, and that I am also not completely sure it’s the case: e.g., it could be that what I thought you were suggesting in the last comment (that consciousness simply is logic) is true, or alternatively that there is some kind of unsatisfying way to explain consciousness which doesn’t reduce our confusion using logic.
Reading your paragraph about P-zombies, I would point out that implicit in your use of the words ‘real/realness’ seems to be the assumption that, in this sense, mathematics is not real. But it is logical, so it seems conceivable/coherent/logically valid to imagine a - -zombie, or a P-zombie for that matter, unless the solution to the hard problem of consciousness is of a purely logical nature. Having said that, I agree with your paragraph given your use of the word “realness”, or at least I think I do.
You say:
In other words, I’m a structural realist who also believes there’s something underneath the structures, but that it’s beyond our reach to know what it is, and that we are doomed to always wonder “why” there is something rather than nothing.
Am I correct to read this as you saying you’re what mathematicians would call a ‘platonist’? But that you are not a ‘mathematical universe hypothesist’ in the extreme who thinks that logic is all that there is? That you believe that logic has a fundamental referent which isn’t logical itself? (This referent would presumably be something to do with the local existence/consciousness.) What I meant when I said I possibly disagree was that it seems possible for this referent to be something logically attached to, which is to say, included in the playing out of a physical process, which would be a chain of implication, or maybe something continuous but otherwise similar, in a ‘mathematical universe’, but it could be a separate ‘node’ from any of the physical ones. Though you could reasonably object that this distinction is somewhat artificial; I would distinguish them by stating that the consciousness ‘node’ need not have any effect on the physical ones, even though they would affect it.
the mind I describe there having a good time overall: I dunno, you could make the host mind pretty huge, and then probably not. It depends on the ratio of how much stuff is happening in the host vs happening in the guest.
I think I would agree with this.
My own position would be that it could well be the case that the fundamental logical point of connection with ‘reality’, or alternatively logical constituents of consciousness, exist on such a small scale or such a low level of abstraction, that you could probably have a neurone which behaved exactly (from the perspective of other neurones connected with it) like a neurone with different experiences, so that its host would be completely unaware of this. Maybe there would be some information propagating out of it, but as you mentioned, this might not be noticeable. This would mean that the host would still ‘enter’ your council of cosmopolitans, thinking, and perhaps being logically and philosophically justified in doing so, that its neurones had internal experiences which intuitively matched its own on a larger scale, leading to the same acausal norms and moral value system being derived. I could still be misunderstanding that process though; I will read through it again.
What I described as host is also known as a simulator (device), and the guest is the simulated thing, simulatee. Simulation does not exit the realm of the physical, it just hides that there are smaller scale simulator/host elements from the simulatee/guest. I don’t see how the host level could be unaware, but the guest could be.
I think I might be a constrained sort of platonist. I don’t think every logical referent we can hypothesize in the language of our logic (everything we can say “math-exists”) has to be a real thing which exists outside of our description of it. I do think our universe seems like it ought to be one of many actually real possibilities in a weak Tegmark 4 multiverse, even though the others can’t be confirmed to exist in a physical sense by us; but I’m not convinced that a full Tegmark 4 multiverse is required, where all logically consistent referents exist.
Another way to put it is that logic is in the business of determining what you can say must be true in a given space, starting from some axioms and validity rules; the “really exists” I’m proposing here would be our actual universe’s truth fact. When you use logic to describe objects which existed prior to writing logic on a page, you’re attempting to preserve the truth of facts; if I have an apple, and I have a banana, then I have an (apple banana). But those names could refer to anything; our universe seems to provide us with actual substrate. A mind structure in another universe which does not physically exist on any actual substrate would have the same confusions I’m expressing, and only fails to express them due to not being instantiated. This substrate is sometimes called compute or reality fluid, and I’m proposing it also is sometimes called hard-problem-consciousness.
Perhaps my view is vacuous because all logically consistent structures exist and there is nothing which could separate an “underlying substrate” of those structures; then perhaps my view could be technically vacuously correct but really just be “structural realist platonism with Tegmark 4”, or so.
But this is why I think you can describe minds, and describe what they would do if they existed, without knowing if they’re real outside your description. Under this view, describing a mind makes it real/exist/hard-problem-conscious to the extent you describe it, by carrying it on the same substrate that carries you. You can never meet a p-zombie, in this view. You can only meet minds which really exist, but which are missing features. That’s where I think current AIs fall, for example.
None of what I’ve said in this comment so far directly weighs on your moral valence realism question. I do agree that it’s likely a very small and primitive fact, if it’s a general one at all. I haven’t thought enough about it to describe it eloquently; the rest of my comment is reciting existing views rather than anything new. I’ll ponder it.
I don’t really have much to disagree with in your comment, as I find myself uncertain of whether or not to believe in the mathematical universe hypothesis, something like your ‘substrate’ view, or even a more elaborate description like the ‘Three worlds’ as envisaged/popularized by Roger Penrose. There, the platonic/mathematical world contains all/part of the physical world (by describing its physical laws), which itself contains all/part of the mental world (by containing brains and computers which think of it), which itself can in principle ‘think into existence’ all/part of the mathematical platonic world. This structure is certainly satisfyingly recursive, but it seems unclear to me whether the mental world can be separated from the platonic/mathematical one. Other possibilities seem (to me, though it’s possible I’ve missed a reason to rule them out) to include that there is a physical substrate within which only some matter is imbued with the consciousness fluid, or even that there is a kind of feedback loop in which two or more ‘beings’/‘entities’ simulate and observe one another, thereby making one another conscious without either containing a source of consciousness. This last one seems unsatisfying in the same way in which your IDCC idea seemed unsatisfying to me when I first read it, but I now no longer think that they are so similar. It seems as though the IDCC is a way, much like Acausal Normalcy, of deriving ideas about what one ought to do in particular situations from the fundamental conscious experiences, rather than an explanation of where they come from, as far as I can tell. Is this correct?
Having said that (admittedly I wrote some of it after this paragraph), I will now try to persuade you that the mathematical universe hypothesis, with consciousness inherent to the information, is preferable. When reading your description of the fluid, I was reminded of the idea that light needed an aether through which to propagate in order for Maxwell’s equations to describe that light in a universal way. But it was superseded by the view that any observer defines their own equivalent of the aether, with respect to which the light propagated in a way described by Maxwell’s equations anyway, but which didn’t actually have any objective existence other than as perceived by the observer. Similarly, according to the mathematical universe hypothesis, the thing which differentiates between mathematical objects which are physically real, and those which are only mathematically real, is the observer’s position in the mathematical universe. This eliminates the requirement for an aether/consciousness fluid, by replacing it with an artefact of the way in which the observer is embedded in what was already presupposed to exist within either theory (spacetime/mathematics). There are some small differences, such as that the space-time structure of Newtonian physics differs from the spacetime of special relativity, and the fact that reference frames depend upon velocity rather than position (although that changes in general relativity, I suppose), but overall I would say the similarities are notable. We lack a way to ‘move with respect to the aether’, i.e., move outside the area of the platonic universe covered by this fluid if it exists, so there is no way to test either theory, and this argument is really just an appeal to Occam’s razor.
I like this; well written, sir. It feels very similar to my position. I’ve made no claims of convergence like you have, but I could certainly see myself agreeing. I need to think on it.
A previous comment I’ve made on the topic, in which I argue that the evolution statement G Wood made is the referent your moral realism statement most naturally refers to anyway.
I don’t think it is. Your definition appears to be of a purely logical concept, something which lies in the same ‘plane of existence’ as mathematics. While this is certainly objective, and it certainly is relevant to morality, I would not call it the referent to which my statement refers. Consider a universe in which the experiences of all conscious beings were inverted; the same claims would be true about morality as you describe it, but I would no longer consider the same actions in that universe to be morally right and wrong! Yes, this assumes that conscious experience can be separated from the physical and logical structure of the beings experiencing it, which it quite likely(in my opinion) can’t, but I still think it makes sense to imagine that it was. Admittedly I only skimmed over your argument, but I think I read it before and had the same thought.
Despite my intuitive approach to logical thinking being somewhat explosion-proof, I don’t think I can evaluate the counterlogical “the experiences of all conscious beings were inverted” in a way that is meaningful here; in my intuitive representation it seems to be the case that the variable “positive/negative valence of experience of all conscious beings” is causally efficacious, so inverting it would have the effect of making those beings avoid the negative valences; my primary candidate intuitive sketch for what this variable boils down to is “something information theoretic, possibly literally just any increase in entropy that was trying to be controlled away”. The logical concept I was describing contains all possible minds, and so should depend on the structure of those minds in their origin universes in order to make sense; my claim that it is objective is that I believe you likely can generalize across all minds in all universes with compatible basic properties[1], and get something that makes sense. I agree with you that there’s likely an underlying basic valence fact, but I think that that valence fact is causally entangled, and I also believe that it only “matters morally” due to the way it affects minds in the “junior rooms” in the “interdimensional council of cosmopolitanisms”.
(@G Wood see this subthread as answer to your question)
(eg, universes without important conservation laws might be too alien for the same moral properties to apply, or something; generally, there might be a class of sufficiently-similar physics and a broader class of too-different physics, where the sufficiently-similar physics produces minds that, if they “visit the interdimensional council of cosmopolitanisms”, they find themselves unable to translate to and from the views of minds in universes with no conservation laws or halting oracles or something.)
“I don’t think I can evaluate the counterlogical “the experiences of all conscious beings were inverted” in a way that is meaningful here; in my intuitive representation it seems to be the case that the variable “positive/negative valence of experience of all conscious beings” is causally efficacious,”
My wording was confusing so I should clarify that I don’t think it’s counterlogical. I just don’t think it’s possible in the same way that violating the laws of physics might be impossible. You might argue that logic dictates that universes with certain laws of physics predominate in the platonic world, but I still think it’s coherent to imagine there being some in which the laws are different; similarly, I think it’s coherent to imagine a brain which is identical to one which experiences pleasure to an outside observer, but which experiences pain. In this physical universe, and most like it, I don’t expect such brains to exist.
“I agree with you that there’s likely an underlying basic valence fact, but I think that that valence fact is causally entangled, and I also believe that it only “matters morally” due to the way it affects minds in the “junior rooms” in the “interdimensional council of cosmopolitanisms”.” Can you elaborate on this?
so, like, background: let’s say that the “interdimensional council of cosmopolitanisms” is the space of minds that have cosmopolitan inclinations; I expect this to be a natural group to “flood fill” because imagining one makes you think through what they imagine, which means you get a transitive effect, if you weren’t going to imagine a world you consider to be a hellworld, but a mind you think is in a similar-ish universe to you does think it’s important to imagine the hellworld, then as long as your approach to mapping mindspace is sufficiently efficient, you’ll notice that that mind would consider that hellworld, and think through what goes on in that hellworld. that’s a necessary premise, because otherwise you don’t get enough coverage of mindspace if you start from “minds that have cosmopolitan inclinations and you find natural to imagine”, call that your IDCC entrypoint; that’s already a filter, and it needs to end up being sufficiently inclusive for this idea to work, and then it needs to do a second, transitive filter on the remaining minds, so as to pick a moral coalition that actually covers the space and identifies the moral properties on which there are consensus.
okay, so then there are different sorts of minds in the IDCC. I claim that among those minds are very proto-mind-ish things, like bacteria, or individual neurons, or whatever you think the threshold is; even if your seed minds don’t import them, as long as your IDCC entrypoint includes me, then because my process of “visiting the IDCC” involves thinking through my individual neurons and individual proteins as having mind-ness that aggregates up into my full mind, I end up importing individual neurons and proteins as being things I consider to be intelligent and worthy of having moral behavior with each other, for the reason that doing so seems to be what carries valence for my aggregate mind.
And so when I consider how both the me and the components of me get those negative valence experiences, and I think through the causal path to achieving them, they seem to be fundamentally causally entangled with physics in some way; that is, the negative valence is not merely because, but is made of, the physical state of my neuron being in some way informationally degraded, such that the neuron and the brain it’s in both operate worse until the physical issue is resolved. The “neurons room” of the IDCC, where neurons are considered to be individual minds, has larger minds like myself “enter the room” and ponder the neurons and bacteria and other single cells inside that room, and these larger minds find a structure in the neurons where their negative valences reliably relate to an information theoretic property.
So to invert the valence, my sense is you’d need to invert that property.
But inverting that property seems to break the mind; if the mind isn’t broken, the property is not inverted, because the marginal brokenness is the marginal negative valence, and so inverting the mind so that positive things are negative requires those positive things to be made of noise or something like that; it requires those positive things to be made of brokenness at some relevant scale.
The only way I see to achieve “a mind appears to be me having a good time, but is actually having a bad time” is if you can make a mind which is made of brokenness but is just barely functioning, and that mind is coordinating to become me at a slightly larger scale, without leaking the bad-time-at-the-small-scales into good-time-at-larger-scales. so the mind having a good time is still objectively real, in the same way a wave is objectively real whether it’s carried on water or on a computer running a fluid sim. The wave really does move between the coordinates of the system through the locality of interaction, even if those coordinates are folded up into a ram chip.
Thanks for writing so much.
I think I could potentially see myself as agreeing with much of this (though I’d have to think about it much more), but I think I’ve identified a point of divergence:
“And so when I consider how both the me and the components of me get those negative valence experiences, and I think through the causal path to achieving them, they seem to be fundamentally causally entangled with physics in some way; ”
(Horosphere agrees)
“that is, the negative valence is not merely because, but is made of”
(Horosphere possibly disagrees)
“, the physical state of my neuron being in some way informationally degraded, such that the neuron and the brain it’s in both operate worse until the physical issue is resolved.” I would say that there are two possibilities. Either consciousness is a phenomenon which is attached to information processing, or it is information processing. It seems you believe the latter, in which case I’m not sure what to make of it. I don’t think it’s impossible, although I am not sure (have no idea) how to think about it. I would assume that the information being processed would have to be incredibly simple, in which case this would be what pleasure, or pain, really was, and morality would consist of ‘working out how it is distributed’. This would, in my opinion, involve a logical decision theory and might lead to Acausal cooperation and Acausal normalcy. However, I would not say that’s exactly what you’ve described, partly because your council still excludes a lot of minds.
I agree that you would need to invert properties in a way which would, within the same physical universe, cause the beings to behave differently:
“The only way I see to achieve “a mind appears to be me having a good time, but is actually having a bad time” is if you can make a mind which is made of brokenness but is just barely functioning, and that mind is coordinating to become me at a slightly larger scale, without leaking the bad-time-at-the-small-scales into good-time-at-larger-scales. so the mind having a good time is still objectively real, in the same way a wave is objectively real whether it’s carried on water or on a computer running a fluid sim. The wave really does move between the coordinates of the system through the locality of interaction, even if those coordinates are folded up into a ram chip.” Upon reflection, I think I agree with this paragraph. I don’t understand the ‘leaking process’ fully, though. Would you consider the mind you describe there to be having a good time overall?
I believe that the hard problem of consciousness boils down to “why is there something rather than nothing, from my perspective, right now as I write this or think this?” and that the “okay, but why are things good and bad?” portion is going to turn out to be an unprivileged additional layer imposed by easy-problem-consciousness stuff. I do believe that easy problem stuff is information processing, but I believe it in the sense that there are informational elements—the fundamental building blocks of the universe—and those elements’ informational state is exactly their structural state; and the hard problem of consciousness resides in an unresolvable question of “why should any building block exist locally at all?”. Or in other words, localitypilled something-rather-than-nothing as being the same question as “why does my perspective exist”.
And so I don’t really think the hard problem is terribly relevant. I’m not at all saying it’s easy or doesn’t exist, and I do think people who say that are missing something. But I don’t believe p-zombies can exist in a real universe, because “realness” being missing is the thing that makes something a p-zombie; I think that we are beyond the reach of god, but also have this weird thing where we actually exist. A p-zombie would say the same thing; the math that defines its universe also fully specifies that it would be confused by existing, but since (by definition) it exists in the math sense but not in the actuality sense, it never gets run. In other words, I’m a structural realist who also believes there’s something underneath the structures, but that it’s beyond our reach to know what it is, and that we are doomed to always wonder “why” there is something rather than nothing.
the mind I describe there having a good time overall: I dunno, you could make the host mind pretty huge, and then probably not. It depends on the ratio of how much stuff is happening in the host vs happening in the guest.
The isolation I was talking about is the same kind that happens for virtualization on computers. An example of leaking would be if the external sound driver has buffer underruns and these cause buffer underruns in the guest (not 100% sure this can happen, but I think so), and similar such things. Or, if the host has faulty RAM, the guest will have it too; those would be leaks. If those aren’t happening, then even while the host is actually swapping like mad, the CPU is overwhelmed, RAM is full, and the hard disk is taking a long time to do anything, a guest whose clock is not realtime could be running smoothly as far as it can tell, in principle unable to notice that anything is wrong. That’d be the isolation at hand.
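The clock-based isolation described here can be illustrated with a toy model (a minimal sketch; all class and variable names are hypothetical, not from any real hypervisor): the guest’s only clock is virtual, advancing one tick per instruction the host lets it execute, so host-side stalls are invisible to it.

```python
import time


class Guest:
    """Toy guest whose only clock is virtual: it ticks once per
    instruction the host grants it, never measuring real time."""

    def __init__(self):
        self.vclock = 0          # virtual time, in executed steps
        self.observed_gaps = []  # anomalies the guest could notice

    def step(self):
        before = self.vclock
        self.vclock += 1         # one instruction = one virtual tick
        # The guest checks for lost time using the only clock it has;
        # by construction the difference is always exactly one tick.
        if self.vclock - before != 1:
            self.observed_gaps.append(self.vclock)


class Host:
    """Toy host that may stall (swapping, overloaded CPU) for some
    real time between the steps it grants the guest."""

    def __init__(self, guest, stall_seconds=0.0):
        self.guest = guest
        self.stall_seconds = stall_seconds

    def run(self, steps):
        for _ in range(steps):
            time.sleep(self.stall_seconds)  # real time passes on the host...
            self.guest.step()               # ...but the guest sees only vclock


guest = Guest()
Host(guest, stall_seconds=0.001).run(100)
# However badly the host stalls, the guest's virtual clock advances
# perfectly smoothly, so it records no anomalies.
print(guest.vclock, guest.observed_gaps)  # 100 []
```

A leak, in this picture, would be any channel that lets real time (or host faults) affect the guest’s observations, such as a realtime clock source or shared faulty memory.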
You say
I would agree with this if what you mean by it is that information is being processed, and it seems as though certain information is relevant to consciousness in a way which could plausibly give rise to notions like good and bad as an emergent property. Having written that, I should say that I would expect them to be about the simplest properties which could emerge, and that I am also not completely sure it’s the case, e.g. it could be that what I thought you were suggesting in the last comment, i.e. that consciousness simply is logic, is true, or alternatively that there is some kind of unsatisfying way to explain consciousness which doesn’t seem to reduce our confusion using logic.
Reading your paragraph about P-zombies, I would point out that implicit in your use of the words ‘real/realness’ seems to be the assumption that, in this sense, mathematics is not real. But it is logical, so it seems conceivable/coherent/logically valid to imagine a - -zombie, or a P-zombie for that matter, unless the solution to the hard problem of consciousness is of a purely logical nature. Having said that, I agree with your paragraph given your use of the word “realness”, or at least I think I do.
You say:
Am I correct to read this as you saying you’re what mathematicians would call a ‘platonist’? But that you are not a ‘mathematical universe hypothesist’ in the extreme who thinks that logic is all that there is? That you believe that logic has a fundamental referent which isn’t logical itself? (This referent would presumably be something to do with the local existence/consciousness.) What I meant when I said I possibly disagree was that it seems possible for this referent to be something logically attached to, which is to say, included in the playing out of a physical process, which would be a chain of implication, or maybe something continuous but otherwise similar, in a ‘mathematical universe’, but it could be a separate ‘node’ from any of the physical ones. Though you could reasonably object that this distinction is somewhat artificial; I would distinguish them by stating that the consciousness ‘node’ need not have any effect on the physical ones, even though they would affect it.
I think I would agree with this.
My own position would be that it could well be the case that the fundamental logical point of connection with ‘reality’, or alternatively logical constituents of consciousness, exist on such a small scale or such a low level of abstraction, that you could probably have a neurone which behaved exactly (from the perspective of other neurones connected with it) like a neurone with different experiences, so that its host would be completely unaware of this. Maybe there would be some information propagating out of it, but as you mentioned, this might not be noticeable. This would mean that the host would still ‘enter’ your council of cosmopolitans, thinking, and perhaps being logically and philosophically justified in doing so, that its neurones had internal experiences which intuitively matched its own on a larger scale, leading to the same acausal norms and moral value system being derived. I could still be misunderstanding that process though; I will read through it again.
What I described as host is also known as a simulator (device), and the guest is the simulated thing, simulatee. Simulation does not exit the realm of the physical, it just hides that there are smaller scale simulator/host elements from the simulatee/guest. I don’t see how the host level could be unaware, but the guest could be.
I think I might be a constrained sort of platonist. I don’t think every logical referent we can hypothesize in the language of our logic, which we can say “math-exists”, has to be a real thing which exists outside of our description of it. I do think our universe seems like it ought to be one of many actually real possibilities in a weak Tegmark 4 multiverse, despite that the others can’t be confirmed to exist in a physical sense by us; but I’m not convinced that a full Tegmark 4 multiverse is required, where all logically consistent referents exist.
Another way to put it is that logic is in the business of determining what you can say must be true in a given space, starting from some axioms and validity rules; the “really exists” I’m proposing here would be our actual universe’s truth fact. When one uses logic to describe objects which existed prior to writing logic on a page, you’re attempting to preserve the truth of facts; if I have an apple, and I have a banana, then I have an (apple banana). But those names could refer to anything; our universe seems to provide us with actual substrate. A mind structure in another universe which does not exist on any actual substrate physically would have the same confusions I’m expressing and only does not do so due to not being instantiated. This substrate is sometimes called compute or reality fluid, and I’m proposing it also is sometimes called hard-problem-consciousness.
Perhaps my view is vacuous because all logically consistent structures exist and there is nothing which could separate out an “underlying substrate” of those structures; then perhaps my view could be technically, vacuously correct, but really just be “structural realist platonism with Tegmark 4”, or so.
Hence why I think you can describe minds, and describe what they would do if they existed, without knowing if they’re real outside your description. Under this view, describing a mind makes it real/exist/hard-problem-conscious to the extent you describe it, by carrying it on the same substrate that carries you. You can never meet a p-zombie, in this view. You can only meet minds which really exist, but which are missing features. That’s where I think current AIs fall, for example.
None of what I’ve said in this comment so far directly weighs on your moral valence realism question. I do agree that it’s likely a very small and primitive fact if it’s a general one at all. I haven’t thought about it enough to be able to describe it eloquently; the rest of my comment is reciting existing views rather than anything new right now. I’ll ponder it.
I don’t really have much to disagree with in your comment, as I find myself uncertain of whether or not to believe in the mathematical universe hypothesis, something like your ‘substrate’ view, or even a more elaborate description like the ‘Three worlds’ as envisaged/popularized by Roger Penrose, where the platonic/mathematical world contains all/part of the physical world (by describing its physical laws), which itself contains all/part of the mental world (by containing brains and computers which think of it), which itself can in principle ‘think into existence’ all/part of the mathematical platonic world. This structure is certainly satisfyingly recursive, but it seems unclear to me whether the mental world can be separated from the platonic/mathematical one. Other possibilities seem (to me, though it’s possible I’ve missed a reason to rule them out) to include that there is a physical substrate within which only some matter is imbued with the consciousness fluid, or even that there is a kind of feedback loop in which two or more ‘beings’/‘entities’ simulate and observe one another, thereby making one another conscious without either containing a source of consciousness. This last one seems unsatisfying in the same way in which your IDCC idea seemed unsatisfying to me when I first read it, but I now no longer think that they are so similar. It seems as though the IDCC is a way, much like Acausal Normalcy, of deriving ideas about what one ought to do in particular situations from the fundamental conscious experiences, rather than an explanation of where they come from, as far as I can tell. Is this correct?
Having said that (admittedly I wrote some of it after this paragraph), I will now try to persuade you that the mathematical universe hypothesis, with consciousness inherent to the information, is preferable. When reading your description of the fluid, I was reminded of the idea that light needed an aether through which to propagate in order for Maxwell’s equations to describe that light in a universal way. But it was superseded by the view that any observer defines their own equivalent of the aether, with respect to which the light propagated in a way described by Maxwell’s equations anyway, but which didn’t actually have any objective existence other than as perceived by the observer. Similarly, according to the mathematical universe hypothesis, the thing which differentiates between mathematical objects which are physically real, and those which are only mathematically real, is the observer’s position in the mathematical universe. This eliminates the requirement for an aether/consciousness fluid, by replacing it with an artefact of the way in which the observer is embedded in what was already presupposed to exist within either theory (spacetime/mathematics). There are some small differences, such as that the space-time structure of Newtonian physics differs from the spacetime of special relativity, and the fact that reference frames depend upon velocity rather than position (although that changes in general relativity, I suppose), but overall I would say the similarities are notable. We lack a way to ‘move with respect to the aether’, i.e., to move outside the area of the platonic universe covered by this fluid if it exists, so there is no way to test either theory, and this argument is really just an appeal to Occam’s razor.
I like this; well written, sir. It feels very similar to my position. I’ve made no claims of convergence like you have, but I could certainly see myself agreeing. I need to think on it.