I think of myself as in camp 2 — I believe there is a fundamental sense of experience which is metaphysically independent of the physical description, I just don’t think it’s very mysterious.
Regardless of which camp is right or what the right metaphysical property is, I claim that a superintelligence would be able to deduce that such aliens would have the camp 2 intuitions, and that they would postulate certain metaphysical properties which it could accurately describe in broad terms (it might believe it’s all nonsense, but if it is true, then it would be able to see the local validity of it).
For a superintelligence, thinking carefully about something is almost as good as actually observing and interacting with it, at least when it comes to the broad shape of things.
Thanks for the clarification. I’m surprised by this, and I think it presents a problem for your view in the 0P/1P logic post you linked.
If I understand your view correctly, you want to say:
There are two effective ways of reasoning about concepts; 0P which is a third-person type of semantics and 1P which is a first-person type of semantics. Roughly, 1P logic is a way of reasoning about centered-worlds and self-locating facts.
When a system acquires a new input (such as a robot gaining a blue sensor), it would need to use 1P logic to reason effectively about it.
In this way, a lot of what we call “hard-problem intuitions” can be explained by the apparent conceptual isolation between concepts that require 0P and 1P logic. 1P concepts don’t reduce to 0P concepts so it feels like there’s an extra fact.
An unconscious superintelligence could fully grasp the formal difference between 0P and 1P logic and would therefore realise that evolved minds whose cognition involves 1P concepts would make camp 2 style metaphysical claims.
(Camp 2) These metaphysical properties do actually exist and the 1P logic is tracking something genuine.
As far as I can tell, this is the Phenomenal Concept Strategy, i.e. from purely third-person facts about a cognitive architecture, we’re supposed to see that the system will form these primitive “experience” concepts, treat them as metaphysically independent, and form hard-problem intuitions. My worry is that once you pair this with Camp #2 style phenomenal realism it’s vulnerable to Chalmers’ dilemma. Roughly, you have two horns to choose from:
Horn 1: If phenomenal concepts are defined too strongly then a zombie won’t have those concepts, in which case, these concepts are part of the hard problem and require an explanation.
Horn 2: If phenomenal concepts are defined too weakly then zombies can have them. In this case, they don’t explain the Hard Problem of consciousness (since zombies don’t have phenomenal experience).
To put this in the language of your 0P/1P post, if “possessing phenomenal concepts” just boils down to being the kind of system that needs 1P logic to reason effectively about its internal states then zombies would possess them and zombies would be conscious which is a contradiction.
You could get around this by saying zombies are not conceivable (because any entity capable of deploying 1P logic is conscious), but this would kind of defeat the purpose of our thought-experiment, as the superintelligence wouldn’t be unconscious if it can accurately reason about 1P logic.
That sounds about right. I simply disagree with Chalmers’ dilemma (at least as you describe it).
In my view, this metaphysical fact is necessary but not sufficient for explaining the Hard Problem. It applies to “zombies” in a fairly trivial way. A phenomenal experience is a type of experience (in my 1P sense), and must be understood in this frame — but not all such experiences are phenomenal. I don’t claim to know what exactly makes an experience phenomenal, but I’m pretty sure it will be something with non-trivial structure, and that this structure will sync up in a predictable way with the 0P explanation of consciousness.
If you’re permitting a difference between 1P functional concepts and 1P phenomenal concepts then I’m happy to grant that an unconscious superintelligence would possess all the functional 1P resources and notice a kind of “functional analogue” of the hard problem intuitions due to the conceptual isolation of 0P/1P.
I’d push back if you’re making the stronger claim that the unconscious superintelligence would be able to fully grasp the actual hard problem of consciousness in anything like the sense that we do when we appeal to our 1P phenomenal concepts. By stipulation, it doesn’t possess 1P phenomenal concepts so it could never really “grok” the hard problem in the same way that we do. If it doesn’t possess the concepts I don’t see why it would be motivated to think evolved alien minds have genuinely additional metaphysical properties rather than just a certain kind of sophisticated self-model that lets them talk the way they do.
I don’t claim to know what exactly makes an experience phenomenal, but I’m pretty sure it will be something with non-trivial structure, and that this structure will sync up in a predictable way with the 0P explanation of consciousness.
I’m not 100% sure if I’m interpreting this correctly. If the claim is that an ideal 0P observer would, in principle, be able to tell which concepts were 1P phenomenal for a given entity purely from 0P information and absent any of its own 1P phenomenal data points then I disagree and this is a crux for me.
I am making the stronger claim. I claim it could in-principle simulate us deeply enough to pull out the 1P phenomenal concepts, and could self-modify so as to legitimately experience them if it so chooses. It would be motivated to think this through carefully because it’s a huge part of our values (at least as we understand them), as long as it was interested enough to try to understand us (including as a special case of generic aliens) as agents at all.
I don’t believe there’s anything metaphysically “magical” going on such that it couldn’t or wouldn’t see this. Probably why I feel camp 1-ish.
As for the last point, my point of view is that any agent has a “bridge prior” which allows them to connect their 0P models with their 1P model. So I claim that in a sort of trivial way… it will have some prior here, and whatever the bridges spit out will inform what it deduces about the 1P experiences at play. I additionally claim that simple bridge priors will be adequate for finding 1P phenomenalism, and that you would have to have a pretty unnatural one in order to avoid seeing this.
I claim it could in-principle simulate us deeply enough to pull out the 1P phenomenal concepts
I don’t think this is right. A simulation (even an extremely detailed one) is ultimately only telling you about relational/dispositional facts, i.e. given a physical state P it will evolve to state P’. It doesn’t say anything about the associated phenomenal state Q.
I additionally claim that simple bridge priors will be adequate for finding 1P phenomenalism, and that you would have to have a pretty unnatural one in order to avoid seeing this.
Ok I think this is the heart of the matter. I read the OP’s original point (“an unconscious super-intelligence would not guess that alien minds are conscious”) as essentially saying the most natural bridge prior for an unconscious system to posit is a null one i.e. that no real bridge exists.
Why think an unconscious system would be motivated to posit a bridge prior in which phenomenal properties actually exist? A prior that connects functional states to real phenomenal states is more complex than a prior in which phenomenal states are not real properties. The only reason to introduce the more complex prior is if the system had access to data points which require it as an explanation, i.e. if it had access to phenomenal states Q.
If the bridge exists like I think it does, then it does say something about Q. But yeah, the bridge is necessary for this to work.
Under my framework, the bridge is necessary for connecting indexical (1P) statements with classical logical (0P) ones. Any system that is actually instantiated somewhere and has sensors will metaphysically have both of these (as in, once you fully specify everything that ‘actually instantiated somewhere’ means, you will have to have posited both of these for it), and a null bridge does not explain its own sensory data.
So I see the unconscious system as already having all of this inherently, and that the phenomenal structure is perhaps foreign, but understandable within the 1P side the same way a cell is understandable to it on the 0P side. Even if it does not itself experience the 1P things here as true, it will be able to use the bridges to understand the perspective of another agent that does have the phenomenal structure.
This is a cool position. Thanks for taking the time to explain it in so much detail.
I think I can see where we’re diverging. You want to place the metaphysical bridge at the 0P → 1P step, and because there’s something metaphysically substantial happening in this bridge, it’s a camp-2 position. But within the 1P side you’re treating 1P phenomenal states as a subset of the full 1P space, and what links the phenomenal states to the other 1P states is some kind of structural relation. This idea is very camp 1-ish to me; the extra work is being done by structural/relational links between phenomenal and non-phenomenal states in 1P.
By contrast, I want to place the metaphysical bridge between the physical and phenomenal states P → Q. This means I’m rejecting the claim that Q is related to P by purely structural relations or dispositional properties. This is also why I said the most parsimonious bridge was a null one unless you had access to Q. I agree the null bridge is incoherent if what you’re talking about is the 0P → 1P link that an instantiated agent needs to access its sensors. But that’s not the bridge the unconscious superintelligence needs. It needs a bridge to Q, and since it doesn’t possess Q it could coherently postulate a null bridge between P and Q. From its perspective, the null bridge would also be most parsimonious if it truly didn’t possess Q.
I’ve kind of sketched my reasons for thinking structural and dispositional properties don’t yield Q in the rest of the thread, but I’ll throw in one more: from my perspective the structural/dispositional properties are inherently 0P; there are no 1P categorical/intrinsic properties which describe “what it’s like”.
So when you say:
the phenomenal structure is perhaps foreign, but understandable within the 1P side the same way a cell is understandable to it on the 0P side
I reject the analogy. On my view, your link between the non-phenomenal 1P and phenomenal 1P is still structural/relational and these properties are always 0P in nature.
This might be a natural point to end the conversation as I think we’re at a point where our intuitions lie on opposite sides of a pretty large crux. But I’m happy to continue if you think there’s another angle I’m missing.
Thanks for pushing me to describe it better! This has been a lovely discussion.
I agree there is something very camp 1-ish about the idea (and just me as a person, frankly).
So your Q is not even a type of 1P thing, is that right? I’m not sure what sort of thing your Q is supposed to be, which I suppose is what my side of the crux looks like. (I kind of suspect that if you are right about Q, then I do not have access to it myself.)
I also (regardless of my other points and arguments) think you are wrong that structural/relational properties are always 0P! I think 0P can’t actually even have a proposition like “I always see blue right after I see red”, which still needs to use indexicals in order to refer. There’s a similar-seeming proposition “Environment X has a red-blue light sequence” on the 0P side which is not actually the same (e.g. what if I’m not actually in that environment?).
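The 0P/1P gap here can be made concrete with a toy semantics (a sketch of my own, not taken from the linked post; the world representation and function names are hypothetical): a 0P proposition is a predicate on worlds, while a 1P proposition is a predicate on *centered* worlds, i.e. (world, agent) pairs, so the indexical claim has no center-free counterpart.

```python
# Toy semantics for the 0P/1P distinction (illustrative sketch only).
# A world records, per agent, the sequence of colors that agent sees.
world_x = {"alice": ["red", "blue"], "bob": ["green", "green"]}

def env_has_red_blue(world):
    """0P: 'Environment X contains a red-then-blue sequence' --
    a predicate on the world itself, no center needed."""
    return any(seen[i:i + 2] == ["red", "blue"]
               for seen in world.values()
               for i in range(len(seen) - 1))

def i_see_blue_after_red(world, me):
    """1P: 'I see blue right after I see red' -- only evaluable
    relative to a center fixing who 'I' refers to."""
    seen = world[me]
    return any(seen[i:i + 2] == ["red", "blue"]
               for i in range(len(seen) - 1))

# The 0P proposition is true of world_x outright...
assert env_has_red_blue(world_x)
# ...but the 1P proposition's truth varies with the center:
assert i_see_blue_after_red(world_x, "alice")
assert not i_see_blue_after_red(world_x, "bob")
```

Note that no function of the world alone can recover the 1P proposition: the two centers disagree within the very same world, which is the sense in which the 0P analogue "is not actually the same".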
To me, “what it’s like” grounds to something like: an experience that there is something I observe which has its own 1P experiences (and a prediction of what those might be based on my observations). Phenomenal consciousness is then maybe something like: the observation that there is an observable entity ‘self’ such that ‘what-it’s-like_self(to see red)’ implies ‘to see red’. And this sort of fixed-point thing is inherently really weird and slippery just from a pure math point of view, e.g. Löb’s theorem (imagine ‘what-it’s-like’ as the box), which has the infamous Gödel’s 2nd Incompleteness theorem as a special case. And all of this is inherent to the 1P side; only on the 0P side can you just reduce things to neurons or atoms or whatever (though I claim a simple bridge would still reveal the 1P structure just from the 0P side). This formulation is speculative and off-the-cuff, and only intended to gesture at the sort of structure I think is possible here.
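For readers who want the formal statement being gestured at (the reading of the box as ‘what-it’s-like’ is speculative, but the theorems themselves are standard):

```latex
% Löb's theorem, for a theory T with provability predicate \Box:
\text{if } T \vdash \Box P \rightarrow P, \text{ then } T \vdash P.
% Gödel's 2nd incompleteness theorem is the special case P = \bot:
% if T proves \Box\bot \rightarrow \bot (i.e. its own consistency),
% then T \vdash \bot, so T is inconsistent.
```

Under the speculative reading above, the endorsed implication ‘what-it’s-like_self(to see red) → to see red’ has exactly the shape of a Löbian hypothesis, which is why the fixed-point structure is so slippery.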
And happy to leave the discussion here if you’re done, but I am curious to know what you think of this idea.
Let me explain my view in a little more detail—it’s worth noting that I hold it pretty tentatively (at around 60% confidence), but I think you’ll find it appealing and hopefully see where it parts ways with your view.
If you hold a blue image in your mind, there’ll be something it’s like for you to experience blue. Call that Q. Now if you hold a red image in your mind, there’ll be a corresponding red experience that’s different to the blue one. Call it Q’. Hopefully you know what I’m talking about here! Some people with very strong camp 1 intuitions aren’t even willing to grant this, but I feel like if we’ve gotten this far in the thread we have some common ground here.
On my view, there is a fact of the matter about what blue and red look like for you, and this is underdetermined by the physical/dispositional properties. The physical/dispositional properties could be held constant and these blue/red experiences could vary in principle. Granted, there are a lot of structural constraints, e.g. cone cells in your retina, reflectance of surfaces, wiring of your brain, etc. But I claim that even if physics were fully fixed, some aspect of your experience could vary in principle.
More precisely, a complete description of physics would tell you everything about the dispositional/relational properties of physical particles. Specifically, given a state P it will tell you how it evolves to P’. An example is an electron with charge q and mass m moving in an electric and gravitational field. The physics fully specifies the dispositional properties of the particles, e.g. the electron will move in such-and-such a way. But this doesn’t tell you about any of their essential properties. If you switched the mass with something that played the same role but was intrinsically different (call it schmass), would that change anything? On standard physics, it wouldn’t matter what was playing the mass-role itself, only that the structural form of the equations is intact.
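To make the role/occupant point concrete with the textbook dynamics (nothing here beyond standard classical physics):

```latex
% Equation of motion for a particle of charge q and mass m in an
% electric field \mathbf{E} and gravitational field \mathbf{g}:
m\,\ddot{\mathbf{x}} = q\,\mathbf{E} + m\,\mathbf{g}
% The equation constrains only the role: substituting any intrinsic
% property that enters the dynamics in the same way ("schmass")
% leaves every trajectory \mathbf{x}(t) unchanged.
```

So a complete dynamical description fixes the relational structure of the world while leaving open what, intrinsically, occupies the mass-role.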
On my view however, it does matter. The particles have an additional categorical/essential property that fixes something about the world. Importantly, these properties are physical in some sense (they’re all part of the same “stuff” that physicists talk about) but they’re not captured by the normal relational/dispositional properties of physics. This view is called Russellian Monism.
So with this formalism in place it actually connects up quite nicely with the other components of your view. The 0P/1P framework gives a nice overview of the difference between describing the disposition of the state (0P) and tokening the categorical essence of the state (1P). The hard problem intuitions just fall straight out of the difference between 0P/1P. Also on this view there are no zombies, since duplicating the physical particles necessarily duplicates their categorical properties — so there’s no gap between what I’ve been calling functional 1P and phenomenal 1P. As soon as experiences enter 1P they’re phenomenal.
Where this differs from your view is that I think you need a categorical property to fix the phenomenal character of certain states, whereas on your view it seems like you’re using the bridging law + structure to fix phenomenal character. In my previous comments I’m mostly pressing you on how much work structure is doing in your framework. If you’re happy for bridging laws to provide the jump, then our views actually become really close.
Your point about Löb’s theorem is interesting and it seems like it could be a nice formalisation of the 0P/1P idea. I’d just emphasise that it’s still a structural argument for why 1P/phenomenal talk is really tricky—it doesn’t give you a metaphysical explanation for why 1P has a “what it’s likeness” in the first place. For this you need the bridging laws or the categorical properties.