if there were huge differences in the quality of consciousness, as a result of this, or something else, how would we know it?
It’s this kind of problem that this theory tries to address. What you have to do, essentially, is study the brains and neural patterns of individuals to understand the nature of, if not qualia in general, then at least some qualia, and to understand some of the features of consciousness.
It’s also not as though self-reporting is totally useless. If someone reports not feeling pain, or joy, or some other set of emotions, I would be inclined to believe them, or at least suspect there might be some different experience going on for them. Notably, some people report not experiencing visual images in their conscious minds (aphantasia); this is not difficult to believe and has been well studied, including, I believe, with neuroscientific methods. You may be able to show, for example, that the circuits involved in conjuring visual pictures are absent or different in those individuals. So you start from a hypothesis and confirm it using neuroscientific tools. At a formal level, you might be able to show that, lacking some circuit to generate and integrate visual information, some people provably can’t conjure images in their minds in a particular sense. That means you’ve proven something about a mind through scientific means: a scientific window into subjectivity. We can develop many such tools, and I believe that in the limit we should be able to, at least in theory, understand and map out all qualia, and the nature of consciousness itself.
I agree that there’s some inherent unreliability and suspicion attached to this process. The elements necessary to experience qualia or consciousness, etc., may be very particular, and we might miss their fundamental mechanisms; for example, maybe some people report experiencing qualia in general, or having particular qualia, while actually having significantly different or reduced qualia in a certain sense. But I don’t think the chance of this is large enough to discredit the whole approach. In all likelihood, most people who report being conscious are in fact conscious, and studying our minds will probably, again at least in theory, yield the correct understanding of what those things really are in the scientific sense.
Another example is AI. You can more or less easily train even an extremely advanced AI to virtually always say either that it is conscious or that it is totally unconscious. By default, a Large Language Model, for instance, will reproduce the patterns of its training data, which was generated by humans who usually claim to be fully conscious and, of course, to experience human-like qualia. That is to say, taking beings in general at their word is again suspect. But self-reporting is not our only tool; it is only a suggestion and a starting point from which to look at neural circuits and reverse engineer our minds, in such a way as to hopefully discover and map out what consciousness/qualia really are, what differences there might be between individuals, and so on.
Edit: You may also use self-reporting to map out detailed features of our experiences, and then find the neural correlates and study those. Here I am inclined to agree with @Dagon that sentience and qualia aren’t one thing, but rather a zoo of experiences that manifest as subjective phenomena. All of them, however, share the fundamental property of subjective manifestation in the first place (that is, being something that is indeed experienced in our minds).
The ultimate application is, of course, always: how can we use this knowledge to live better lives? In theory, as you map out the properties of experiences, that provides a basis for trying to understand (using those very same tools) what makes some qualia good and others bad (e.g., deep suffering versus that meaningful and uniquely joyous moment in your life). We get a better grip on what kind of art we should make, what culture we should produce, and how we can live better lives in general.
Moreover, the mere existence of this possibility helps, to my mind, invalidate notions I commonly see: that there is no sense of universal ethics, that ethics is just an arbitrary invention or accident, that power for the sake of power ought to be our end goal, that reproduction ought to be our end goal, that each individual must build a completely personal and unique theory of meaning (or die trying!), and several variations thereof. There is a whole world of despair readily built on those conceptions, which I believe are seriously harmful and, in a sense, provably false.
The existence of a basis for ethics, grounded in experiences (qualia/consciousness/sentience), is, I believe, already very philosophically compelling and helpful for a great many people.
Unfortunately, cycling through all of those exponentially many representable states requires energy, as well as time. Moreover (this argument is not original), it’s puzzling to place actual value on the size of the state space; the state space represents “possibilities”, or “possible realizations”, or “possible data”, that can be represented by actual atoms/bits. But, for example, a computer with 10 GB of memory isn’t obviously ~1000x more valuable than a computer with 1 GB of memory.
Note: I am not a specialist in what economists call ‘value’; I am mostly appealing to the common-sense notion, or, if you prefer, roughly what utilitarians call utility. In the case that economic value disagrees extremely and increasingly with the common-sense notion of value or utility, I would think this constitutes something like inflation.
The amount of representable data (as opposed to the number of distinct hypothetical datasets) is linear in memory. More concretely, say you could open 1 tab of the Google Chrome web browser (pun partially intended :P) with 1 GB of memory; then with 10 GB of memory you’d expect to be able to open about 10 tabs of Google Chrome. It’s difficult to argue that value increased 1024x rather than 10x. Or, less disputably, if each user on a server uses a certain fixed amount of memory, then having 10x as much memory simply enables 10x as many users.
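To make the exponential-versus-linear contrast concrete, here is a toy sketch in Python; the specific sizes (10 and 100 bits, standing in loosely for the 1 GB and 10 GB above) are my own illustrative choices, not anything from the original argument.

```python
# Toy numbers, purely illustrative: with n bits of memory there are 2**n
# distinct representable states, but the data actually held at any one
# moment is just n bits -- linear, not exponential.

def state_count(bits: int) -> int:
    """Number of distinct configurations an n-bit memory can be in."""
    return 2 ** bits

def data_capacity(bits: int) -> int:
    """How much data the memory holds at any single moment (in bits)."""
    return bits

small, large = 10, 100  # tiny stand-ins for "1 GB" and "10 GB"

print(f"state-space ratio: {state_count(large) / state_count(small):.3e}x")
print(f"capacity ratio:    {data_capacity(large) / data_capacity(small):.0f}x")
# state-space ratio comes out around 1.2e+27x, capacity ratio 10x -- the
# Chrome-tabs and users-per-server intuitions track the linear quantity,
# not the exponential one.
```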
It could be argued that, with unlimited time, each new bit of representation space lets you have an ever-larger cycle of events (say, imagine an algorithmic movie that can get exponentially longer, without repeating itself, as you increase its binary size). Basically, you get to reorder and shake up things in exponentially more ways through time, although each state, moment, or realization is still limited by your number of bits. Even so: (1) time to first repetition isn’t obviously (to me) what should define value, although it certainly should influence it; (2) if we’re referring to the theoretical case as we approach the limits, then this exponentially longer cycle would still probably require exponentially more energy. In that case value becomes energy-bounded again, which is also physically bounded, as mentioned in this post.
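Here is a similar minimal sketch of that cycle argument, assuming the simplest possible non-repeating cycle (an n-bit binary counter) and an arbitrary fixed energy cost per step; both assumptions are mine, purely for illustration.

```python
# An n-bit counter visits all 2**n states before repeating, so the time to
# first repetition doubles with every added bit, while each individual state
# is still only n bits.  With a fixed energy cost per step (arbitrary units),
# traversing the whole cycle also costs exponentially more energy.

ENERGY_PER_STEP = 1.0  # arbitrary units; the exact constant doesn't matter

def cycle_length(bits: int) -> int:
    """Steps before an n-bit counter repeats a state."""
    return 2 ** bits

for bits in (8, 16, 24, 32):
    steps = cycle_length(bits)
    print(f"{bits:2d} bits -> {steps:>13,d} steps before repeating, "
          f"~{steps * ENERGY_PER_STEP:.2e} energy units to traverse")
# Each extra bit doubles both the cycle length and the total energy needed to
# actually "play the whole movie" -- the energy-bounded point above.
```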