A Sketch of Answers for Physicalists

Epistemic Status: Just quickly sketched out some rough solutions

Jessicata recently wrote a post, Puzzles for Physicalists. I’m not a physicalist, but I’ll attempt to answer these questions from a physicalist perspective anyway.

Indexicality:

Jessicata argues that indexicality is fundamental (without defining the exact sense in which she means fundamental) because our attempts to define things objectively don’t actually succeed. For example, the indexical “my phone” could be expanded to the objective-looking “Chris Leong’s phone”. However, this still needs to be defined relative to something, because there could be an exact clone of me somewhere in the universe.

I agree that the only way to refer to features of the world without encountering these non-uniqueness problems is relatively, so we never really encounter anything objectively. However, everyone already kind of knows that we can’t definitively show the existence of any objective reality behind our observations and that we can only posit it. This isn’t exactly news.

Pre-Reduction References

Jessicata argues that even though we often define water as H2O, if the chemical composition of the substance with the properties we associate with water had been XYZ instead of H2O, we would have defined water as XYZ. She then argues that a) before we knew the science we still had an initial pre-reduction definition of water, and b) a philosophical account of science should contain these pre-reduction definitions so that we can describe how they get attached to scientific definitions.

Why should these be hard to account for? Consider water. For simplicity we’ll pretend that its pre-reduction definition consists of “feels wet”, “transparent” and “behaves like a liquid”. Let’s zoom in on “transparent”, since if we can explain one of these, we can most likely explain the others too. Just like water, “transparent” has a high-level and a low-level definition, and to fully understand it we need both. The high-level definition may contain things like “see”, “object” and “light” as primitives, so that we can define an object as transparent if we can see another object on the other side of it. The low-level definition describes the exact physics, from the light passing through the object to the signals in your brain. The high-level definition is just a description of logical relations, while the low-level definition is physical. So both will occur in a physicalist account.
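To make the two-level structure concrete, here is a minimal toy sketch of my own (nothing like it appears in the original post); names such as transmission_fraction and can_see_through are invented stand-ins. The point is just that the high-level definition is stated as a logical relation, while whatever sits underneath is a placeholder for the low-level physics.

```python
# Toy illustration only: a high-level "transparent" predicate defined as a
# logical relation over a stand-in for the low-level physics.

def transmission_fraction(material: str) -> float:
    """Low-level stand-in: the fraction of incident light that passes through.

    A real physicalist account would derive this from the physics of the
    material; here it is just a lookup table for illustration.
    """
    table = {"glass": 0.92, "water": 0.90, "wood": 0.0, "brick": 0.0}
    return table.get(material, 0.0)


def can_see_through(material: str) -> bool:
    """High-level relation: an observer can see an object on the other side
    if enough light makes it through."""
    return transmission_fraction(material) > 0.5


def transparent(material: str) -> bool:
    """The high-level definition, stated purely in terms of the relation above."""
    return can_see_through(material)


if __name__ == "__main__":
    for material in ["glass", "water", "wood"]:
        print(material, "transparent?", transparent(material))
```

Swapping in a genuinely physical model for transmission_fraction wouldn’t change the high-level definition at all, which is the sense in which both levels belong in the account.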

Epistemic Status of Physics

The argument here is that in order to explain our justification of physics we need a concept of agents; otherwise we won’t be able to talk about running or observing experiments. Fair enough, but whether observations imply agents depends on what is meant by “observation”. If we imagine that observations have a qualia-like element, then we can’t model them with science. On the other hand, if they don’t, then we can apply the same double-model trick as above. The high-level relational model has something to do with information processing; again, we can define this as a logical relation. The low-level model contains a physical description of the object observed, the transmission of light to the observer, the mechanism of the eyes and a description of the observer’s mind. This would explain the entanglement of the object and the observation.
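As a rough illustration of that last point (again my own sketch, with purely hypothetical names like low_level_chain), the low level is a causal chain from the object’s state to a brain state, and the high-level relation “an observation occurred” just asks whether the two ended up correlated:

```python
# Toy sketch: observation as a high-level relation over a low-level causal chain.
import random


def low_level_chain(object_state: int, noise: float) -> int:
    """Stand-in for the physics: object -> light -> eye -> brain state.
    With probability `noise`, the signal is corrupted along the way."""
    if random.random() < noise:
        return 1 - object_state
    return object_state


def observes(object_state: int, brain_state: int) -> bool:
    """High-level relational definition: an observation occurred if the brain
    state matches (is entangled with) the object's state."""
    return brain_state == object_state


if __name__ == "__main__":
    random.seed(0)
    trials = 1000
    hits = sum(
        observes(obj, low_level_chain(obj, noise=0.05))
        for obj in (random.randint(0, 1) for _ in range(trials))
    )
    print(f"observation relation held in {hits}/{trials} trials")
```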

Anthropics:

She argues that physics on its own doesn’t have the language to say what an observer is. If qualia existed then we could just say an observer is something that experiences qualia, but without qualia we need to find another definition of observer. I agree that given a physical system on its own we can’t define what counts as an observer. Observers are an information-processing concept, so in order to define them we need an interpretation scheme that lets us understand how a physical system is processing information.

So given a physical system containing a human, how do we figure out which interpretation scheme to use? Well, humans need to interact with the world. We could hypothetically look at a bunch of humans, see how they interact with the world, scan their brains and then define an interpretation scheme for the information in their brains. Given such a scheme, we could then define observations as information-update operations that accurately track the real world, and observers as systems that are efficient at making observations.
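Here is a rough sketch of how those definitions might fit together (my own illustration under toy assumptions; names like is_observation and observation_rate are invented): an interpretation scheme maps a system’s physical state to a belief, an update counts as an observation if it makes that belief more accurate about the world, and a system is more observer-like the more reliably its updates count as observations.

```python
# Toy sketch: interpretation schemes, observations, and observer-likeness.
import random
from typing import Callable

# An interpretation scheme maps a physical state to a represented belief.
InterpretationScheme = Callable[[int], int]


def is_observation(world: int, before: int, after: int,
                   interpret: InterpretationScheme) -> bool:
    """An update counts as an observation if the interpreted belief becomes
    strictly more accurate about the world."""
    error_before = abs(interpret(before) - world)
    error_after = abs(interpret(after) - world)
    return error_after < error_before


def observation_rate(step: Callable[[int, int], int],
                     interpret: InterpretationScheme,
                     trials: int = 1000) -> float:
    """How often the system's updates count as observations: a crude proxy
    for being 'efficient at making observations', i.e. an observer."""
    hits = 0
    for _ in range(trials):
        world = random.randint(0, 10)
        before = random.randint(0, 10)
        after = step(world, before)
        hits += is_observation(world, before, after, interpret)
    return hits / trials


if __name__ == "__main__":
    random.seed(0)
    identity = lambda state: state           # the simplest interpretation scheme
    looker = lambda world, state: world      # a system that updates toward the world
    rock = lambda world, state: state        # a system that never updates
    print("looker:", observation_rate(looker, identity))
    print("rock:  ", observation_rate(rock, identity))
```

A system that updates toward the world (looker) scores highly, while one that never updates (rock) scores zero, which is the intended contrast.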

Functionalism:

The argument is that functions like the hammer function are relative to observers: a hammer for humans isn’t the same as one for an octopus. Similarly, the functions of a mind are relative to a user, and so what counts as planning, observation, etc. is relative. I’m not a fan of functionalism at all, but I already explained how to define observations without making them relative to each human. It’s not so hard to generalise other mental operations in exactly the same way.

Causality vs. Logic:

I’ve broken down this conflict here. We really shouldn’t expect counterfactual “coulds” and logical “coulds” to be the same just because we use the same word for both.

Final Thoughts

In the end, it didn’t actually turn out to be that challenging to produce plausible solutions to all of these problems. I’m not claiming that any of these solutions are definitely correct; indeed, they’d probably all need work since I just quickly sketched them out. But the point is that it isn’t at all clear that any of these issues are really problematic.