So… when do we get to the place where we aren’t using objects to explain how the impression of objects arises?
You’re very clever, young man, very clever. But it’s objects all the way down.
What you call “reification”, I call “modeling”. This is what it feels like to be an algorithm (at least, THIS algorithm; who knows about others?) that performs compression-of-experience and predictive-description-based decision-making. On many scales, it really does seem to work to do rough math about movement, behavior, and future local configurations based on aggregates and somewhat arbitrary clusterings of perceptions.
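As a toy illustration (not a claim about how brains actually do it), here’s a minimal Python sketch of that move: compress a swarm of “perceptions” into one reified “object” (its center of mass), fit a cheap motion model to the object, and check that the compressed model still predicts the aggregate future. All the names and numbers here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Underlying reality": 50 particles drifting together with individual jitter.
n, steps = 50, 100
drift = np.array([0.1, 0.05])
positions = rng.normal(0.0, 0.1, size=(n, 2))

history = []
for _ in range(steps):
    positions = positions + drift + rng.normal(0.0, 0.02, size=(n, 2))
    history.append(positions.copy())
history = np.array(history)  # shape: (steps, n, 2)

# "Reification": compress the whole cloud into one object, its center of mass.
centers = history.mean(axis=1)  # shape: (steps, 2)

# "Prediction": fit a linear motion model to the first half of the object's
# track and extrapolate it over the second half.
t = np.arange(steps)
half = steps // 2
coeff_x = np.polyfit(t[:half], centers[:half, 0], 1)
coeff_y = np.polyfit(t[:half], centers[:half, 1], 1)
predicted = np.stack(
    [np.polyval(coeff_x, t[half:]), np.polyval(coeff_y, t[half:])], axis=1
)

# The compressed "object" model predicts the aggregate well, even though it
# says nothing about any individual particle.
err = np.linalg.norm(predicted - centers[half:], axis=1).mean()
print(f"mean prediction error of the 'object' model: {err:.4f}")
```

The payoff of the somewhat arbitrary clustering is exactly this: four numbers (position and velocity of one “object”) predict the future of the aggregate about as well as tracking all fifty particles would.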
Brains, to the extent that it’s useful to think of them as a class (another non-real concept) of things (unreal, as you say), are local configurations of universe-stuff that do this compression and prediction. When executing, they process information at a level quite different from the underlying quantum reality.
The universe IS (as far as any subset of the universe, like a brain, can tell) swirling magical reality fluid. A thing is its own best model, and this includes both “the universe” and “the portion of the universe in any given entity’s lightcone”. Brains are kind of amazing (aka magical, aka sufficiently advanced technology) in that they make up models and stories which seem to make sense of at least part of what they experience, at some levels of abstraction. Note that they ALSO hallucinate abstractions of configurations that don’t occur (counterfactuals) or are outside their lightcones (for MWI, FAR outside).
I think reductionism is a very useful model at many (MANY!) levels of abstraction, but I have to admit to believing (also a synonym of “modeling”) that, taken far enough, there is a mysterian underlayer that is un-measurably and un-modelably complex. How deep that lies is unknown; our current abilities are probably seeing mysteries that can eventually become models, even if truly un-modelable layers sit much deeper. Scientific progress consists of peeling the onion one more level, and we will ALWAYS find mystery at the next level, which we will scratch at until we either dissolve the mystery into our models, or … something. It’s not even clear we’d be able to tell when we reach “the bottom”.
[ Epistemic status: I think there’s some valid modeling in what I say, but I don’t claim it’s complete or perfectly coherent. ]