Scientist by training, coder by previous session, philosopher by inclination, musician against public demand.
Team Piepgrass: “Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I’m wrong.”
What happens if you care about reality per se?
Nowadays, because the lowest-hanging fruit has all been picked long ago.
Not in the sense of actually solving epistemology, ethics, consciousness etc. There's a lot of work to be done, but it involves backtracking... admitting you were wrong and trying a different approach... and few are interested in that.
Indeterminism being false would allow libertarian free will to be possible. Libertarian free will is the poor relation in lesswrong thinking, usually dismissed without argument. And it can coexist with compatibilist free will.
The voting system can't fix the people doing the voting. Clearly what they want is at least one of the (usual content, usual style)... the same old same old. What they don't want is to be challenged or stretched.
My answer is that this is related to the problem of “distillation”. The world is already difficult. If we make things more difficult by not communicating as clearly as we could
That’s a great idea, and you, collectively, should try it. Just look at how badly you are communicating the central claim of AI threat.
Almost everything here is written in a needlessly complicated way. For instance, you don't have to write "utility function" when you mean "preferences".
J.L. Mackie's "argument from queerness" is moral philosophy 101.
Free will isn't unconditionally like this, because everything you have said is conditional on determinism, which is not known to be true. So there is no need to abandon the incompatibilist definition. Or assume that there can only be one definition.
I was using the term loosely
Yet calling for literalism!
My point is that the current objectives do not reliably reflect human moral values
Assuming there is such a coherent entity. And assuming that it is still a problem when the AI is not an agent.
The historic AI doom arguments have a problem: they assume a bunch of things which aren't necessarily true. And many renditions of them for public consumption have a further problem: they gesture towards these assumptions as though they are widely accepted when they are not. The general public will reject an argument using the term "utility function" because they don't know what it is; and those knowledgeable about AI will reject it because they do... in their eyes, you are saying something false. But you need to come up with arguments that are valid before you worry about the PR issue.
Most terms have multiple definitions. Probability/possibility can be thought of as a subjective, ignorance-based thing... or as a basic feature of the universe.
"Free will" has multiple definitions, too. So you can't necessarily support any particular version of FW with any particular definition of probability/possibility.
To reiterate this crucial point: probabilities are statements about you, not statements about the world
Not a fact. Ignorance, or Knightian uncertainty, can be part of the whole mishmash of probability, but its existence doesn't prove the nonexistence of real, in-the-territory possibilities. Determinism is not a physical fact.
But there is also a meaningful subjective statement about the world, an objective statement about our knowledge of the world, that the coin has a 50% chance of heads and a 50% chance of tails.
What's that got to do with free will? If you see free will as an ability to steer to a future that you want to happen, and away from other possible futures... as an ability to effect or change things... subjective uncertainty is no help at all.
To put it another way, if free will is an inherent capacity of some entities, then they are not going to acquire it as a result of your own incapacity.
Other definitions of free will are possible... but it's already known that compatibilist FW is compatible with the determinism you are assuming.
Why on earth has this been downvoted?
Current utility functions do not optimise toward values in which humans are treated as morally valuable.
You mean "utility functions" of current AIs? How literally do you mean UF? Current AIs don't have UFs in the technical sense.
Therefore Mary's incomplete knowledge about consciousness doesn't have metaphysical implications, because it is incomplete in fundamentally the same way.
No it isn't. Mary doesn't know what Red looks like. That's not know-how.
Mary doesn’t know how to ride, and therefore has incomplete understanding of riding. What’s the difference?
Things can be incomplete in different ways
Both need instantiation for what?
For gaining potential utility from specific knowledge representations, for knowledge that feels intuitively complete.
Theoretical knowledge isn’t about utility.
I mean all of them: if physicalism explains riding a bike
It doesn’t , in the sense that the theoretical knowledge gives you the know-how. That’s one of your own assumptions.
The analogy is that they both need instantiation
Both need instantiation for what?
So physicalism is false, because physical knowledge is incomplete without know-how.
That’s kind of munchkinning. Even if it’s incomplete in that way, it doesn’t have metaphysical implications.
Sure, they are different physical processes. But what's the relevant epistemological difference?
Mary doesn't know what colour qualia look like, and therefore has an incomplete understanding of consciousness. As stated in all versions of the story.
Riding bikes? How they work? How they appear?
Yes.
Unhelpful.
Again, this is false: it is just as necessary in the case of riding.
Riding is doing, not understanding.
The knowledge of representation is the whole thing! Qualia are appearances!
If you want to define things that way, ok.
As before, that's the standard definition.
So Mary’s room implies that bikes are as unphysical as qualia.
Qualia aren’t defined as unphysical.
Bikes aren’t appearances , so there is no analogy.
It bypasses what you are calling representation … you have admitted that.
Mary also doesn’t have all representations for all physical knowledge. She doesn’t have to have a concept of fire, or equations in all possible notations, or riding skills.
Of course she knows what fire is; she is a super-scientist.
Know-how, such as riding skills, is not an appearance, or physical knowledge.
That doesn’t mean there isn’t a difference between different kinds of knowing.
Mary’s room doesn’t provide motivation for there being a fundamental difference between knowing how to ride a bike and knowing what it is like to see red.
Nonetheless, there is a difference.
And physicalism explains bikes, right?
Riding bikes? How they work? How they appear?
The physics equations representing a brain don’t contain qualia then, since they don’t exist as a brain.
Yes, of course, like they don’t contain atoms or fire or whatever.
But in most cases, that doesn’t matter, for the usual reason.
Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.
That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something. If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion?" could only be answered "yes, of course... didn't you just say she knows everything?". The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis won't make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.
She already knew what red looks like, the knowledge just was in a different representation.
What it looks like is the representation! A different representation just isn't a quale. #FF0000 just isn't a red quale!
Just like with knowing how to ride a bike. “no new”,
But reading a book on riding a bike isn't knowing how to ride a bike... you get the knowledge from mounting a bike and trying!
like everything here, depends on definitions. But she definitely undergoes physical change, a change that may be viewed as her gaining a new representation of knowledge, which may be valuable for her.
The knowledge of representation is the whole thing! Qualia are appearances!
Without additional assumptions, predicting pixels doesn't bypass anything.
It bypasses what you are calling representation … you have admitted that.
there is a difference between being and knowing.
That doesn’t mean there isn’t a difference between different kinds of knowing.
I mean that from qualia requiring you to exist in a specific state you can make a jump to “consciousness is existence”.
The physics equations representing a brain don’t contain qualia then, since they don’t exist as a brain.
It isn’t that simple, because they need a motivation to use it.
The answer to "What part of the physical equations says what red looks like?" is just "the part that describes the brain".
Expand on the "says". If Mary looks at these equations, in her monochrome room, does she go into the brain state that instantiates seeing something red? Does she somehow find out what red looks like without that? Neither?
Mary’s situation only implies that you should track the difference between knowing and being
What does that mean? Are you saying Mary already knew what red looks like, and instantiating the brain state adds no new knowledge?
which I guess is a hint that panpsychism solves HP) -
Why?
Mary has a reductive+predictive theory of qualia, which is confirmed by her ability to predict every pixel of her future experience.
Mary can "predict pixels" in some sense that bypasses her knowing what colour qualia look like. Just as a blind person can repeat, without understanding, that tomatoes look red, Mary can state that such and such a brain state would have an RGB value of #FF0000 at such and such a pixel. #FF0000 is a symbol for something unknown to her, just as much as r-e-d. So it's not a prediction of a quale in the relevant sense.
According to decoherence theory, all measurement is decoherence, but not all decoherence is measurement. So the universe doesn’t particularly care about measurement as an act performed by humans.
If the simulators are interested in conserving resources, why would they make the universe so big? A mediaeval cosmology would have done.
They don't spread much faster compared to "winning" branches, I guess.
They don't spread faster, they spread wider. Their low-amplitude information is smeared over an environment already containing a lot of other low-amplitude information, noise in effect. So the chances of recovering it are zero for all practical purposes.
With a remark that “decoherent branching” and “coherent branching” are presumably just one process differing in how much the information is contained or spreads out
Well, no. In a typical measurement, a single particle interacts with an apparatus containing trillions, and that brings about decoherence very quickly, so quickly it can appear like collapse. Decoherent branches, being macroscopic, stable and irreversible for all practical purposes, are the opposite of coherent ones.
"The" simulation hypothesis is a range of hypotheses. At one extreme is a hypothesis that can be ruled out: our universe is not a fully detailed simulation running on a parent universe of exactly the same type and size. The laws of computation do not allow it. Something has to give. Either the simulated universe is somehow approximated, or the simulating universe, the parent, is larger than ours... or it's just different. If we are in a simulation, we could be wrong, even deliberately fooled, about the laws of physics or computation. At the other extreme, it might not even be computationally expensive to simulate our universe, because the parent universe is much larger, or its physics allows hypercomputation, or computation isn't even how it's being done.
Bostrom-style arguments assume a basic similarity between simulated and simulating universes. Computational irreducibility counts against them more than against the other kind, where the simulating universe is large or weird. But those kinds are less motivated.
Which problem are you trying to solve? What metaethics is, or what rational behaviour is?