I’m surprised that you say this is hard? Humans maintain our goals across ontologies super easily; it’s barely an inconvenience for us. Like, physics undergrads don’t usually change their tastes in art or stop having sex after taking their intro to quantum mechanics course. I guess one could argue that’s because we have a special sauce that neural nets don’t yet have or something?
“Super easily”? I would say it depends. Not if the ontology shift is believing or not believing in an all-powerful, all-good creator God! That can sure change people’s goals and values. Some ontology changes make no difference; others make a huge difference.
The greater the intelligence increase, the more likely an agent (human or AI, I expect) will experience an ontology change that causes a goal shift, and the more total ontology shifts you can expect. Shifts related to personal identity (what is “I”: atoms vs. computation, etc.) seem more likely to cause goal shifts than, say, learning that solid objects are in fact forces interacting.
So if we are being formal, it’s:
Significant increase in intelligence → many ontology shifts → some of these cause goal shifts.
I would just like to mention that “solid objects are in fact forces interacting” is massively underselling the size of the ontology shift associated with quantum mechanics to a degree that’s a bit hard to describe to someone who hasn’t studied it. It’s more like:
Fundamental physics no longer determines a single history where a particular series of things happens. It’s now the case that, even at a fundamental level, many different things all happen, and physics describes how relatively “real” each of those things is.
One consequence of this for our own universe, where entropy is increasing over time, is that the universe no longer has one history, but an exponential branching tree of histories, all of varying weights (numbers that describe how important we should consider the events in each of them). Oh, and the tree is emergent by the way, it’s not even a base component of the ontology.
The closest mathematical framework we previously had that kind-of worked this way was probability theory, and that was just meant to track our subjective uncertainties about things, not describe reality itself. But quantum mechanics doesn’t actually follow the rules of probability theory. It’s some kind of warped, twisted version of probability in which the probabilities are replaced by complex numbers called “amplitudes” (an outcome’s probability is the squared magnitude of its amplitude). Because amplitudes can have opposite signs, it’s possible that adding another way for something to happen can reduce the chances of it happening.
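The sign-cancellation point can be sketched in a few lines of numpy. This is a toy illustration of the arithmetic, not tied to any particular physical setup:

```python
import numpy as np

# Classically, adding a second way for an event to happen can only
# increase its probability. With amplitudes, the second path can carry
# the opposite sign and cancel the first.

path_a = 1 / np.sqrt(2)                      # amplitude of one path
prob_one_path = abs(path_a) ** 2             # probability ≈ 0.5

path_b = -1 / np.sqrt(2)                     # a second path, opposite sign
prob_two_paths = abs(path_a + path_b) ** 2   # amplitudes add FIRST: 0.0

print(prob_one_path, prob_two_paths)
```

Note the order of operations: amplitudes are summed before squaring, which is exactly where the departure from probability theory lives.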
In regular probability theory (or, say, classical physics), you could describe a reversible map (like a symmetry transformation or time evolution) on a 10-state system by writing down a permutation. This initially seems like the only way it could be: you have ten possible states, and whatever you do has to be reversible, so all you can do is permute them, which you could represent with a 10×10 permutation matrix. But in quantum mechanics, time evolution and symmetry transformations are represented by unitary matrices, not the permutations you’d guess they should be!
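As a sketch of the permutation-vs-unitary contrast, here is the standard Hadamard matrix: a unitary that is reversible yet is not a permutation of basis states.

```python
import numpy as np

# Classically, a reversible map on basis states is a permutation:
# a 0/1 matrix with one 1 per row and column. In QM, reversible
# evolution is any unitary U (U @ U.conj().T == I); its entries
# need not be 0 or 1.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard matrix

assert np.allclose(H @ H.conj().T, np.eye(2))  # unitary, hence reversible
assert not np.allclose(H, np.rint(H))          # entries aren't integers: not a permutation

state = np.array([1.0, 0.0])          # a definite basis state
superposed = H @ state                # mapped to an equal superposition
recovered = H.conj().T @ superposed
assert np.allclose(recovered, state)  # and the map can be undone exactly
```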
Speaking of symmetry: Any non-trivial object that has all the rotational symmetry of a sphere (such as, for example, a sphere) must have infinitely many points. Except in QM, where the smallest such object has 3 points. Except that QM can tell the difference between a 360 degree rotation and doing nothing. (Not 720 degree rotations though. Those are still just like doing nothing, so at least there’s that.) As a consequence, the smallest non-trivial object that has all the rotational symmetry of a sphere actually has 2 points.
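The 360-vs-720 degree point can be checked directly on a two-component spin-1/2 state, using the standard rotation matrix about the z axis:

```python
import numpy as np

# Rotation of a spin-1/2 state by angle theta about the z axis:
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

# A 360-degree rotation is NOT the identity: it flips the state's sign.
assert np.allclose(rz(2 * np.pi), -np.eye(2))

# A 720-degree rotation IS the identity again.
assert np.allclose(rz(4 * np.pi), np.eye(2))
```

The half-angles in the exponents are what make a full turn come out as −1 rather than +1.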
Because of the way QM works, it’s actually possible to exploit it to perform some kinds of computations faster than seems to be possible classically. This is also kind of weird.
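For a concrete toy instance of that weirdness, here is Deutsch’s algorithm in numpy: it decides whether a one-bit function is constant or balanced with a single application of the oracle, via interference. (Caveat: this classical simulation builds the oracle matrix from both f(0) and f(1), so it only illustrates the mechanism; the actual speedup claim is about quantum hardware.)

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard matrix

def is_constant(f):
    """Deutsch's algorithm: decide if f: {0,1} -> {0,1} is constant,
    applying the phase oracle |x> -> (-1)^f(x) |x> only once."""
    oracle = np.diag([(-1.0) ** f(0), (-1.0) ** f(1)])
    state = H @ (oracle @ (H @ np.array([1.0, 0.0])))
    # Interference leaves all amplitude on |0> iff f is constant.
    return abs(state[0]) ** 2 > 0.5

assert is_constant(lambda x: 0)          # constant
assert is_constant(lambda x: 1)          # constant
assert not is_constant(lambda x: x)      # balanced
assert not is_constant(lambda x: 1 - x)  # balanced
```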
EDIT: Made a few changes to this for clarity & accuracy based on Justin Sheek’s comment. (Thanks Justin!) List of edits:
Rewrote first sentence from “physics no longer describes what can happen” (misleading and just plain wrong) to its current form. I knew what I was trying to say here, but goofed on converting it into words. Sorry everyone.
Specified that we’re talking about fundamental physics here (since stat-mech does also involve assigning weights to various configurations).
Added paragraph break and “One consequence of this for our own universe, where entropy is increasing over time” to hopefully clarify that this part is talking about many worlds, and does not apply to every system that obeys quantum mechanics.
The bit about maps / functions was originally overstated for rhetorical reasons. This is probably not super defensible or helpful when describing a technical topic, so I’ve rewritten it to be more serious and direct.
I believe all that is written here is now something I can defend.
Oof, the amount of misinformation on QM even here on LW is staggering.
Physics no longer describes what can happen.
This is straightforwardly false. Maybe you meant to say “Physics no longer describes what definitely happens”? Still misleading, as that was already the case with statistical mechanics within the ontology of Boltzmann and Gibbs 50 years earlier.
Oh, and the tree is emergent by the way, it’s not even a base component of the ontology.
Coherent phenomena are definitely part of the base ontology of QM. The density matrix encodes the ensemble. (If by “the tree” you didn’t mean the ensemble, then your statement would make even less sense to me).
...QM will tell you that what it means to be a function has changed and you have to do it differently now.
No. QM has no bearing on “what it means to be a function”. Maybe you mean “QM encodes permutations in a surprising way”?
Except that QM can tell the difference between a 360 degree rotation and doing nothing. (Not 720 degree rotations though. Those are still just like doing nothing, so at least there’s that.)
Strictly speaking this is only sometimes true. It seems like you are alluding to the spin-statistics theorem or maybe the Aharonov-Bohm effect or Berry phase. Your quoted statement is specifically applicable only to fermionic states. It’s inapplicable to bosons or more exotic states like anyons (FQHE) or braid statistics.
Because of the way QM works, it’s actually possible to exploit it to perform some kinds of computations faster than seems to be possible classically. This is also kind of weird.
Indeed.
Thanks for the notes. I’ve made a few edits to my comment above based on this.
Also, for the benefit of the folks reading this: I’m not alluding to spin-statistics or Berry phase, merely the use of SU(2) instead of SO(3) as the group of rotational symmetries.
I don’t understand—are you saying that taking a college course makes undergrads orders of magnitude smarter?
Finding out about quantum mechanics is a classic example of an ontology shift. You wrote “maintain aligned goals across ontologies”. If you actually meant “maintain aligned goals across orders of magnitude increases in intelligence”, then okay, but that’s a different thing.
From the above essay. Seems fairly clear to me.
Here strong orthogonality looks too neat. It imagines the agent’s ontology updating while its final target remains untouched by the update: if goals are expressed in an ontology, and intelligence changes the ontology, then intelligence and goals are correlated.
If students don’t change their goals when their ontology changes, but you expect that they will change their goals when they gain orders of magnitude in intelligence, that suggests that the thing that results in a change of goals is a large increase in intelligence, not an ontology change. This is true even if we put an arrow going from “intelligence increase” to “ontology change” in the causal graph.
Im sorry, can you point to the line where I claim otherwise

Sure.
Here where you’re describing difficult things about maintaining a long term paperclipping goal:
Keep the macroscopic concept of “paperclip” coherent across massive ontology shifts.
Also here, where you’re describing things that would update you:
If increasingly capable models perfectly preserve their literal training targets across major ontology shifts, that is a point for empirical orthogonality.
Sorry, what are we doing here? You have quoted the second point of a list, which clearly included intelligence as the cause of such ontology shifts.

FYI, I will not interact further since this is clearly preposterous
I mean, it seems pretty preposterous from my perspective too.
You propose a causal model: Intelligence -> Ontology Shifts -> Value Shifts
I question the Ontology Shifts -> Value Shifts part of the model, and provide a counterexample.
You then express concern that my example didn’t have the Intelligence variable.
I am confused. “Maybe he actually meant to specify an Intelligence -> Value Shifts causal model? Otherwise, why would he care that my example didn’t have an Intelligence variable?” I think. I ask about it.
You say no, drop a quote that confirms that the original model is the one you’re thinking of.
Given confirmation that you’re going for Intelligence -> Ontology Shifts -> Value Shifts, I try to explain how my example is indeed a problem for your model. There is a model consistent with the QM counterexample, with the students needing to be super-intelligent to have their values shifted, and with intelligence causing ontology shifts, namely Intelligence -> (Ontology Shifts, Value Shifts). (In words, highly increased intelligence separately causes both effects.) This model (like any model consistent with the counterexample) contradicts the one you describe. I try to point out the contradiction.
You: “Im sorry, can you point to the line where I claim otherwise”
I think “wait what? Is he claiming that this new thing was his model all along? I thought he already confirmed the other one.” I drop the quotes, specifically ones focusing on the Ontology Shifts -> Value Shifts part of the model, for lack of a better idea of what to do, and since you did make a direct request.
You: But I also have an Intelligence -> Ontology Shifts arrow!
So at this point, I am now even more sure that your model is Intelligence -> Ontology Shifts -> Value Shifts. What I am now unsure about is what else you could possibly have meant by “otherwise”, and still separately, why you think the students needed to have IQ 1000 or whatever.
I am certain that your explanations of these questions and of your side of this exchange must be fascinating. However, I also don’t mind ceasing to interact with you, since this was equally absurd on my end, and in addition you seem to have downvoted each of my replies in this thread, which makes talking to you sadly unprofitable for a karma whore such as myself.
cool