I'm sorry, can you point to the line where I claim otherwise?
Sure.
Here where you're describing difficult things about maintaining a long-term paperclipping goal:
Keep the macroscopic concept of “paperclip” coherent across massive ontology shifts.
Also here, where you're describing things that would update you:
If increasingly capable models perfectly preserve their literal training targets across major ontology shifts, that is a point for empirical orthogonality.
Sorry, what are we doing here? You have quoted the second point of a list, which clearly included intelligence as the cause of such ontology shifts.
FYI, I will not interact further since this is clearly preposterous.
I mean, it seems pretty preposterous from my perspective too.
You propose a causal model: Intelligence -> Ontology Shifts -> Value Shifts
I question the Ontology Shifts -> Value Shifts part of the model, and provide a counterexample.
You then express concern that my example didn't have the Intelligence variable.
I am confused. "Maybe he actually meant to specify an Intelligence -> Value Shifts causal model? Otherwise, why would he care that my example didn't have an Intelligence variable?" I think. I ask about it.
You say no, and drop a quote that confirms that the original model is the one you're thinking of.
Given confirmation that you're going for Intelligence -> Ontology Shifts -> Value Shifts, I try to explain how my example is indeed a problem for your model. There is a model consistent with both the QM counterexample, and the students needing to be super-intelligent to have their values shifted, and with intelligence causing ontology shifts, namely Intelligence -> (Ontology Shifts, Value Shifts). (In words, highly increased intelligence separately causes both effects.) This model (like any model consistent with the counterexample) contradicts the one you describe. I try to point out the contradiction.
You: "I'm sorry, can you point to the line where I claim otherwise?"
I think "Wait, what? Is he claiming that this new thing was his model all along? I thought he already confirmed the other one." I drop the quotes, specifically ones focusing on the Ontology Shifts -> Value Shifts part of the model, for lack of a better idea of what to do, and since you did make a direct request.
You: But I also have an Intelligence -> Ontology Shifts arrow!
So at this point, I am now even more sure that your model is Intelligence -> Ontology Shifts -> Value Shifts. What I am now unsure about is what else you could possibly have meant by "otherwise", and still separately, why you think the students needed to have IQ 1000 or whatever.
I am certain that your explanations of these questions and of your side of this exchange must be fascinating. However, I also don't mind ceasing to interact with you, since this was equally absurd on my end, and in addition you seem to have downvoted each of my replies in this thread, which makes talking to you sadly unprofitable for a karma whore such as myself.
cool
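A minimal sketch of the model comparison argued above, assuming each causal arrow is read deterministically (parent true implies child true). The function names and the boolean encoding are illustrative only, not anything posted in the thread:

```python
# Compare the chain model Intelligence -> Ontology Shifts -> Value Shifts with
# the fork model Intelligence -> (Ontology Shifts, Value Shifts), reading each
# arrow as a deterministic "parent true => child true" implication.

def chain_model(intelligence_boost: bool, learned_qm: bool) -> tuple:
    """Intelligence -> Ontology Shifts -> Value Shifts.
    Ontology shifts can also arrive through ordinary learning (e.g. a QM
    course); the chain's key claim is that the ontology shift itself
    shifts values."""
    ontology_shift = intelligence_boost or learned_qm
    value_shift = ontology_shift
    return (ontology_shift, value_shift)


def fork_model(intelligence_boost: bool, learned_qm: bool) -> tuple:
    """Intelligence -> (Ontology Shifts, Value Shifts).
    The intelligence boost separately causes both effects; an ontology shift
    picked up some other way carries no value shift with it."""
    ontology_shift = intelligence_boost or learned_qm
    value_shift = intelligence_boost
    return (ontology_shift, value_shift)


# The QM-student counterexample: an ontology shift acquired without a huge
# intelligence boost, and no observed value shift.
observed = (True, False)

print("chain consistent with the QM students:", chain_model(False, True) == observed)  # False
print("fork consistent with the QM students:", fork_model(False, True) == observed)    # True
```

Under this deterministic reading, the chain predicts a value shift for the QM students (contradicted by the observation), while the fork does not.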