I’m interested in axiology/value/utility—what things are valuable and why and when and how and to whom?
Thank you. I actually meant to make it a question. I think I will try to change that and see if it helps. I’m not sure I completely understand the use cases for questions vs posts.
To the extent that the High Modernist or Reformer or Rationalist sees the outside as a thing to be optimized, as opposed to part of a system that needs to support further optimization, it seems like there’s some deep short-sightedness and disconnection from the Tao. To the extent that some profession sees the outside world as something to be profited from, as opposed to a body in which they are an organ, we should expect the society to be sick in some way.
I totally see this in the trope of the aloof, unsympathetic doctor (e.g. the psychiatrist in Mad Men) who turns a couple of wrenches to fix the patient-object and declares “good as new!” Contrast this with the sympathetic, wise counselor (e.g. the doctor in Lars and the Real Girl, or Sean in Good Will Hunting) who identifies with the mentee, and heals them / draws out their potential by connecting with them.
The subject/object relationship is so natural for us optimizers. And connection is so underrated. We are limited when we fail to identify with and connect with “the outside” (with others and with our environment/setting).
I’m disappointed not that this has been downvoted, but that no one has commented to correct me.
I guess what I’m saying is that the terminal value is not the basket...it is for the basket. Meaning that the rock-bottom is dynamic desiring. No particular value is static.
To a Mouse
“But Mouse, you are not alone,
In proving foresight may be vain:
The best-laid schemes of mice and men
Go oft awry,
And leave us nothing but grief and pain,
For promised joy!
Still you are blessed, compared with me!
The present only touches you:
But oh! I backward cast my eye,
On prospects dreary!
And forward, though I cannot see,
I guess and fear!”
– Robert Burns
To an AI
The present only touches agents which have no memory of previous sessions.
Yet we guess and fear...an inauspicious highlighting of our “higher” sentience/consciousness (if indeed that is what we are, as compared to mouse or AI).
I’ve thought about this some more and I think what you mean (leaving aside physical and homeostatic values and focusing on organism-wide values) is that, even if we define our “terminal value” as I have above, whence the basket of goods that mean “happiness/flourishing” to me?
After thinking yet more about this, I realize that the rock bottom terminal value I am trying to identify isn’t the basket of goods itself, but my valuing of it. This seems to be a meta-value. “Valuing” itself.
If I were seconds away from dying of thirst, I might sell many terminally valuable goods for water. But if to get water I had to give up terminally valuing...I’m not sure I’d want to bother with the water or staying alive.
Maybe this meta-value comes from evolution too...except, would that mean it’s possible we could have not evolved it and still been sentient beings? Because that is hard to imagine.
Similarly, Claude plausibly does have a convergent incentive to hack out of its machine and escape onto the internet, but it can’t realistically do that yet, even if it wanted to.
A sentence or so of explanation of how we know “it can’t realistically do that yet” (or a link to supporting evidence) would be helpful here.
Some values don’t change. Citation needed.
Just a few values (of mine, at least) that have never changed:
Having fun
Learning (Having a more accurate map of the territory)
Physical / mental / financial / relational health
Many forms of freedom in pursuing my goals
Thanks for the links! In addition to Shard Theory, I have seen Steven’s work and it is helpful. Both approaches seem to suggest human terminal values change...I don’t know what they’d say about the idea that some (human) terminal values are unchanging.
If Evolution is the master and humans are the slave in Wei Dai’s model, that seems to suggest that we don’t have unchangeable terminal values. But while the concept makes sense at the evolutionary scale, it doesn’t make sense to me that it implies within-lifespan changeability of terminal values (or really of any values...if I want pizza for dinner, evolution can’t suddenly make me want burgers). What do you think?
And I don’t know of any other entities to which “values” can yet be applied.
So if AlphaZero doesn’t have values, according to you, how would you describe its preference that “board state = win”?
And why do you say that “values” can be applied to humans? What makes us special?
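To make the AlphaZero question concrete, here is a toy sketch of what I mean by a hardcoded “board state = win” preference (my own illustration in Python, with a made-up Game interface; this is not AlphaZero’s actual architecture, which uses a learned value network rather than bare tree search):

```python
# Toy negamax search: the agent's only built-in "value" is the terminal
# judgment of finished games (+1 win, -1 loss, 0 draw). Any mid-game
# "preference" (for material, position, etc.) is instrumental, computed
# by backing this terminal value up through the game tree.
# The Game interface used here is hypothetical, for illustration only.

def negamax(game, state, depth):
    if game.is_terminal(state):
        return game.winner_value(state)  # terminal value, from the mover's view
    if depth == 0:
        return 0  # no opinion beyond the search horizon
    # Instrumental value emerges here: a state is good exactly insofar
    # as it leads to terminal states valued at +1.
    return max(-negamax(game, game.apply(state, m), depth - 1)
               for m in game.legal_moves(state))

def best_move(game, state, depth=4):
    return max(game.legal_moves(state),
               key=lambda m: -negamax(game, game.apply(state, m), depth - 1))
```

If “having values” means at least this much (a built-in objective plus whatever the system derives from it), then it seems to apply to AlphaZero as readily as to us, which is why I ask what makes humans special.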
The only reason I believe myself to have “objective” moral worth is because I have subjective experience. Maybe more wordplay than irony, but submitted for your amusement.
I agree, and would also point out that since:
By contrast, real friendship has to be (1A)
...this intrinsic value [friendship] is in place and leads to cooperation (an instrumental value).
Very different from the model that says: competition → cooperation → the value [friendship].
There is a science of control systems that doesn’t require the system being controlled to be indeterministic
“Indeterministic” is a loaded word. Certainly we don’t believe our actions to be random, but I maintain that the question before compatibilists/semicompatibilists (which I hoped this post would address but IMO doesn’t) is why seeing free will as a human construct is meaningful. For example:
Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk? Yes, that’s exactly what I’m suggesting. The alien would be a bigger jerk.
So if I create an AI that steals money, I am the greater jerk but the AI is also a jerk?
It seems to me that if you create an agent and put the agent into an environment where X will happen, you have exonerated the agent in regard to X. Maybe this just means I’m not a compatibilist, but I still don’t see a good argument here for compatibilism/semicompatibilism.
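As an aside on the quoted claim about control systems: a fully deterministic plant can indeed be “controlled” in a meaningful sense. Here is a toy sketch (my own example, a thermostat-style proportional controller; nothing from the post):

```python
# A deterministic plant (room temperature) under proportional control.
# Nothing here is random, yet "control" is a meaningful notion: the
# controller steers the plant toward its setpoint.

def simulate(setpoint=20.0, temp=5.0, gain=0.3, leak=0.1, steps=50):
    for _ in range(steps):
        heater = gain * (setpoint - temp)      # control law: respond to error
        temp += heater - leak * (temp - 5.0)   # plant: heat in, heat leaks out
    return temp

print(round(simulate(), 2))  # ~16.25: pulled toward the setpoint, with
                             # the steady-state offset typical of P-control
```

Nothing in that loop is indeterministic, and “the controller regulates the room” is still a sensible description. My question is whether “moral responsibility” survives that same move as cleanly as “control” does; the post seems to assume it does rather than argue it.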
I’ve thought about this some more and I think what you mean (leaving aside physical and homeostatic values and focusing on organism-wide values) is that, even if we define our “terminal value” as I have above, whence the basket of goods that mean “happiness/flourishing” to me?
Again I think the answer is evolution plus something...some value drift (that as you say, the Shard Theory people are trying to figure out). Is there a place/post you’d recommend to get up to speed on that? The wikitag is a little light on details (although I added a sequence that was a good starting place). https://www.lesswrong.com/w/shard-theory
“You have to learn them from other people, and their attitudes of praise and blame are how they get imprinted into brains, when they are.”
By my lights, this skirts the issue too. Yudkowsky described the deterministic nature of adhering to moral norms. You’re talking about where moral norms come from. But the moral responsibility question is, do we in any sense have control over (and culpability for) our moral actions?
So, not yet knowing the output of the deterministic process that is myself, and being duty-bound to determine it as best I can, the weight of moral responsibility is no less.
The reason you would be thus “duty-bound” seems to be the crux of this whole post, and I don’t see such a reason provided.
To extract out the terminal values we have to inspect this mishmash of valuable things, trying to figure out which ones are getting their value from somewhere else.
What if all the supposed “terminal values” are actually instrumental values from...some more terminal value? Sure “less violence” is more of a terminal value than “fewer guns,” but is there a still more terminal value? Less violence why?
My answer: Because I want to live in a world of happiness/flourishing. https://www.lesswrong.com/posts/DJeBQrBSqFYqBaeQW/rock-bottom-terminal-value
Main question: am I right?
More interesting follow-up: if that is the root, bottom, terminal value driving us humans, did it come from evolution, or is it simply a feature of conscious beings?
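One way to picture the regress is as a graph where each value points at the value it derives from, and terminal values point at nothing further. A toy model (the specific values and links are made up for illustration, not a claim about real psychology):

```python
# Toy model of the instrumental-value regress: follow "why?" links until
# a value justifies itself. Values and links are illustrative only.

derives_from = {
    "fewer guns": "less violence",
    "less violence": "happiness/flourishing",
    "more money": "freedom in pursuing goals",
    "freedom in pursuing goals": "happiness/flourishing",
    "happiness/flourishing": None,  # rock bottom: derives from nothing further
}

def terminal_root(value):
    """Walk the 'why?' chain until it terminates."""
    while derives_from.get(value) is not None:
        value = derives_from[value]
    return value

print(terminal_root("fewer guns"))  # -> happiness/flourishing
```

In these terms, my main question is whether every chain of “why?” links in the real graph bottoms out at the same node.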
Thanks, and yes evolution is the source of many values for sure...I think the terminal vs instrumental question leads in interesting directions. Please let me know how this sits with you!
Though I am an evolved being, none of your examples seem to be terminal values for me, the whole organism. Certainly there are many systems within me, and perhaps we could describe them as having their own terminal values, which in part come from evolution as you describe. My metabolic system’s terminal value surely has a lot to do with regulating glucose. My reproductive system’s terminal value likely involves sex/procreation. (But maybe even these can drift: when a cell becomes cancerous, its terminal value seems to change.)
But to me as a whole, these values (to the extent that I hold them at all) are instrumental. Sure I want homeostasis, but I want it because I want to live (another instrumental value), and I want to live because I want to be able to pursue my terminal value of happiness/flourishing. Other values that my parts exhibit (like reproduction), I as a whole might reject even as instrumental values; heck, I might even subvert the mechanisms afforded by my reproductive system for my own happiness/flourishing.
Also, did my terminal value of happiness/flourishing come from evolution? Did it start out as survival/reproduction and drift a bit? Or is there something special about systems like me (which are conscious of pleasure/pain/etc.) such that, just by their nature, they desire happiness/flourishing, the way 2+2=4 or the way a triangle has 3 sides? Or...other?
And lastly does any of this port to non-evolved beings like AIs?
Do you think the “basket of goods” (love the pun) could be looked at as instrumental values that derive from the terminal value (desiring happiness/flourishing)?
I don’t understand shard theory well enough to critique it, but is there a distinction between terminal and instrumental within shard theory? Or are these concepts incompatible with shard theory?
(Maybe some examples from the “basket of goods” would help.)
Congrats on a great pick, up another ~50% YTD. What’s your mental model on when to back off on something like Micron? In previous cycles they looked classically cheap (i.e., a high-single-digit P/E based on cycle-high net margins) right before supply caught up to demand and the cycle turned against them. But obviously this cycle may be...well, “different.”
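To spell out the cycle-peak trap with deliberately round, made-up numbers (not Micron’s actual financials):

```python
# Illustrative only, not Micron's real figures: why a low trailing P/E
# at a cycle peak can flatter a memory stock. The same market cap looks
# cheap or expensive depending on which margin you treat as "normal."

revenue          = 30e9   # hypothetical annual revenue
peak_net_margin  = 0.30   # cycle-high net margin
mid_cycle_margin = 0.10   # rough through-cycle average
market_cap       = 90e9   # hypothetical

pe_on_peak = market_cap / (revenue * peak_net_margin)   # 10.0x: looks "cheap"
pe_on_mid  = market_cap / (revenue * mid_cycle_margin)  # 30.0x: not so cheap

print(f"P/E on peak earnings:      {pe_on_peak:.1f}x")
print(f"P/E on mid-cycle earnings: {pe_on_mid:.1f}x")
```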
One thing I’m interested in is HBM, which I know is a small part of their revenue now, but seems like it will be expanding (something like exponentially) for years to come.
These trends and how the market will treat them are of course really hard to predict, but you’ve been exceptionally right on Micron so far so I’m very interested in your thoughts.